Jack Dorsey, CEO of Twitter, at Ryerson University in Toronto, Ontario, Canada, 2 April 2019 (Cole Burston/Bloomberg via Getty Images)

The Silicon Valley English Dictionary

Twitter’s engineers agree that prejudice is hard-coded in our language

Artillery Row

The natural reaction to someone telling you how to use language is very often to tell that person to take a long walk off a short pier. We take language personally. Suggesting even slight amendments to someone’s grammar or vocabulary is likely to elicit a reproach. It’s probably better not to be too prescriptive about language. Let sleeping dogs lie, we tell ourselves, and with good reason.

Twitter, whose CEO, Jack Dorsey, recently committed to giving $1 billion of his own wealth to charitable causes, sees things differently. The company announced last week that it would abolish the use of terms including “master”, “slave”, “blacklist”, “sanity check” and “dummy value”.

Each of these terms has a technical meaning in the computer code that allows Twitter to function, so the process of transformation will involve more than just referring to things by new names. The changes will necessitate editing Twitter’s vast library of existing code, consuming thousands upon thousands of man-hours (henceforth to be referred to as “person-hours” or “engineer-hours”, to avoid gendered language).

Though the changes were first mooted before the recent Black Lives Matter protests, the argument for them gained impetus from the movement. Getting rid of the old terms falls under the category of dismantling the structure of oppression. In the eyes of some of Twitter’s engineers, the programming code they used was racist, sexist and prejudiced against disadvantaged groups.

A fair question would be whether the money might not be better spent fighting more concrete examples of social injustice. According to the UN’s International Labour Organisation (ILO), some 40 million people are currently held in some form of modern slavery, over 70% of them women. The ILO calculates the value of this labour at $150 billion annually, $45 billion of it generated in the EU and other developed countries. Not only is slavery alive and well, it’s highly lucrative. By comparison, programming terminology seems negligibly important.

And yet, it’s hard not to be sympathetic to African American coders who feel offended by the use of “master” and “slave” as terms to describe one process controlled by another. One of the two Twitter employees who spearheaded the proposed changes said the tipping point for him was an email with the subject line “automatic slave rekick”. In Silicon Valley, where African Americans are underrepresented, it may grate to encounter this kind of language, perhaps as the only black member of your team. And of course it’s in Twitter’s interests to look after its employees. We ought to say good for Twitter for trying to make the company more inclusive.
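For readers unfamiliar with the jargon, here is a minimal sketch, in illustrative Python with hypothetical names (not Twitter’s actual code), of the relationship the terms describe: a controlling process, historically the “master”, dispatching work to subordinate “slave” processes, now more commonly renamed “primary” and “replicas”.

```python
# A minimal sketch (hypothetical names, not Twitter's actual code) of the
# master/slave idiom: one coordinating process controlling subordinate ones.
# "Primary" and "Replica" are the replacement names now widely preferred.

class Replica:  # historically named "Slave"
    def apply(self, op: str) -> None:
        print(f"replica applied: {op}")


class Primary:  # historically named "Master"
    def __init__(self) -> None:
        self.replicas: list[Replica] = []

    def attach(self, replica: Replica) -> None:
        self.replicas.append(replica)

    def write(self, op: str) -> None:
        # The primary commits the write, then pushes it to every replica:
        # the "one process controlled by another" relationship.
        print(f"primary committed: {op}")
        for replica in self.replicas:
            replica.apply(op)


primary = Primary()
primary.attach(Replica())
primary.write("SET user:42 name=Ada")
```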

Other companies in the tech space are pursuing similar initiatives. Microsoft’s GitHub has abandoned the master-slave paradigm, as have teams at Google working on the Android operating system. Even a bastion of progressive values such as JPMorgan Chase, the second most fined bank in the world (over $40 billion and counting), is getting in on the act: its programmers too have abandoned master-slave.

But there are potential pitfalls in entrusting the policing of the English language to the programmatic minds of software engineers. Each of these companies has also done away with the terms “blacklist” and “whitelist”, which commonly refer to lists of external websites that are automatically denied or approved access. This has been done on the basis that the terms are coded forms of racism. In the judgment of these engineering departments, the association of something bad with black and something good with white serves to reinforce harmful racial stereotypes.
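In practice the pattern is mundane. A minimal sketch, with hypothetical names rather than any company’s real code, of the list-based access check in question, using the replacement terms “denylist” and “allowlist”:

```python
# A minimal sketch (hypothetical names, not any company's real code) of the
# blacklist/whitelist pattern described above, using the replacement terms.

ALLOWLIST = {"example.com", "news.example.org"}  # formerly the "whitelist"
DENYLIST = {"malware.example.net"}               # formerly the "blacklist"

def is_permitted(host: str) -> bool:
    """Deny anything on the denylist; otherwise require an allowlist entry."""
    if host in DENYLIST:
        return False
    return host in ALLOWLIST

print(is_permitted("example.com"))          # True: allowlisted
print(is_permitted("malware.example.net"))  # False: denylisted
```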

If “blacklisting” is bad then being black must be bad

In trying to ban a word with negative connotations just because it has “black” in it, the engineers have fallen into a flawed logic which has likely arisen from the very nature of coding. A coder must define a variable before it can be used within a computer program. That definition must be singular and consistent so that, when the variable is referred to, the program knows what information to call upon. One thing cannot mean two different things. A variable like “black” can only have a single definition. And if to “blacklist” something is a bad thing, then “black” must, logically, be bad. In the real world, though, one word can have many different meanings. These programmers are confusing two very different types of blackness: one describing an absence of light and colour, the other describing people of colour.
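A toy illustration of the point, again in hypothetical Python rather than anyone’s production code:

```python
# A toy illustration (not production code) of the single-definition point:
# within a program, a name is bound to exactly one thing at a time.

black = "#000000"  # here "black" is simply a colour value, nothing more
print(black)       # -> #000000

# English is not like this: "black" in "blacklist" (denial, absence of light)
# and "black" as a description of people are distinct senses that coexist,
# something a single variable binding cannot express.
```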

The association of light with knowledge and darkness with ignorance is not a construct of Western imperialist thought. It exists across cultures. And it has nothing to do with skin tone. Obliterating this metaphor would involve us, much like the engineers scanning their code, in going through the entire English language to eliminate any linguistic associations that include the idea that light is good and darkness is not.

This is not as far-fetched as you might imagine. Twitter has also banned the use of “sanity check” to describe a process of checking whether code will run correctly. The word sanity comes from the Latin “sanus”, meaning “sound” or “healthy”. God forbid anyone should consider sanity a good thing. Twitter is trying, misguidedly, to be inclusive of anyone who has suffered mental illness. It seems doubtful this is a change that people suffering from mental illness either wanted or will appreciate. Similar logic must have been at work in the decision to eliminate “dummy value”, presumably on the grounds that it offends deaf people and those who cannot speak.
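For the curious, a minimal sketch (hypothetical, not Twitter’s code) of the two idioms in question:

```python
# A minimal sketch (hypothetical) of the two idioms in question: a "sanity
# check" that fails fast on obviously invalid input, and a "dummy value"
# standing in where real data does not yet exist.

def average(values: list[float]) -> float:
    # Sanity check: catch an obviously broken call before doing any work.
    assert values, "sanity check failed: empty input"
    return sum(values) / len(values)

DUMMY_USER_ID = -1  # dummy value: a placeholder, not a real user ID

print(average([1.0, 2.0, 3.0]))  # -> 2.0
```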

The argument for getting rid of something should not be: is there a single person whom it might currently upset?

And of course, the greater the number of disadvantaged groups that are catered to, the harder the changes will be for opponents to resist. But that logic leads inevitably to the process of trying to find offence on behalf of others, often where none exists. Twitter’s litmus test should not be: can someone conceivably be offended by this if they try hard enough? Because we know that such a person now surely exists for almost every scenario.

The reason all this matters is that bans at Google, Microsoft, and even on Wall Street on “whitelist” and “blacklist” may lead other companies and institutions to follow suit. And when they do, expressions which in fact have no racial connotations may come to be designated as racist. Needless to say, these are just the kinds of counterproductive measures that hinder the very causes they claim to champion.

Whom do these changes benefit? They will not be seen by any users or customers of these companies, only by the people working at them: privileged and well-paid employees in the tech sphere. Indeed, they would likely have passed unnoticed had they not been announced to the world.

Whom do these changes harm? Genuine victims of racism and discrimination, whose cause is undermined by association with these kinds of misconceived and highly theoretical arguments, and who must be frustrated to be offered such meaningless tokens instead of the genuine change they want to effect.

We are told that people in Silicon Valley are waking up to the idea that programming languages (and by extension English and other languages) contain inherent systemic racism. What are we to do? Tell them to take a long walk off a short pier.
