Artillery Row

The worm in tech

Technology can magnify the effects of human evil

“Every once in a while, a revolutionary product comes along that changes everything.” These were among the opening words of Steve Jobs’ 2007 launch of the first iPhone. Unlike most who claim their product will change the world, Jobs was right. Since that year, Apple’s annual launches have become a mainstay of the global tech calendar, and presentations like his have become commonplace across the industry.

Imagine if, instead of a phone, a man is dragged on stage, skinny and dirty. Imagine if, instead of the unassuming Jobs, a man in a military uniform strides to centre-stage. He tells the world that he, too, has a technology that will change everything. His technology allows you to monitor and control an entire population, even in their own homes. It allows you to pick faces (like that of the man cowering at his feet) out of a crowd on the basis of their ethnicity or emotional state. He proudly proclaims that this technology scans all the data on an individual’s phone without their knowing, assigns them a risk level and tells police when to arrest them — it even interviews them, examining their physiological responses to questioning.

This is the daily reality for Uyghur people in the Xinjiang region of China. The Chinese Communist Party has created a fully automated system, powered by increasingly sophisticated artificial intelligence, that categorises risk, gathers evidence, interviews and prosecutes, all without human intervention. What’s more, the Uyghurs are then showcased to every rogue government in an attempt to sell this technology across the world. They are the involuntary beta-testers, the lab rats, of increasingly invasive and powerful technologies. And the tech is selling. Chinese surveillance technology developed in Xinjiang is allegedly being used in Myanmar, Iran and Russia, as well as countless other countries. As China exports its tech, it exports its methods.

As the self-proclaimed “tech bros” (and Kamala Harris) met in London last week, AI was once again thrust to the forefront of our national conversation. Although Prime Minister Sunak is hoping for a COP-style, cross-government resolution, the summit will also, undoubtedly, be used to reassure us all that, as Tesla tycoon Elon Musk put it, the risk of us all being killed by AI is “low”. Already we are being assured that its main role will be to make menial tasks that little bit easier. If ethnic cleansing is a menial task, then AI has already proven itself keen to lend a hand. Surely we are closer than we think to the job of executioner being outsourced too.

The main concern among our tech leaders at present is censorship and disinformation, particularly on social media. Balancing the right to freedom of speech with the need for a modern populace to be well-informed is not going to be easy. Personally, I tend towards greater freedom of speech. That being said, let me proffer a cautionary note.

In parts of the world, Facebook acts as the de facto internet for most of the population; India is one such country. With the rise of Hindutva extremism, online disinformation is beginning to result in real-world violence against minority Christian and Muslim populations. The militant RSS and the ruling BJP have become experts in shaping public discourse through the proliferation of disinformation. On WhatsApp, videos are frequently shared which falsely claim to show Christians forcibly converting Hindus. One teenager, Sukumar, was abducted, crushed to death and dismembered in 2020 after social media reports claimed that he had used “Christian witchcraft” to kill Hindu inhabitants of his village; this was even repeated in the mainstream media. In fact, the villagers had died from consuming tainted water. Another man, Ravi, was murdered by a “Gau Rakshak” (a cow protector) after it was alleged that he was involved in the sale of an ox on social media. He was attacked by a mob of over 100 people, and the Christian women in his village were publicly sexually assaulted to “shame” the community. The first Ravi’s wife knew of the attack was a video uploaded to social media.

Perhaps this kind of disinformation has always existed and will always find a way of spreading. Censorship, however, can be just as harmful. In China, video calls are shut down when keywords like “Christ”, “Jesus” or “Bible” are detected. Likewise, the Chinese characters for “Christ”, “Jesus” and “Christian(s)” were changed or simply removed from the online bookstore of the Chinese app WeChat in 2020. In Pakistan, online censorship has been used to try to scrub any evidence of Ahmadi faith and culture from the internet altogether.

We might do well to focus on the problems AI already presents before planning for a Terminator-style apocalypse (as Jordan Peterson would say, “first, clean up your room!”). China, which is present at this conference, will continue to develop ever more dangerous and Orwellian technologies and will continue to export them to rogue elements worldwide, who will use them to destroy minority groups. Imagine what history’s least savoury megalomaniacs could have done with the kind of technology that will emerge in the next decade. It doesn’t bear thinking about. The West must act now — industry must act now to protect the most vulnerable from the most powerful.
