AI: is the end nigh?

Evaluating the threat to mankind from artificial intelligence

This article is taken from the August-September 2023 issue of The Critic.


Does AI pose a mass extinction threat? Or is this concern merely the latest manifestation of humanity’s need to frighten itself witless?

As the year 2000 approached, the world fretted over the Y2K, or Millennium, Bug. Neurotics and newspapers alike predicted that power plants, banks and planes would fail as 1999 became 2000, ushering in pandemonium and death. John Hamre, the US Deputy Secretary of Defense from 1997 to March 2000, foresaw that “the Y2K problem is the electronic equivalent of the El Niño and there will be nasty surprises around the globe”. There weren’t, and there was little difference in outcome between countries that invested millions of dollars and countries that invested nothing.

In the 23 years since then, we’ve gone from “computers are so stupid the world will end” to “computers are so clever the world will end”. But the hysteria remains the same.

The latest apocalyptic horror, on the heels of Covid-19 and climate catastrophe, is whether “non-human minds”, as Elon Musk pitches it, “might eventually outnumber, outsmart, obsolete and replace us”. He co-signed an open letter with other tech leaders warning that machines might “flood our information channels with propaganda and untruth” (in contradistinction to humans doing so).

The letter set out “profound risks” to society, humanity and democracy, which in turn led to a multitude of hyperbolic headlines, such as the BBC’s “Artificial intelligence could lead to extinction, experts warn”. The Center for AI Safety warned starkly: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

AI does pose threats, as well as tremendous opportunities, but the threats may be quite different to the doom-and-gloom headlines. First, there is no certainty that AI will develop the capabilities we are being extravagantly warned about. Even the Future of Life Institute, which published the open letter, admits that super-intelligence is “not necessarily inevitable”.

Thus far, AI has had a free ride on human achievement and creativity. There is no AI without humans. There is no generative language AI without human language. There is no writing in the style of John Donne without John Donne. In fact, ChatGPT and Bard do a terrible impersonation of metaphysical poetry, although their limericks are passable. There is no AI art, music or fiction without everything that has gone before. In short, the achievements are still ours.

The panic is focused on what might be. AI is an extremely advanced tool, but it is just a tool. It is the humans holding the tools with whom we need to concern ourselves. New technology has sometimes been put to horrible uses, such as the gas chambers. New communications technologies have been channels for propaganda, but they were not the propaganda itself. Nevertheless, some threats are real.

The first is that AI systems are now becoming human-competitive at general tasks. IBM’s CEO, Arvind Krishna, recently told Bloomberg that he could “easily see 30 per cent of jobs getting replaced by AI and automation over a five-year period”. And according to a report by Goldman Sachs, AI could replace the equivalent of 300 million full-time jobs.

It turns out that the very IT, software, media, creative and legal people now worried about AI might find themselves facing increased competition from it. For example, ChatGPT will help people with average writing skills produce better articles, which will probably lead to more competition and lower wages.

AI is also a brainwasher’s dream. Advocates for regulation want you to think that AI is about to attain sentience and write new religious tomes, invent propaganda and disrupt elections, all because it wants to, for its own devious reasons. In fact, the brainwashing threat is quite different.

AI can be laced with psychological techniques such as “nudging”. Nudging influences your behaviour by altering your environment, or “choice architecture”, in ways that exploit our natural cognitive biases. Algorithmic nudging is a potentially potent tool in the hands of paternalistic libertarian do-gooders or authoritarians.

Nudges will scale in a way their real-world counterparts cannot, while being completely personalised. Facebook knows you better than anyone except your spouse from a mere 200 likes splattered on its pages, even to the extent of knowing your sexuality. As I warn in my book Free Your Mind, if you don’t want AI to know you better than anyone else, tread lightly on social media and use it mindfully.

It is interesting that the threat of AI is likened to “nukes”, yet academics have been writing for years about algorithmic nudging, which presents clear ethical dilemmas around consent, privacy and manipulation, without clamouring for regulation.

Algorithms already create completely personalised platforms. Twitter is often described as a public square, but it more closely resembles a maze in which the lights are off and the walls move, seemingly arbitrarily. Aside from the disturbing evidence presented in the release of the “Twitter Files”, particularly concerning how Twitter “deamplifies” content it does not like, heavy users of the platform will attest to the inexplicable rise and fall of follower counts and the suppression of juicy tweets. Content, it seems, is pushed up or down according to the preferences of Big Tech and government agencies, and algorithms make this effective at scale. AI is killing transparency and pluralism.

In our relationship with AI, our biases create danger. The “authority bias” means we see AI as more powerful than it is, and therefore we are more likely to succumb to manufactured and exaggerated fears. We also anthropomorphise AI: Google engineer Blake Lemoine was prepared to lose his job because he believed that LaMDA, an AI chatbot, was sentient.

AI is not human-like, but it is our human tendency to believe that it is. One study has shown that since lockdown, people show a higher preference for anthropomorphised brands and platforms. The more we disconnect from each other through tech, the more we want tech to resemble us. Men already have AI girlfriends, and one Belgian man was “persuaded” to kill himself by an AI chatbot called Eliza after he shared his fears about climate change. Alarming though this is, is it any more so than a technological upgrade of last year’s sex dolls or emo music?

AI might make us stupid. As we rely ever more on our phones, our own capabilities may decrease. One study has shown that merely having your phone nearby reduces cognitive ability. As we outsource homework, research and even parts of our jobs, will we use our brains to create more wonders of the world, or to vegetate longer on TikTok?

Our biases make us vulnerable to the perceived threats of AI, but so do the times in which we find ourselves. We no longer seem to have sufficient collective belief in our special status as human beings. Another co-signatory of the open letter is the historian and author Yuval Noah Harari, who has described humans as “hackable animals”. If you see humans as soulless organic algorithms, then you might indeed feel threatened by AI, which certainly constitutes superior algorithms unconstrained by mortal flesh.

Harari believes that “humans will no longer be autonomous entities directed by the stories the narrating self invents. Instead they will be integral parts of a huge global network.” This is a far-reaching hypothesis, and perhaps explains why Harari does not own a smartphone, for all his apparent enthusiasm for a transhumanist chipped-brain future.

He has claimed that AI may even try to write the world’s next Bible. Humans are quite capable of starting religious wars on their own. So far, all AI has managed is to put the Pope in a white puffer jacket.

Harari’s dire warnings keep him in the spotlight as a forward-looking muse to the world’s elite. After all, describing AI as merely an intelligent system which, for now, can write a passable undergraduate essay doesn’t seem epoch-defining. Equally, those calling for regulation stand to benefit from investment, government contracts and control over the direction that regulation takes.

Casting AI as a god is indicative of our tendency to fear the End of Days, combined with a crisis of confidence in ourselves and an overdeveloped “authority bias”. AI is no god; it is a fleet of angels, poised to swoop and intervene in the lives of humans at the bidding of the priest caste who direct it.

It is the priest caste we should look to. What do the tech leaders and politicians of the world want? They don’t want to stop AI altogether, of course. They want to pause development and the release of updates while they work together to “dramatically accelerate development of robust AI governance systems”. They want a seat at the table to write a new moral code. 

As a priority, they want the right sort of people — academics, politicians and tech leaders — to be doing this. Comparing AI to “nukes” rather than explaining its nudging capabilities tells you all you need to know about the transparency of the regulation, and the sort of safety it aims to achieve.

Whether AI is viewed as an intelligent assistant or angel, it is in the employ of humans.

Free Your Mind: The new world of manipulation and how to resist it by Laura Dodsworth and Patrick Fagan is out now (HarperCollins) from all good bookshops.
