AI is far older than we think

When it comes to artificial intelligence, we’re scared of the wrong things

With the advent of realistic chat-bots, voice simulators and art at the push of a button, it’s understandable that fears around AI have surfaced. These breakthroughs represent a tipping point into a new era of automation and virtual fakery, with worrying implications for everything from work to warfare. AI is badly misunderstood and misrepresented, however, not least by many of those working in the tech sector.

The main problem is definitional. Our fear of AI, expressed in sci-fi films like Terminator, I, Robot and The Matrix, is of the creation of a new human-like intelligence, an electronic mind capable of hostility and malice. This atavistic terror is rooted in our deepest fears and instincts, going back to stories of ghosts, demons, monsters, evil spirits and vampires. On-screen confrontations vary between presenting the artificial intelligence as evil, misunderstood, or hostile but capable of reconciliation. This whole paradigm is deeply misleading, because it is not clear whether artificial intelligence in this sense will ever come into existence, or is even possible in the first place.

Why should a computer, no matter how sophisticated, be capable of thought? We know, after all, what sort of beings have thoughts — organic ones, with brains and nervous systems. Despite clumsy analogies between the operations of the brain and machines, there is a vast qualitative difference between a processor and a brain. 

AI is less intelligent, and less conscious, than a bacterium or a virus

Take something as simple as learning to catch. You could program a computer to measure and calculate the speed, trajectory, mass and so on of a ball, and catch it with a robotic arm. When you chuck a tennis ball in the direction of an eight-year-old, though, they aren’t doing calculus in their head. In fact, in the realm of elite sport, there have been all sorts of attempts to understand why balls thrown through the air seem to behave in certain ways in different circumstances; it has taken decades of still-ongoing research to understand things like aerodynamics. Long before the scientists got involved, professional sportsmen were intuitively using and responding to effects that the scientists would only make sense of years later.

Why are many in the world of tech, and in the media, convinced that we could create a human-like intelligence by inventing a sufficiently sophisticated computer or software programme? The answer is that many in these areas are hardline materialists. The most extreme proponents of this philosophy, like philosopher Daniel Dennett, reject the reality of human consciousness altogether. For them, our experience of thinking is actually a sort of epiphenomenon, and human beings are simple confluences of physical forces that have randomly evolved. If consciousness is an illusion, then the simulation of intelligence is intelligence. 

Ironically, this apparently unfalsifiable proposition looks to be pretty well falsified by the advent of so-called AI. No matter how remarkable the things you can get AI to do (and the tech is impressive), it is not an organism. It lacks senses, desires, a sense of self — consciousness. It is not alive, and it is not intelligent. It is less intelligent and less conscious, in fact, than a bacterium or a virus.

Much of what we talk about in relation to the moral dangers of AI relates to something else entirely — what you might call Synthetic Intelligence (SI). A synthetic intelligence would most likely be biological, the result perhaps of the alteration of existing beings. The word “robot” itself was originally employed to describe flesh-and-blood artificial beings, in the Czech play “Rossum’s Universal Robots”. To this we might add Frankenstein’s monster, the apes in Planet of the Apes, and the benighted inhabitants of the Island of Doctor Moreau as examples of Synthetic Intelligence. Though still in the realm of science fiction, these are forms of artificially created mind we might actually be capable of bringing into existence. We should be much more frightened of the implications of biological science than of computing when it comes to the possibility of creating minds.

If we accept this, what is the thing we currently call AI? How can something thoughtless appear intelligent? Should we still be worried about it? Much of our confusion is banished when we start thinking of AI as a very complex and ongoing effect of human intelligence. There is intelligence there, but it is human intelligence that we are experiencing, even at a previously unimagined remove in time and space. Once we understand this, we also grasp something very important about it. AI is not a new technology, but a very old one. Consider, for example, the words of the fourth-century theologian St. Gregory of Nyssa:

Such effects, for instance, as we often see produced by the mechanists, in whose hands matter, combined according to the rules of Art, thereby imitates Nature, exhibiting resemblance not in figure alone but even in motion, so that when the piece of mechanism sounds in its resonant part it mimics a human voice, without, however, our being able to perceive anywhere any mental force working out the particular figure, character, sound, and movement.

Gregory is talking about what we might call AI. He’s also alive to the Dennett-style interpretation of it: 

Suppose, I say, we were to affirm that all this was produced as well in the organic machine of our natural bodies, without any intermixture of a special thinking substance, but owing simply to an inherent motive power of the elements within us accomplishing by itself these operations — to nothing else, in fact, but an impulsive movement working for the cognition of the object before us; would not then the fact stand proved of the absolute nonexistence of that intellectual and impalpable Being, the soul, which you talk of?

For Gregory, who lived long before the mechanistic fallacies that now dominate modern science and philosophy, the machine analogy explodes rather than confirms the materialistic interpretation of human intelligence:

Because, you see, so to understand, manipulate, and dispose the soulless matter, that the art which is stored away in such mechanisms becomes almost like a soul to this material, in all the various ways in which it mocks movement, and figure, and voice, and so on, may be turned into a proof of there being something in man whereby he shows an innate fitness to think out within himself, through the contemplative and inventive faculties, such thoughts, and having prepared such mechanisms in theory, to put them into practice by manual skill, and exhibit in matter the product of his mind.

Gregory fully exposes the absurdity of the comparison when he points out, “if it were possible to ascribe such wonders, as the theory of our opponents does, to the actual constitution of the elements, we should have these mechanisms building themselves spontaneously; the bronze would not wait for the artist, to be made into the likeness of a man, but would become such by an innate force.”

AI is only a more complex and interactive form of our very oldest technologies of symbol-making. When a person leaves an image or a symbol carved into a rock, that surface will now perpetually communicate its message to any future observer. Long after its creator is dead, the same idea, image, thought will live on again in the minds of later observers. When we read a book today, the text speaks to us. To read Plato or Dickens is to encounter an intelligence by means of artifice, but that intelligence is human. The more sophisticated the artifice, the more (not less) it bears the mark of human intent and intellect. 

This does not make AI safe — quite the opposite. The very thing that makes AI so dangerous is its lack of consciousness, even as it becomes capable of ever more complex and human-like actions. This opens up three main theatres of existential risk and moral hazard, which have always existed but are now expanding to new extremes.

The first risk is in the field of warfare and security — accelerating all the most disturbing trends of coercive power in the modern world. From the invention of artillery to the joystick warfare of remote-controlled drones, the battlefield has been evolving fast towards the minimisation of human judgement and agency. Modern AI opens the door to fully automated weapons systems. Dictatorial regimes now have access to soldiers incapable of disobeying orders, and to firepower that can be unleashed without the hesitations of human conscience or mercy.

No less dangerous is the potential for AI to be employed in policing and surveillance. The greatest limiting factor on mass surveillance so far has been the labour required to comb through vast data sets. The internet has in theory made available vast amounts of personal data, but the very scale of the information means that the ability to actually surveil or monitor populations is subject to inherent limits. In Communist Romania there was an agent or informer for every 43 citizens; in East Germany there was one for every six. Organisations like GCHQ and the NSA have long relied on forms of automation, such as using software to flag up conversations containing particular keywords. With increasingly sophisticated AI, that process could in theory become vastly more efficient, making true, panopticon-style mass surveillance practical for the first time.

The second existential threat is economic. It is not only graphic designers and call-centre workers who risk being phased out: many, perhaps even most, white-collar workers are now at least theoretically vulnerable to automation. The utopian hopes that automation would release us to a life of leisure and material abundance have not been borne out thus far — and the “jobs of the future” may be even more menial in the context of AI automation.

The third and most subtle danger, which is already being realised in many trivial instances, is that of being drawn into a still more all-consuming world of digital deception and abstraction than the one we already inhabit. The ongoing consumption of film and television by franchises, repeats, rehashes and nostalgic cultural “quotation” could become truly and wholly synthetic now that images, voices and, no doubt one day soon, entire videos can be generated at the press of a button.

With AI likely to be integrated into ever more software and hardware, the passivity of consumption and work is likely to reach a previously unimaginable extreme. What need is there for an educated workforce at all, if most machines practically run themselves, not only responding instantly and correctly to commands but increasingly acting to anticipate and manipulate our desires? The distinctions between work, leisure and consumption will be blurred still further, with every moment of our lives potentially monitored and monetised by omnipresent AIs. What happens when Alexa really becomes the virtual butler it was always promised to be?

For anyone who places their faith in law and politics to intervene, it is here that AI will perhaps most immediately and violently cast prior institutions into the void. With digital fakes of writing styles, voices and photographs already readily available, and becoming ever more convincing, “fake news” is about to take on a whole new meaning. Even the most sharp-eyed journalists could be taken in — assuming journalists are even still part of the process.

The sheer volume of fake stories, videos and audio clips that can potentially be created (and made available to millions of highly politicised social media users) will be a temptation that most will not bother to resist. Rival nations will barely even need to bother employing people to generate propaganda and misinformation — there are more than enough radicalised people online to do it all by themselves.

AI is not new. It is the ancient ghost in the machine, the magician’s apprentice swept away by his own broom, the golem tearing down the walls of Prague. Humanity being enslaved by its own desires and malevolent impulses, embodied by the tools and technology we pridefully wielded, is one of the oldest and most rational fears we have. With the advent of AI, we are seeing the triumph of the virtual and the rise of an unreal society in which our humanity is in peril. Do not mistake the danger, however — it is our own desires that will destroy us, our own hand acting through the machine. 
