A staff member adjusts a robot developed by CloudMinds Technology Inc. during the 2020 World AI Conference at Shanghai EXPO Centre on July 9, 2020 in Shanghai. (Photo by Yang Jianzheng/VCG via Getty)

The real moral danger of ‘artificial intelligence’ is not what you think it is

What does AI’s possibility make us think about ourselves?


“The artificial in artificial intelligence is very real” – John Lennox, Professor of Mathematics, University of Oxford.

Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (later turned into the film Blade Runner), published in 1968, is a dystopian fable with a very modern resonance. Like all good science fiction, it is a brilliant work of philosophy – an extended “thought experiment”. The primary protagonist, Rick Deckard, is given a dispensation to “retire” (eliminate) an android species – a collection of humanoids which seem to be just like us but which in fact (and in ambiguous ways) are not. The dilemma he is forced to wrestle with is this: if these androids are that much like us, then why aren’t they people too? And, if they are, what right do I have to “retire” them?

The philosophical questions raised by the book (two of them anyway) are these: at what point does a machine – android in this case – become genuinely intelligent? And, assuming this singularity is the inevitable endpoint of AI research, what rights should such a machine enjoy?

And my answers are these: a machine cannot, in principle, ever be “intelligent” if by “intelligent” you mean “conscious that it is intelligent” (and if you’re not conscious that you are intelligent you aren’t that intelligent), and therefore the question of its moral status does not arise (at least not in any problematic sense). If there is a moral problem that emerges from the attempt to build conscious machines it isn’t to do with them, it’s to do with us. To insist that a machine can become conscious is to urge that we think of ourselves as machines. But the reasons why a machine can never be conscious are the same reasons why persons can never properly be thought of as machines.

Philip K. Dick’s androids are embodied computers. What is a computer? Any computer is a physical instance of a “Universal Turing Machine”: anything that can be interpreted as processing symbols according to a specified set of rules (an algorithm) counts as one. An abacus is a computer; a “supercomputer” is no more than an abacus on steroids. The chess computer “Deep Blue”, which “beat” Garry Kasparov in their 1997 match (although it did not know it had), was an efficient but unintelligent manipulator of symbols.
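To see how little this amounts to, here is a minimal, purely illustrative sketch of what “processing symbols according to a specified set of rules” looks like: a toy rule-following machine written in Python, with a rule table invented for the occasion. It increments a binary number flawlessly, yet nothing in it knows what a number, or even a rule, is.

```python
# A toy, hypothetical "rule table" machine: it rewrites symbols on a tape
# according to fixed rules, and nothing more. Here the rules increment a
# binary number -- the machine executes them flawlessly without any grasp
# of what a number, or a rule, is.

def run_machine(tape, rules, state="start"):
    """Apply the rule table until the machine reaches the 'halt' state."""
    tape = list(tape)
    head = len(tape) - 1  # begin at the rightmost symbol
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else "_"  # "_" means blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < 0:  # ran off the left edge: grow the tape
            tape.insert(0, new_symbol)
            head = 0
        else:
            tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# Binary increment: turn trailing 1s into 0s, then the first 0 (or blank) into 1.
rules = {
    ("start", "1"): ("0", "L", "start"),
    ("start", "0"): ("1", "R", "halt"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_machine("1011", rules))  # prints 1100 (11 + 1 = 12, in binary)
```

A “supercomputer”, on this picture, differs only in having more tape, more rules and more speed.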

This is not to beg the question against those who think that, just because computers process those rules ever more efficiently, something like “intelligence” must arise within them. For something to be a “rule” in the first place it is necessary that some intelligence can already interpret it as one. Rules do not determine their own application; if applying a rule required a further rule to tell you how, you would need yet another rule to interpret that one, and so on without end. The logician Saul Kripke, in his Wittgenstein on Rules and Private Language, makes this point brilliantly. Even the rules of arithmetic require an interpretation. And all interpretation presupposes some intelligence – conscious intelligence – which does the heavy lifting of the interpreting. A computer doesn’t even count as a computer until someone – some conscious intelligence – decides to see it that way. Aristotle made the same point in slightly different terms: rules require, for their animation, that they become illuminated by the human intellect.

Hackles will now be rising amongst those who think I’ve misrepresented the “advances” made by the proponents of “strong AI” (those who argue that we can construct genuinely conscious machines). I’ve misunderstood, they will argue, the hyper-complexity of the current state of AI research. Since Turing, they will say, we have “moved on”.

So, let’s drill down a bit more. Surely the question of whether a machine can think is determined not by the machine, but by what “thought” is? And what thought is, presumably, doesn’t change that much over time?

The theory in the philosophy of mind which most conduces to the idea that a machine could think (and know it’s thinking) is called functionalism. This view holds that a mental state is defined by what it does: the mind becomes a sort of ticker tape, a set of mediating mechanisms between what happens to you in the form of sensory input and how you respond in the form of your subsequent behaviour.

But the human mind cannot be exhaustively described in those reductive terms. The rich phenomenology of consciousness is more than a set of instructions. The human soul does not implement some metaphysical algorithm. We are complicated in ways which do not reduce to any of that. We love, we feel envy, we anticipate, we feel generous, we engage with each other in many subtle ways, some of which are not detachable from our existence as embodied persons. How do you put into an algorithm the nature of gesture? How can a computer recapture that ineffable bit when your lover smiles at you? Or a few days later when, in a state of inexplicable anxiety, she says “it’s not you, it’s me”?

For those reasons, functionalism is an implausible theory of the mind unless you already think the mind is an algorithm, and you’re prepared to bracket out the most interesting parts of your mental life in service of that view. We are strange creatures who see things as though “through a glass darkly”. The essence of the human person is to be a centre of contradictions. The essence of the functionalist view of the mind, in contrast, is that no such contradictions “compute”. A machine could only be “conscious” if it were able to embrace contradictions rather than treat them as glitches. I’ve not seen that happen yet: when machines meet contradiction, they tend to crash and burn.

The Amazon Alexa. (Photo by Aytac Unal/Anadolu Agency via Getty)

The real ethical question that arises as AI machinery becomes ever more deeply inserted into our day-to-day lives is not whether “Alexa” can really think. “She” can’t. And never will. But she is part of a con trick on the part of those computer scientists who see no significant metaphysical difference between ourselves and the machines that encroach, increasingly, on our previous ways of seeing things. They are trying to put us through the wrong-shaped hole. There is a conceptual impossibility when it comes to making machines think like persons, so the alternative has become to insist that we are not persons but machines.

There is something deeply different about the human person, as compared to the rest of the natural world. What does that difference amount to? The Christian might appeal to Genesis 1:26 and the idea that God made us in his own image, so that it would be a form of heresy to assume we can pass that gift on to any other part of Creation. The Darwinian might say that it is the specific chemistry of the human brain which makes it special, a gift of millions of years of unguided evolution. These two views might even be reconcilable.

And if they are not reconcilable, they nevertheless share a certain humility: the metaphysics is as it is, and we can’t change things to the point of overturning the natural order.

The tragedy of Rick Deckard is that he gets this. He knows that the “replicants” he is licensed to kill cannot be real people. Otherwise he’d be a murderer. And yet their proximity to the normal rhythms of human life makes him question whether he is one of them.

This is the real moral question when it comes to AI: not what its possibility makes us think about machines, but what it makes us think about ourselves.
