
The limits of logic

We are at a crucial moment in the history of artificial intelligence



After decades of false starts, artificial intelligence is finally flourishing. Algorithms are emulating human writing, art and music with unnerving accuracy. DeepMind’s AlphaGo Zero “taught itself” to play Go and beat the version of AlphaGo that had defeated the human world champion by 100 games to nil. Machine learning has helped scientists spot patterns in DNA sequences that would have taken humans hundreds of years to crack on their own. If enthusiasts like Ray Kurzweil and Elon Musk are to be believed, the goal of artificial “general” intelligence — machines as smart as humans — is suddenly within sight.

Erik J. Larson, a computer scientist and AI entrepreneur, disagrees. In his timely, if often technical, book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, he argues that growing optimism about the prospect of human-like artificial intelligence is based on a fundamentally flawed conception of what human intelligence actually is.

The book centres on some fairly meaty discussions of three types of reasoning: deductive, inductive, and abductive. Deductive reasoning is essentially sequential logic: all men are mortal, I am a man, therefore I am mortal. Inductive reasoning works in the opposite direction, deriving generalisations from observed data: every squirrel I’ve ever met has been smaller than me, therefore I conclude with a high degree of confidence that all squirrels are smaller than me. Abductive reasoning is the odd one out: defined variously as guesswork, speculation and intuition, it is the much more mysterious, but ultimately more important, form of inference that guides our everyday “common sense”.
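
To make the contrast concrete (a toy sketch of my own, not Larson’s): the first two forms of inference each reduce to a few lines of code, which is why machines handle them so readily, while nothing comparably simple exists for abduction.

```python
# Toy illustration (not from the book): deduction and induction each fit in a
# short procedure; abduction does not.

# Deduction: apply a general if-then rule to a particular case.
def deduce(rules, fact):
    """Return every conclusion that follows from `fact` under the given rules."""
    return [conclusion for premise, conclusion in rules if premise == fact]

print(deduce([("is_a_man", "is_mortal")], "is_a_man"))  # ['is_mortal']

# Induction: generalise from observed cases.
def induce(observed_sizes, my_size=1.0):
    """Conclude 'all squirrels are smaller than me' if every one seen so far is."""
    return all(size < my_size for size in observed_sizes)

print(induce([0.2, 0.25, 0.3]))  # True, with high (but not certain) confidence

# Abduction -- guessing the best explanation from effectively infinite
# possibilities -- has no comparably short recipe, which is Larson's point.
```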

The Myth of Artificial Intelligence, Erik J. Larson (Harvard University Press, £23.95)

According to Larson, AI has thus far been developed entirely by mimicking the first two forms of reasoning — unsurprisingly so, since both can be expressed simply enough in algorithmic form. But nobody, he writes, yet “has the slightest clue” how to begin programming abductive reasoning, “and, surprise, no one is working on it — at all”.

The blind hope is simply that by weaving together a large enough network of deductive and inductive algorithms, something approximating abductive reasoning — with its wild leaps, creative insights, speculative guesswork, and contemplative reflection — will magically emerge. But “this,” Larson writes, “is a profound mistake”.

The problem stems in part from the underlying belief among many AI enthusiasts that everything complex in the universe is ultimately just an elaborate configuration of otherwise simple, atomistic building blocks. This seems obviously true of the physical world — as science has acknowledged since at least the Ancient Greeks. But there’s little evidence it can account for immaterial phenomena like the mind.

Sure, you can break down a complex thought (“it’s raining, so I’ll take my umbrella”) into a handful of smaller constituent thoughts (“it’s raining”, “rain makes things wet”, “I don’t want to get wet”, “umbrellas prevent things from getting wet”, “I have an umbrella”, “I’ll take it”). But we have no idea what it would mean to go much further — how you would break up into atomistic “micro-thoughts”, for instance, something like the concept of wetness.

Even what appear to be discrete acts of logical thought only make sense within the context of a highly sophisticated conceptual framework. Understanding the phrase “two plus two equals four” depends on our already having a concept of numbers, a concept of addition, a concept that statements can be right or wrong, and so on. We need, in other words, both an understanding of “prior knowledge” (not just raw data) and the ability to “hypothesise” from “a background of effectively infinite possibilities”. And we still have no idea, Larson writes, what either would look like in fully mechanised form.

But then the mistake many AI enthusiasts make is to believe that if a machine simply behaves as though it’s thinking, it really is thinking. Or more accurately: to believe that if a machine behaves as though it’s thinking, that’s all that matters — that all of the subjective phenomena we associate with thinking are, in the end, irrelevant illusions.


But this misses something fundamental about the nature of truth. Consider the difference between a machine “proving” something is mathematically correct, and a human recognising that it’s true. The machine can churn out equations all it likes, but it can’t actually prove that something’s true unless there’s a conscious being, capable of interpreting the data, to whom it’s doing the proving. And again, the last 20 years have brought us not a single step closer to knowing how to program something like “truth recognition” into a machine.

In the second half of the book, Larson moves on to a lively discussion of the (likely insurmountable) problem of machines “understanding” natural languages — that is, their being able to have normal, everyday conversations. Larson explains, often to great comic effect, how metaphor, sarcasm, and double meanings (“the box is in the pen”) trip up even the most sophisticated of machine learning algorithms — and explains, yet again, that nobody knows how to solve the problem, even in theory.

By this point, the artificial general intelligence fantasy Larson is flogging is well and truly dead. Nonetheless, there are a couple of points about language he might have expanded upon. Language arises, as far as we know, only in the minds of highly sophisticated biological beings, whose cells, organs, nervous systems and brains are working together all day long in staggeringly “intelligent” (if unthinking) ways. Language is not, therefore, some perfectly pure code existing “out there” in the universe, but a system that has developed erratically and gradually from the messy experience of actually living.


This has two implications for the hardline AI advocate. First, it means that to take language as the fundamental basis for intelligent thought is to start at the wrong point — words might be the “simplest” cells of thought, but they’re not where thought begins: our murkier, pre-linguistic experiences come first. Second, it means human language obviously cannot capture reality fully as it really is, and therefore likely isn’t up to the task of programming something as sophisticated as a simulation of the human brain.

For instance, if scientists are correct that human language can only roughly approximate what’s actually going on at the level of quantum physics, then a simplified linguistic code that attempted to replicate the quantum processes going on inside our brains would always fall short. But we know of no way of “coding” that doesn’t rely on language — and we’re forced, therefore, to use one very small part of our own intelligence to try to recreate the whole thing.

Larson’s book is, in the end, a stimulating and often pretty damning assessment of the current state of artificial intelligence — one that anybody worried about the prospect of superintelligent machines should read.

Like Larson, I believe that human intelligence will ultimately be shown not to be wholly algorithmic. But either way, the next decade or so will prove a fascinating moment in history — a long-awaited opportunity, thanks to AI, to test once and for all the theory that the human mind is nothing but a spongy computer.
