
An intelligent book on AI? Very nearly

The threat from AI comes from humans placing too much faith in complex but fallible systems



Over the last year, chatbots writing half-decent poems and a series of apocalyptic pronouncements regarding AI have plunged us into a civilisational moment. Two main concerns have emerged from a mixture of panic and hype fed by everyone from Rishi Sunak to Elon Musk. First, will it take my job? And second, will it eventually wipe us all out?

Probably not, is the answer given by Cambridge’s inaugural DeepMind Professor of Machine Learning, Neil D. Lawrence. The Atomic Human is his grand attempt not just to explain what AI is, but to use it as a means to better understand human intelligence. His mission is quasi-religious: “As machines slice away proportions of human capability,” he writes, we will be left with a “kernel of humanity”. It is this atomic human that will reveal the truth about the human spirit.

What follows, however, may not necessarily stir up much excitement: human intelligence is defined by being “embodied”, severely limited by what it can communicate. We are “butterflies in a diving suit”, whose intelligence is “unwieldy but beautiful”. As a result, we construct “information topographies”, essentially a shared culture built on our vulnerabilities that allows us to overcome problems and work with each other.

Machines are good at reading parts of this complex framework and mimicking it. But not all of it: an intelligent machine removed from the specific context and the task given to it resembles little of the human mind. The underlying point is that we can’t isolate and recreate “intelligence” as an entity abstracted from the human body and its experiences.

Should we, then, even be calling it “artificial intelligence”? Lawrence seems almost to pose this question himself. Until 2013, the technology was referred to as “machine intelligence”. Then came the discovery that computers could process information from images. In leapt Mark Zuckerberg and Google. But a rebranding was due: “Overnight, I became an expert in AI,” Lawrence drily remarks.

He wonders aloud whether the subject would court as much attention were conferences on AI renamed the “Global Forum on Computers and Statistics for Humanity”. This enjoyable cynicism flares up throughout. “I think he really believed that his investment would, in time, buy him a smarter human,” he says of Zuckerberg’s purchase of a machine intelligence lab.

All this poses another question: has the onset of a new technology ever been accompanied by such a poor understanding of what it actually is? The book is at its best when it provides that rare thing, an accessible and interesting history of computer science.

The theoretical underpinnings of AI can be traced, roughly, to three Enlightenment figures: Laplace, Leibniz and Newton. Fed with the vast amounts of data afforded by modern computing, their conceptual models for understanding the universe have given us algorithms that grow ever better at mimicking human intelligence. Or, to put it more simply: “The artificial intelligence we are peddling … simply combines very large datasets and computers.”

Nick Bostrom, titanic figure in AI

Lawrence’s finest moments come when these explanatory reductions are wielded against some of the more popular figures and narratives that have grown up around AI. Twenty-odd pages in, two rather large shots are fired in the direction of Nick Bostrom and Ray Kurzweil. These titanic figures have done much to define a popular trajectory for AI, one that ends in a “superintelligence” to which we hand over the civilisational keys. This is all “hooey”, writes Lawrence. Their error is “to conflate the intelligence of a decision with the notion of an intelligent entity”.

If the book’s sprawling narrative has a unifying purpose, it is to convey that intelligent entity in a variety of relatable scenarios. Discussion ranges from the experience of a locked-in patient writing a book to the military operations of the Second World War (you wouldn’t want a computer deciding when to launch the D-Day landings, Lawrence assures us).

But too often the text is meandering and chaotic. Chapters are bursting at the seams with analogies, personal stories and historical curios, from William Blake to George Orwell and Jeff Bezos. The book is Lawrence’s attempt to download his brain onto the page, but this kaleidoscope of personal anecdotes and historical references all too often serves more to distract than to illuminate.

That fault, I suspect, lies as much with the publisher as with the author. It is the present fad for non-fiction books explicitly to take the reader on a journey, the assumption being that unless we are given a nice bedtime story we’ll lose interest in the argument.

Boxed inside such a narrative, however, the book is forced into detours towards some rather glib and unnecessary observations: “To resolve my boyhood confusions,” writes Lawrence on understanding his father and brother’s conflicting relationship, “I’ve had to borrow a page from Douglas Adams’ book.” Really?

This is a shame, because lurking across the 380 pages is a very important warning. Andrew Orlowski has defined AI as less a technological moment than a religious one. Lawrence appears to think in a similar vein: “When humans feel unable to pass judgement, they are tempted to pass the decision on to what they believe to be an omniscient entity.” He is a quiet dissenter from within, and you can’t help but feel that a more focused, engaging and even controversial book has been lost.

One of the book’s core provocations is that we are heading for another Horizon scandal. The real threat from AI comes not from some Terminator-style robot bent on wiping out humanity, but from humans placing far too much faith in what are essentially complex but fallible systems. Many of the systems we are building are not even understood by their creators, argues Lawrence (though here he leans too heavily on Russian interference in the 2016 election via Facebook, the evidence for which is scant).

This argument is as political as it is philosophical. There is a brief invocation of Popper’s “open society”, with the usual warning against the hubris of “tech bros”. Sam Altman and OpenAI, he argues, are trying to replace the “great man of history” with the “great computer”. But behind this debate is a tension that goes unanswered: which is more dangerous, the technology itself or our unwavering belief in it?

If selectively read, this is a thoughtful and serious book. Individual paragraphs and sentences speak necessary truths about a world drowning in prediction and noise about AI. I have a feeling we’ll be hearing from the author again, if not in a shorter, more concise explanation of the technology and its limits, then perhaps as a witness at a government inquiry when the next AI-inspired Post Office scandal inevitably strikes.
