The fables of Davos Man

Yuval Noah Harari has written another long book with little wisdom

When asked what was wrong with the world, G.K. Chesterton replied, simply, “I am.” His answer was impressive in its concision. Chesterton, of course, was alluding to the idea of original sin, to an essential warping of one’s own being, which one needs to confess and confront. The idea is religious in its roots, but accessible beyond those parameters: some agnostic wit once said that original sin was “the only empirically verifiable aspect of Christianity.” Yet in his new book, Nexus, Yuval Noah Harari manages to avoid the implications of Chesterton’s response entirely, dismissing most religions for blaming the world’s problems on the innate dispositions of human beings. According to Harari, what is wrong with the world is actually “a network problem.”

Harari is an Israeli professor of mediaeval history who experienced astronomical success as a public intellectual following the publication of his epoch-spanning history of humanity, Sapiens, which won praise from Barack Obama, among others. He has become a fixture at Davos, a prized favourite of the World Economic Forum. Accordingly, his world-view is very much in tune with the liberal consensus: he is deeply concerned about the threats of misinformation and artificial intelligence. But does he make the case?

Nexus is a history of information systems. To its credit, it is unusually wide in ambition and scope, covering the history of information and communication from the Stone Age through the invention of the printing press, the scientific revolution and the totalitarian regimes of the 20th century, and on to our AI-dominated future. Along the way, the reader is treated to anecdotes and facts that are, at times, entertaining and intriguing, although the narrative often retreads old, obvious territory, familiar to anyone who knows anything about history.

Yet Harari’s vast ambition is undermined by the book’s persistently faulty analysis. This stems from his ideological priors. He is an old-school materialist of the “love is just chemicals in your brain” school of thought, akin to New Atheist spokesmen like Richard Dawkins and Daniel Dennett. Harari consequently sees no purpose or truth in religious narratives other than their function as a tool to connect people and impose order. Mythologies, in his view, were a device that helped bind humans into greater wholes, making them more effective at collective projects like war. Like the New Atheists, Harari does not seem to see religious stories as containing poetic truth, wisdom, Jungian archetypes, or anything like that. They are just made up — clever ruses to control behaviour.

Harari’s disdain for religion extends to religion’s analysis of human psychology. He calls the idea that “our flawed individual psychology makes us abuse power” a “crude analysis.” He argues instead that human power is “never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans.” This seems at best a partial truth, at worst a distortion. Of course, a person can’t rise to power without favourable circumstances and the collaboration of others. But the idea that “flawed individual psychology” and “individual initiative” are unimportant seems wrong. It would let Hitler and Stalin off the hook, for instance, while hanging all responsibility on badly constructed information networks. But who constructed those networks? This is the question Nexus maddeningly evades, over and over again.

In surveying the past, Harari argues that previous human crises were instigated by new information technologies and their improper application. Specifically, he uses the example of the printing press. He compares the way that witch hunts spread through early modern Europe, due to the mass publication of the infamous Malleus Maleficarum (The Hammer of Witches), with the way conspiracy theories like QAnon have used modern information technologies like social media to flourish. Harari dismisses as “naïve” the idea that truth will ultimately win out through a free exchange of ideas. He says, instead, that there need to be better “self-correcting mechanisms” in our information networks, although he forbears to say exactly what they should be. Would he support the recent judicial decision that banned X/Twitter throughout Brazil, for example? I imagine that would be, for him, a bridge too far, but by failing to specify the methods he advocates, he leaves himself open to the charge of supporting mass censorship. If all Harari meant was that social media algorithms should not pump out outrage bait simply to milk engagement, that would at least be comprehensible. But his ambiguity on this point seems strategic.

Harari argues that scientific institutions have such “self-correcting mechanisms,” which religious institutions lack (ignoring, for instance, that much of the resistance to witch hunts was ecclesiastical rather than secular). Specifically, he argues that peer review is such a mechanism. Even when scientific orthodoxy stifles new discoveries, Harari contends, the peer-review process usually allows the orthodoxy to be corrected within a few decades. 

This is only convincing to a very limited extent. I was waiting for Harari to address some of the major counter-arguments to this point, but he never does. He admits that scientific institutions can be corrupted or held in thrall to orthodoxy and gives some minor examples, but, to my mind, he fails to address some of the major scientific failings of recent times. For instance, consider all the peer-reviewed studies that pronounced OxyContin safe for patients, sparking the opioid epidemic in the United States, which has resulted in an estimated 560,000 deaths to date. Consider the COVID-19 pandemic as well. It seems probable that it began in a Wuhan laboratory, yet the scientific establishment has frantically resisted this idea. The very “self-correcting mechanisms” that Harari advocates were used to block anyone from posting about the lab-leak theory on Facebook. Again, none of this is addressed in the book. Neither is the replication crisis currently afflicting the sciences.

The ancient idea that human beings are fatally flawed — what we would call “the tragic sense of life” — is actually necessary to avoid tragedy. It puts you in contact with reality. Harari, despite his detailed descriptions of witch hunts and Stalinist terror, lacks this tragic sense. Human beings, in his view, are capable of simply correcting the “network problem” posed by their new information systems, with no need to address or confront their own hidden darkness. Even though he is extremely concerned about AI, he implicitly sees human beings as machines already, ones that simply need better programming. Religious narratives are just bad, outdated programs. They merely record the “biological dramas” of interpersonal struggles and familial conflicts — like Rama’s exile at the instigation of a treacherous stepmother in The Ramayana. According to Harari, these dramas have no applicability to dealing with the issues posed by a rationally organised bureaucracy or the threat posed by runaway AI. Yet Harari can only reach this point by refusing to see these systems for what they really are — malformed extensions of the human. He thinks they are somehow radically other, even though they were created by human beings in order to execute human-derived motives.

Indeed, there are serious threats posed by AI and, more generally, by excessively computational societies. Harari cites the compelling example of mass surveillance and facial recognition in Iran, which is currently being used to identify women who aren’t wearing hijabs while driving their cars, resulting in those cars being impounded. China’s experiments with its social credit system are equally alarming. Yet Harari cannot be an effective guide to these problems, because he sees them solely as technical problems, rather than moral ones, and so sees technical expertise as the necessary solution. But what if technical expertise is, in certain cases, the very disease that it seeks to cure?

By blaming our technology and our networks, Harari shifts the blame away from us — the very people who designed that technology and created those networks. For instance, he argues that the Facebook algorithms that promoted genocidal incitement in Myanmar by stoking engagement at all costs were agents in their own right. But the greed for engagement, for maximising likes and clicks, is clearly a human phenomenon, one which leads companies to create algorithms that drive the level of discourse abysmally downwards. Similarly, Harari contends that “AI isn’t a tool — it’s an agent.” Well, it may appear to act like an agent, but its only agency is that which we have put into it. We programmed it; we authored its training data.

Harari can’t see that this might be true. He rightly disparages seeking power rather than wisdom, but he has no clear idea of what wisdom is or where it is to be found — other than in the self-correcting mechanism of peer review, apparently. But since, as David Hume observed, you can never “derive an ‘ought’ from an ‘is’,” science can never function as a moral guide on its own. It can describe what physical reality is but cannot properly determine our moral relationship to it. We need other sources of wisdom.

Of course, if you see people as being, fundamentally, machines, then there really are no moral problems. This is the quandary Harari finds himself in. He knows that AI totalitarianism would be bad, but ultimately his defence of the human is no match for it because his sense of the human is so thoroughly demystified and disenchanted. He never seems to ask whether this thoroughgoing demystification is itself another narrative and if it perhaps leaves out key features of human experience. But if he ever asked himself whether human beings are more than just biological machines, he would have to restart his entire intellectual project from scratch. 

If you buy Harari’s book for a sense of how to solve the significant problems facing the world, you will be severely misled and disappointed. However, if you want a window into the mindset of elite opinion — into the kinds of books that Bill Gates listens to on the elliptical — I can recommend this uniquely frustrating read.
