
AI and the disintegrating imagination

ChatGPT will change our relationship with writing

Around 5500 years ago, in Mesopotamia, humans began to write. From what we know, this was initially for purely functional reasons. The earliest Mesopotamian patron goddess of writing, Nisaba, was effectively a scribe, responsible for keeping records of things like food supplies, wages and taxes. Writing was little more than a set of symbols representing objects, actions and numbers — not yet capable of expressing full, syntactical sentences. The earliest accounts we have of the origins of writing — dating to the second millennium BC — treat its creation as of little significance, listing it merely as one craft among many others, including carpentry and leatherwork. 

Two millennia later came the alphabet — created first by the Phoenicians around 1100 BC and refined by the Greeks a few hundred years later. By using combinations of otherwise meaningless symbols to represent slices of phonetic sound, the alphabet transformed writing into an infinitely more versatile code — able to express, and preserve for further development, pretty much any idea humans could put into words. Around this time, we might reasonably argue, the Age of Writing began in earnest. 

Over the next three millennia, civilisations came and went, but writing — the most effective tool humans had ever devised for retaining and accumulating knowledge — persisted. Then, finally, sometime around 2025, our relationship with the written word changed. Humans suddenly discovered that they could task computers to do most of their writing for them — that, with just a few words typed into a prompt, sophisticated artificial intelligence could generate clear, coherent and sometimes even lively prose, which humans only needed to tweak to produce entire books, news articles, letters, legal documents and everything in between. The Age of Writing came to a close, and the Age of Prompting and Editing began. 


If you think I’m overdoing things, go give OpenAI’s latest chatbot a spin. ChatGPT is the most sophisticated bit of “generative” artificial intelligence we’ve yet seen, “trained” to behave as though there were a smart and capable human being trapped inside your computer. You can ask it to summarise Hamlet. You can get it to write a passage about the end of the world in the style of P. G. Wodehouse. You can ask it, as I did, to generate an aphorism about ChatGPT in the manner of Theodor Adorno’s Minima Moralia: “In the age of ChatGPT, even our most intimate thoughts and expressions are subject to the cold logic of the machine.” Not bad. 

It still has its flaws, but ChatGPT can already produce essays of the standard of a competent undergrad, invent plausible characters and scripts, explain a complex subject in simple terms using vivid metaphors, write code, confect outrage and, er, brag about pumpkins. All from a prompt of just a few words. Where it doesn’t quite yet work, a human can simply edit and tweak the generated content in a matter of seconds. 

Clearly, this has huge implications for more or less any task involving words. It’s not so much that AI will end up taking every writing or teaching job on the planet (though it might well snaffle up a few). It’s that it’ll very quickly become common practice for humans to use AI to skip the vast majority of the writing (and research) process — allowing us, effectively, to stop having to think.

In the early stages of sketching this piece, I tried to come up with a few ideas for how schools might tackle the issue of AI-faked homework. I decided to ask the bot to write about the problem as if it were a teacher. Here’s how it responded:

As a teacher, it can be frustrating to spend hours grading piles of homework only to find out that some of it was completed with the help of a chatbot like ChatGPT. Let’s face it, it’s much easier to ask a chatbot to write a well-written essay than it is to actually put in the time and effort to do it yourself.

But fear not, fellow teachers! There are still plenty of ways to assign homework that can’t be easily faked with the help of artificial intelligence. For one thing, we can focus on assigning tasks that require critical thinking and problem-solving skills, rather than just regurgitating information.

One idea is to give students open-ended questions or scenarios and ask them to come up with creative solutions. For example, instead of asking students to write an essay about the causes of the American Revolution, we could ask them to come up with a plan to convince the colonists to revolt against the British. This type of assignment not only tests students’ knowledge, but also their ability to think outside the box and apply what they’ve learned in a real-world context.

Another option is to assign group projects that require collaboration and communication. These types of assignments can be especially effective because they not only test students’ knowledge, but also their ability to work well with others. Plus, it’s much harder for a chatbot to complete a group project than it is to write an essay on its own.

This response won’t win awards for literary flair. Nonetheless it’s quite easy to imagine editing a passage like that into something usable in a matter of seconds — with a ready-made example to boot. 

ChatGPT’s offerings are only the start — generative AI language models are popping up everywhere. As Katy Gero explains in WIRED:

Most writing tools available today will do some drafting for you, either by continuing where you left off or responding to a more specific instruction. SudoWrite, a popular AI writing tool for novelists, does all of these, with options to “write” where you left off, “describe” a highlighted noun, or “brainstorm” ideas based on a situation you describe. Systems like Jasper.ai or Lex will complete your paragraph or draft copy based on instructions, and Laika is similar but more focused on fiction and drama.

It’s more or less inevitable, surely, that all of this will lead to the slow, self-perpetuating shrinking of the human imagination. Most people will choose 7/10 writing in exchange for one per cent effort over 10/10 writing for 100 per cent effort. Lower standards will become the norm. AI might never produce works of Dickensian brilliance, but then, under its influence, neither might we ever again — and we won’t even really care anymore. Research skills will disappear, too, with answers readily available with a prompt of a few words. We’ll stop trying to wrestle original ideas from our own minds, instead getting AI to generate endless options from which we simply pick and mix. Our intellects, without proper exercise, will atrophy. Human culture will descend into a pernicious feedback loop: AI regurgitates for us whatever information it can scrape from the web, we then feed the edited results back into the internet, and the bots feast once again on what they’ve more or less just excreted. 

It’s easy to see all of this as an unexpected historical anomaly — artificial intelligence’s sudden, rude and potentially disastrous irruption into human affairs. But generative AI is in fact the perfectly natural endpoint of a cultural process that has been under way for some time. Absurd as it might sound, ChatGPT has been hundreds of years in the making.

It all comes down to our culture’s evolving conception of knowledge. In pre-modern times, it was generally held that different kinds of knowledge existed in a clear hierarchy: first principles and general truths on top, with specialist information and skills beneath. Particular facts were only considered useful if they could be situated within, and made sense of by, a broader set of fundamental truths: what things are morally good and bad, what life’s ultimate meaning is, what beauty is and so on. As the intellectual historian Richard Weaver writes in Ideas Have Consequences (1948): 

In the Middle Ages, when there obtained a comparatively clear perception of reality, the possessor of highest learning was the philosophic doctor. He stood at the center of things because he had mastered principles. On a level far lower were those who had acquired only facts and skills. 

[…]

It is an ancient belief, going back to classical antiquity, that specialization of any kind is illiberal in a freeman … The attitude is well expressed in King Philip’s famous taunt to his son Alexander, who had learned to perform skilfully upon the flute: “Are you not ashamed, son, to play so well?” It is contained in the hierarchy of knowledge in Aristotle’s Metaphysics. It is explained by Plutarch with the observation that “he who busies himself with mean occupations produces in the very pains he takes about things of little use evidence against himself of his negligence and indisposition to what is really good.” 

Towards the end of the Middle Ages, this hierarchy began to reverse, with particular facts rising above abstract truths in importance. Weaver traces this shift back, in the first instance, to the arguments of the 14th-century friar William of Ockham. 

Ockham propounded a view of reality known as nominalism, which argues essentially that universals (abstract concepts that unite two or more ostensibly similar objects in the physical world) don’t actually exist. According to the pre-modern — and, as Ockham saw it, outmoded — view, what unifies groups of similar objects (that is, the thing that makes every magpie a magpie, every lake a lake, or every lie a lie) is a kind of metaphysical stamp or identification number. If you could somehow peel away the corporeal form of a thing and get to its true, immaterial essence, you might find there a tag that says, for instance, “Chair — Form #72945”. Every worldly object gets its basic form from one of a limited set of universal stencils. My eyes might be blue and yours brown, but we are printed, ultimately, from the same fundamental template. 


Ockham argued that this is all mistaken. Every individual thing in the universe, he believed, is entirely discrete. Two “birds” or two “chairs” are not really two of the same thing. It is only we humans who group things — dogs, trees, essays about chatbots — and then project fictional categories onto the outside world.

This, Weaver argues, dramatically weakened the conventional assumption that there are transcendental frameworks of knowledge that go beyond (and help us to make sense of) what we observe in the physical world. No, Ockham seemed to suggest: there’s no abstract, rational way of understanding reality other than to study it in its infinite particularity. So came the rise of Baconian empiricism — the idea that the only way we could build up a complete understanding of reality was to compile a comprehensive catalogue of all available facts. This chemical does this when you combine it with that particular compound. This organism functions this way under these conditions. Weaver again: 

The whole tendency of modern thought, one might say its whole moral impulse, is to keep the individual busy with endless induction. Since the time of Bacon the world has been running away from, rather than toward, first principles, so that, on the verbal level, we see “fact” substituted for “truth,” and on the philosophic level, we witness attack upon abstract ideas and speculative inquiry. The unexpressed assumption of empiricism is that experience will tell us what we are experiencing.

Specialism, once considered crude and uncultured, became increasingly vital. If a complete picture of the world involved amassing all possible particular information, then we needed to understand those particulars in as much detail as possible. Obviously, as all this information accumulates, it quickly becomes apparent that no single human can keep track of it all. Even if you were to apportion all of this “knowledge” evenly among the minds of the entire world population, we still wouldn’t be able to hold it all. So we looked increasingly for external ways of “storing” knowledge. One obvious consequence was the emergence of the encyclopaedia (a catalogue of specialist facts, of “information”, simply ordered alphabetically, rather than unfolding according to some narrative that makes sense of these facts in a hierarchy).

Still the vast human reservoir of knowledge kept — and keeps — growing. According to complexity scientist Samuel Arbesman, the total amount of knowledge in the world of medicine doubles every 87 years. In the world of chemistry it doubles every 35 years, and in the world of genetics every 32. How do we keep track of it all? 

In 1945, three years before Weaver wrote Ideas Have Consequences, an American engineer called Vannevar Bush proposed a solution. In the Atlantic, he described a hypothetical machine, which he dubbed the “memex”, that would act as a “sort of mechanized private file and library”. “A memex”, he wrote, “is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” 

A version of Bush’s invention, the search engine, eventually became a reality. It functions today not so much as a “private file and library” — a tool to help us keep track of information we already know. Rather, it acts like an “external hard drive”: a palatial annex to the global hive mind that has, according to psychological studies, quite literally started replacing our own memories. At this point, personally remembering and keeping track of knowledge has become largely irrelevant — what matters is having instantaneous access to facts. 

ChatGPT simply represents the next logical stage in the process. All knowledge is available, on demand, with a simple prompt. Already, somebody’s figured out how to hook it up to WhatsApp: everything millions of humans have ever discovered about reality is available as easily as messaging a mate. Perhaps the only possible step further would be a chip in the brain that, when we think of something, uses artificial intelligence to gather the relevant facts from the internet and deposit them immediately into our thoughts. 

The problem is that all of this is just disparate fragments of knowledge. We have no particular way to organise it — no hierarchies, no ultimate truths. That’s something generative AI won’t ever be able to provide: if there are abstract frameworks separate from the physical world, dictating morality, aesthetics and truth, no algorithm will ever suss them out by trawling all information on the internet. Instead, we get alarming attempts to package information by algorithm, like the app Consensus, which uses AI to trawl academic papers and provide the “correct” answers to any questions posed. (“We wanted to automate the process of reading through papers and pulling out conclusions,” its co-founder Christian Salem has explained. What could possibly go wrong?) 

Perhaps before long, the one job we humans will still have left will be to organise the mass of information and entertainment thrown at us by AI into some kind of hierarchy ourselves — to trawl it for permanent, transcendent truths about good and bad, beauty and meaning. Do we really trust that, having been absolved of any responsibility to learn and remember things on our own, our minds will be up to that task? I wouldn’t bet on it.
