A license to cheat

The abuse of artificial intelligence systems threatens the integrity of our education system

This article is taken from the June 2023 issue of The Critic. To get the full magazine why not subscribe? Right now we’re offering five issues for just £10.

The blockbuster hit movie Independence Day contains a brief scene parodying UFO cults. In the movie, ecstatic members gather on the top of tall buildings in Los Angeles and other major cities to welcome the extra-terrestrial invaders. They are promptly zapped, along with everyone else within firing range of the powerful laser beams. When I saw the movie in the West End on release, a huge cheer went up. 

The most notorious of these cults was Heaven’s Gate, which made headlines in 1997 with the ritual suicide of 39 members in California. They were convinced that the arrival of the Hale-Bopp comet would allow them to escape the “Human Evolutionary Level”. Today we can hear echoes of this rhetoric among techno-utopians. They, too, share an eschatological vision in which an ecstatic future awaits. They range from wealthy transhumanists, such as Elon Musk’s former partner, the popstar Grimes, who advised us in song that “Biology is superficial/Intelligence is artificial”, to Google’s spiritual mentor Ray Kurzweil. Their presence is significant, for it complicates what should be a simple story of a new technology posing a challenge to education.

Last year OpenAI, a commercial research company originally founded in 2015 as a non-profit organisation, released a chatbot called ChatGPT. The San Francisco operation had a history of expensive AI stunts, such as a computer game bot that beat humans (it cheated), and a mechanical hand that rearranged a Rubik’s Cube (it dropped the cube much of the time). But with ChatGPT, it hit the jackpot.

ChatGPT had ingested everything it could find on the internet, and it had been tuned to regurgitate a remarkable facsimile to order. It churned out ersatz replicas of material ranging from politicians’ speeches to medical reference books, from poems to computer source code. And it did so with great confidence. This was a landmark in what is called generative artificial intelligence. 

ChatGPT does not “know” what it is generating, and cannot fathom even basic concepts. It simply finds the best statistical match, rather like a word processor’s autocomplete feature, which knows that “Yours” appearing near the end of a document is frequently followed by “sincerely” or “faithfully”. Technology giants keen to gain an edge over their rivals rushed in. Microsoft has made a $1 billion investment in OpenAI and is adding it to a wide range of its products. 
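The autocomplete analogy can be sketched in a few lines of Python. This toy bigram counter is purely illustrative — nothing like the scale or sophistication of ChatGPT — but it shows the core idea of picking the word that most often follows a given word in its training text:

```python
# Illustrative sketch only: a toy "autocomplete" that predicts the next
# word as the most frequent follower seen in a tiny training corpus.
from collections import Counter, defaultdict

corpus = "yours sincerely . yours faithfully . yours sincerely .".split()

# Count which word follows which in the corpus
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(autocomplete("yours"))  # "sincerely" follows "yours" most often here
```

A large language model does something analogous over billions of documents and whole sequences of words, rather than single-word pairs — which is why its output is fluent without the system understanding any of it.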

ChatGPT’s limitations quickly became apparent: it generated spurious information. Invited to create academic papers, it simply imagined the sources it needed — generating fake, but still plausible sounding, citations. Yoked to Microsoft’s search engine Bing, ChatGPT advised us that the fastest bird was a dolphin, that the United States had had four female Presidents, and that it was safe to eat glass. 

Amongst researchers, this tendency is called “non-alignment”, or more anthropomorphically, “hallucination”. When Google rushed in to show that it too could emulate ChatGPT, with a clone called Google Bard, the answers it spat out were so poor the markets knocked more than $100 billion off parent company Alphabet’s capitalisation. Bard was nicknamed “Barf”. 

The widespread use of artificial text generators clearly poses an immediate threat to the integrity of the education system. Students can use the service to write essays, using only a few simple words as a prompt. They can pretend to be more capable than they are, with a minimum of effort — performing what in adult life would be regarded as a fraud. Teachers can use the service to generate classroom material, and write student reports, neither of which may be accurate. ChatGPT therefore represents a straightforward challenge for institutions: would they compromise their integrity by allowing students to generate homework and assessment material using AI? 

The New York City schools district was amongst the first to ban ChatGPT. Oxford and Cambridge universities followed, prohibiting its use in coursework and exams due to plagiarism fears. Sciences Po, the elite French institution which has educated five Presidents including Emmanuel Macron, has banned generative AI. 

Sciences Po cited “fraud in general”, warning that anyone breaking the rule would receive a lifetime exclusion from any French higher education institution.

The use of ChatGPT by students is difficult to police, such is the fluency and plausibility of the text it can generate. Several higher education institutions have reacted by announcing a shift to more carefully invigilated exams, and away from unsupervised essay coursework.

“Doing any of those assessments in uncontrolled conditions just puts you in a situation where you don’t know if the work has been completed by the student or not. So that’s why I think you have to have exams,” Daisy Christodoulou, former head of assessment at Ark Schools, told the House of Commons Science and Technology Committee recently. “I think that ChatGPT has huge implications for continuous assessment course work. It is very hard to see how that continues.” Charles Knight, of the consultancy Advance HE, agreed that a move to more invigilated examinations was inevitable.

Earlier this year the Joint Council for Qualifications (JCQ), the membership body for the eight largest qualifications boards, issued guidelines for the use of AI in assessments. Bluntly titled “AI Use in Assessments: Protecting the Integrity of Qualifications”, the advice was emphatic: “Students must make sure that work submitted for assessment is demonstrably their own”.

Those who use AI “will have committed malpractice, in accordance with JCQ regulations, and may attract severe sanctions”. Students tempted to take AI shortcuts “must understand that this will not allow them to demonstrate that they have independently met the marking criteria and therefore will not be rewarded”.

The Office of Qualifications and Examinations Regulation (Ofqual), which regulates exams and assessments in England, agreed. “Work submitted to secure qualifications must be a student’s own. Ofqual’s rules require exam boards to ensure that grades accurately reflect what students know and can do. Cheating is unacceptable. Students who cheat face serious sanctions, including being disqualified from getting a qualification,” it told me. Exam board AQA echoed the line. 

Yet many voices across education seek to embrace artificial intelligence, despite the obvious risks of a loss of confidence amongst parents and students, and broader reputational damage. Surprisingly, that includes the International Baccalaureate, which wants you to know it will not be abandoning coursework; nor will it be telling students they can’t use tools such as ChatGPT. AI output must simply be cited, as with any other source.

Across the pedagogical class, AI is regarded much more positively. At The Open University, which reminds us it is “an institution that has social justice at its center” (sic), the Knowledge Media Institute recently boasted on Twitter that “While many universities are banning ChatGPT, we in @OpenUniversity are bracing (sic) the technology and its potential to revolutionise our research, teaching, and innovation”. 

How? Mike Sharples, Emeritus Professor of Educational Technology at The Open University and co-author of Story Machines: How Computers Have Become Creative Writers, agrees that “ChatGPT is designed to be plausible, it’s not designed to be accurate”. But he takes “a positive long-term view” of AI. What we may see as a flaw, the randomness of its output, is in his view a virtue: it is good for creativity. It is the equivalent of Brian Eno’s Oblique Strategies cards, or the I Ching, only vastly more computationally expensive. 

Generative AI poses a particular problem for organisations with a foot in two camps. The corporate giant Pearson not only owns the examination board Edexcel, but is a supplier of classroom material, too. As the Financial Times recently noted, it has shifted from “a dwindling textbooks business towards a company that provides digital services to students and also serves the workplace training market.” 

“We’re already looking beyond ChatGPT-3 into the opportunities that future generations of generative AI are going to provide to us as a company,” explained Pearson’s CEO, Andy Bird, earlier this year, ominously. Recent webinars give a flavour of Pearson’s enthusiasm. One offered insights into “how ChatGPT is already transforming campuses and classrooms” with “an exploration of how ChatGPT can be seen as an ally instead of an adversary in teaching and learning”. 

“Pearson has been using AI in learning products successfully for 20 years and is committed to being a thoughtful and considered user of the technology,” the company told The Critic in a statement. “We’re thinking broadly about the ways generative AI can help people across their lifetime learning needs, looking at both the opportunities and the risks.” So how would it avoid the obvious conflict of interests? The company declined to explain, even on background, adding in a statement that “This does not preclude our ability to support the delivery of high stakes examinations in the UK and other markets.”

That’s hardly reassuring to parents, or employers. “Allowing children to cheat is a stab in the back for those who work hard and conscientiously,” one parent with children at a sought-after comprehensive in North London told me. Ambitious but not wealthy enough to afford private education, these parents regard students using AI as a fraud, and expect the system to root it out. 

But they may be fighting a losing battle. Shares of publicly quoted companies across the sector fell sharply in early May, after one learning materials supplier said its income had been hit by the use of ChatGPT. Pearson’s shares fell 15 per cent. As education companies scramble to please a market in the grip of AI mania, and as educationalists welcome AI with the fervour of the UFO cults, to whom can they turn?