Britain must not overregulate AI

Staying competitive in the modern world depends on balancing AI’s opportunities and risks

A few weeks ago, my friend’s father had a heart attack. He isn’t overweight, isn’t unhealthy, and is only sixty-five. It was a shock to hear that he would need triple-bypass surgery. Though this was dramatic and upsetting for my friend and his family, it is an unremarkable story in the sense that someone is admitted to hospital with a heart attack approximately every five minutes in the UK. Heart attacks are common plot devices in television programmes and novels. They are part of life and usually come as a surprise.

That might change soon. As artificial intelligence (AI) advances, we are getting closer to a future where everyone will have a GP in their pocket. Our devices will be able to monitor our vital signs and provide early warning of a heart attack. Those extra minutes will often make the difference between life and death. In the West, we will have to endure political arguments about the ethics of providing such technologies; in those countries where people do not currently have access to a GP, such advances are more likely to be appreciated for the miracles that they are.

AI will not replace doctors, but it will change how they work. Recent studies suggest that AI is better than GPs at diagnosing eye diseases, for example, and could therefore be used to improve triaging. Instead of sitting in the waiting room while the GP runs late, you will first be diagnosed by AI, which will provide the doctor with essential information quickly and easily. Doctors can also use AI to check their prescriptions, with the AI providing reasoning and supporting information against which their decisions can be assessed. Again, this is a supplement, not a replacement. Lest we worry that AI makes mistakes, we should bear in mind that an estimated 237 million medication errors are made every year in England, contributing to around 1,700 deaths.

AI is being put to similar use in detecting the patterns that alert us to credit card fraud, enabling more efficient logistics for supermarkets making home deliveries, improving email spam filters, scanning CVs in recruitment processes, identifying cancer cells, and measuring nutrient levels in soil. Human progress depends on constant fine-tuning, and AI is about to provide us with a brand new set of tools and technologies to improve the way we work.

But only if we regulate it properly. How to do that was the central question at a recent discussion between Reid Hoffman, founder of LinkedIn and investor in OpenAI (the company which created ChatGPT), and Matthew Clifford, AI investor at Entrepreneur First, former AI adviser to the Prime Minister, and chair of ARIA, the government science funding organisation. As well as being put to miraculous uses, AI will, like any technology, create a new range of dark applications. Just as the technology that allows us all to travel by road, rail, and sky was used to deliver death to millions, and just as the telephone was both a tool of social connection and a device for scammers to con the unsuspecting, so AI will have its dark side. Ships bring both trade and war; books convey both facts and lies; AI will bring great efficiencies alongside great dangers.

AI is perhaps the most significant development in politics for many decades. International relations are changed forever. What happens when China develops new capabilities? How will warfare change? Will election interference become more difficult to prevent? What will cyber attacks like the one on the British Library look like in the future? We are living through a new Cold War, and our enemies have been handed one of the most powerful technologies ever invented. The question of how we regulate AI is central to our future, not just because of its great potential to improve our lives, but because it is now fundamental to national defence and security.

As Hoffman and Clifford said in their discussion, much of the solution to these threats will come from entrepreneurs, not government. It is well known that LLMs like ChatGPT often refuse to perform unseemly tasks, like creating fake images of terrorist attacks, but that they can be hacked. Companies are already being launched to help organisations make their LLMs safe against such hacking by anticipating their weak spots. In this new world, this is the only way to be safe: we must develop AI fast enough to always be at the frontier.

Clifford calls this “defensive acceleration”. The rest of the world will not slow down their development of AI, and we need to move just as quickly as they do. Legislation cannot prevent a cyber attack; better technology can. But this is not an argument for laissez-faire development. Entrepreneurs must work in partnership with the government. As challenges and problems arise, they must be dealt with swiftly, but too much caution in the regulatory regime will stifle the innovation we need to match the capacity of China and other countries.

The sort of AI regulation we choose therefore has major consequences for the future of our society, not just because AI will become part of most workflows in the economy but because it is perhaps the essential technology in foreign policy — and the world is hardly stable right now. 

But who is talking about any of this in the election? We have Ed Davey pulling stunts and news reporters talking to Sunak about his childhood in tones more suited to the playground than to Parliament. Journalists mock Reform for having criminal justice policies similar to those of a country with a lower homicide rate than any Western country except Canada. But we hear very little about this most significant of developments.

Labour sees great potential for AI to boost economic growth and wants to encourage businesses to improve their uptake of AI, which seems to lag in Britain compared to the US. They also want to allow data centres to be built on the green belt, an encouraging sign. It is good to hear that Labour wants to make AI the heart of a new productivity boom in Britain.

Labour’s manifesto has reiterated that they will build data centres and create a National Data Library to “deliver data-driven public services”. They see the potential of AI in medicine for speeding up diagnosis and will allocate money for AI-enabled MRI scanners. 

But they have also promised to put the voluntary code that currently regulates AI into statute, “so that those companies engaging in that kind of research and development have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us.” Peter Kyle, Labour’s shadow technology secretary, who is responsible for the party’s AI policy, has said: “Some of this technology is going to have a profound impact on our workplace, on our society, on our culture. And we need to make sure that that development is done safely.”

This seems like common sense, but that sort of political language too often leads to an over-cautious approach. Much will depend on how the next government reacts to that data. Will they be comfortable with the defensive acceleration approach outlined by Hoffman and Clifford, or will they begin to recoil at each example, slowly building up the sense, as governments so often do, that more must be done to anticipate problems, to regulate them before they arise? As regulation develops, as it ought to, the need to stay at the cutting edge must be kept at the centre of decision-making.

The AI researcher Dean Ball has distinguished between regulating the use of a technology and regulating the conduct to which the technology is applied. In the EU, for example, it is unlawful to produce an AI product that can read your emotional state unless the product is medical. This is a use regulation. Since ChatGPT can indeed read emotional states quite accurately, and isn’t a medical product, it is difficult to know whether it is now illegal to use in schools and offices. This is quite ridiculous. It should be, as it already is, illegal to do anything criminal with AI, but the EU seems to forget that we are already assessing each other’s emotional state all the time, especially at work. 

Ball sums up the use/conduct difference like this:

In one, policymakers can concentrate on what they want the outcomes of their laws to be for society; in the other, policymakers have to fret about every potential use case of a general-purpose technology.

Fretting about every potential use is what we need to avoid. If we had tried to anticipate everything that could go wrong with automobiles, innovation would have been stifled. Later additions like seat belts and airbags were essential, but they had to be iterated, not anticipated. There is no golden world where everything is safe and foreseeable. There is no option to regulate ourselves into a comfortable life.

As Labour settles into government, they must remember their pragmatic, future-facing stance on AI regulation. If they don’t hold their nerve — and without more journalistic interest, who knows what they really think — we will end up following the EU into regulatory obscurity and international weakness. The world is what it is: we must choose to shape the future, not deny it, ignore it, or worry it to death.
