Why I fear this censors’ charter

Nadine Dorries’s chilling Online Safety Bill invites professional activists to wipe anything they deem wrongthink from the internet

This article is taken from the June 2022 issue of The Critic.

It is hard to overstate just how sinister the Online Safety Bill is. The gravest threat to freedom of speech since section 5 of the Public Order Act 1986, which criminalised “insulting words or behaviour”? That scarcely does it justice. Let’s settle on the most serious threat since the proposal to force state regulation on the press in the aftermath of Leveson.

The Online Safety Bill, which has already had its second reading in the House of Commons, is intended to make the UK the safest place in the world to go online. If you think “safest” is code for “most heavily regulated” you’re not far wrong. 

The Bill will empower Ofcom, the broadcast regulator, to fine social media companies up to 10 per cent of their global turnover if they fail to remove harmful content — and not just harmful to children, which is hard to argue with, but to adults as well. 

What does the Government mean by “harmful”? The only definition the Bill offers is in clause 150, where it sets out the details of a new Harmful Communications Offence, punishable by up to two years in jail: “‘harm’ means psychological harm amounting to at least serious distress.” 

Stuff it is perfectly legal to say and write offline will be prohibited online

But, confusingly, it won’t just be harmful content that meets this definition that the Bill will force social media companies to remove. After all, this definition relates to a new criminal offence — and content that meets the threshold for prosecution under this new law will, by definition, be illegal. Notoriously, the Bill will also force social media companies to remove “legal but harmful” content — and exactly what that covers is anyone’s guess. I’m sure political activists and lobby groups claiming to speak on behalf of various victim groups will have a lot to say about it.

The bottom line is that stuff it is perfectly legal to say and write offline will be prohibited online. And not just mildly prohibited — YouTube or Twitter or Facebook could be fined up to 10 per cent of their annual global turnover for a transgression — in Facebook’s case, $11.7 billion, based on its 2021 revenue.

That’s a powerful incentive for social media companies to remove anything remotely contentious — and they hardly need much encouragement. Facebook deleted 26.9 million pieces of content for violating its Community Standards on “hate speech” in the last quarter of 2020, 17 times as many as the 1.6 million instances of deleted “hate speech” in the last quarter of 2017.

More than 97 per cent of Facebook’s purged “hate speech” in the last three months of 2020 was identified by an algorithm and removed automatically. It’s a safe bet that the sensitivity dials on the algorithms social media companies use to censor questionable content will be turned up to 11 if this Bill ever becomes law.

Might this have a chilling effect on free speech? Absolutely not, says Nadine Dorries, Secretary of State at the Department for Digital, Culture, Media and Sport, and the Cabinet minister responsible for the Bill. According to her, it will actually strike a massive blow for freedom of expression.

Let’s consider the Secretary of State’s arguments.

First, she points out that the new Harmful Communications Offence will replace a raft of censorious communications offences, such as those in the Malicious Communications Act, which free speech advocates like me have been campaigning against for years. That’s a win, according to DCMS.

In my capacity as general secretary of the Free Speech Union, I’ve met with DCMS ministers and officials, and they’ve assured me that this new criminal offence, which revolves around the psychological effect of words rather than their subject matter, will be more permissive than the laws it replaces. In effect, fewer people will be prosecuted for sending unlawful communications after the Bill is passed than are prosecuted now (and that isn’t very many).

Second, Dorries claims there’s nothing in the Bill as it stands requiring social media companies to remove “legal but harmful” content. That’s a red herring, according to her. When the big social media companies — Category 1 providers, as they’re referred to in the Bill — submit their terms and conditions for Ofcom’s approval, there will be no requirement in the Bill for them to promise to remove anything other than unlawful content. 

Of course, she adds, if social media companies want to prohibit legal content in their T&Cs, they will be free to do so. But there will be two important constraints when it comes to the scope of their content moderation policies:

  1. They must grant special protection to content of democratic importance and journalistic content; and 
  2. When removing lawful content, they must “have regard” to the importance of freedom of speech.

Thanks to these constraints, she argues, the Bill should be welcomed by free speech advocates because at the moment there are no obligations on social media companies to protect content of democratic importance or journalistic content or to “have regard” for freedom of speech. For the first time ever, these providers won’t have complete latitude when it comes to content moderation.

For folks in Scotland or Northern Ireland, it’s lose-lose

Third, once they’ve agreed their T&Cs with Ofcom, social media companies will be under a legal obligation, imposed by the Bill, to enforce them “consistently”. Dorries believes this will go some way to eliminating political bias in the way social media companies apply their content moderation policies — by prohibiting their selective application to right-of-centre posts but not left-of-centre ones, for instance. So Twitter would no longer be able to kick Trump off the platform “due to the risk of further incitement of violence” while permitting, say, an antifa account calling for attacks on an ICE detention facility to remain.

Let’s look at each of these arguments in turn. The first point to be made about the Harmful Communications Offence is that it will only apply in England and Wales because communications law is a devolved area of legislation. In my meetings with ministers and officials at DCMS, the replacement of various communications offences by the new offence was presented as a quid pro quo — yes, some speech that is currently allowed may be prohibited by this Bill, but some speech that is currently prohibited will be allowed, so what you lose on the swings you gain on the roundabouts.

But where are the gains if you live in Scotland or Northern Ireland? For those folks, it’s lose-lose. That’s particularly true when you bear in mind that Holyrood recently passed the Hate Crime and Public Order (Scotland) Act, which criminalised vast swathes of speech that is still legal in the rest of the UK, and Northern Ireland is about to bring forward a Hate Crime and Public Order (NI) Bill, which is even more censorious than the Scottish legislation. The effect of the Online Safety Bill in those countries will be to force Category 1 providers to remove any content that falls foul of these draconian new speech restrictions, as well as “legal but harmful” content, without any compensating liberalisation of communications offences.

Incidentally, the fact that social media companies and search engines will have to comply with a more elaborate set of laws in Scotland and Northern Ireland about what people can and can’t say online, or risk being fined up to 10 per cent of their global turnover, will be a powerful disincentive for dot com entrepreneurs thinking about setting up new businesses in those regions. 

True, start-ups won’t be within scope of the new regulations, but if they become successful they will be. Maybe that’s deliberate on the part of Westminster, although it looks like an unintended consequence. You’d think the SNP and Sinn Fein would have something to say about it.

Even setting that argument aside, is the Harmful Communications Offence really so much better than the laws it replaces? The ministers and officials may be right that fewer people will be prosecuted under this new law in, say, the first year of it being applied. But for how long after that? Defining “harm” as “psychological harm amounting to at least serious distress” sounds suspiciously subjective. How are courts going to determine whether online content meets this threshold?

There’s no clinical definition of “serious distress” in the Bill. If a Muslim woman stands before a judge and says she was caused “serious distress” by a Douglas Murray tweet, are we confident the judge won’t believe her? What about the mantra that we should always “believe victims”? I fear that this new offence may be a Trojan horse that will enable activist groups to smuggle entirely subjective tests of “harm” into the criminal justice system.

Nadine Dorries’s second point — that there’s nothing in the Bill that will compel Category 1 providers to remove “legal but harmful” content — is a sleight of hand. The Bill requires the Secretary of State to bring forward secondary legislation in the form of a statutory instrument that will identify “priority” harms that providers will be under a particular obligation to protect us all from. It will be in this secondary legislation, not the Bill, that the “legal but harmful” content will be identified.

Moreover, while the Bill does offer providers a liberal get-out when it comes to harmful content — they will have the option of “recommending or promoting the content” — it is so obviously undesirable as to be a dead letter. What provider would actively recommend content deemed harmful?

So the Secretary of State is right that the Bill itself won’t specify the “legal but harmful” subject matter that social media companies and search engines will have to remove, but it will in effect create a general obligation on them to remove that content once she has identified it in a statutory instrument.

In the DCMS press release accompanying the Bill, much is made of the fact that the secondary legislation will have to be “approved by Parliament”. But one of the examples given in the press release of “legal but harmful” content that will be included in the statutory instrument is “harassment”.

This rang alarm bells with me. You can easily imagine Parliament approving this secondary legislation (it won’t be able to amend it because statutory instruments aren’t amendable), and activist groups then petitioning social media platforms to remove any content they find disagreeable on the grounds that it amounts to “harassment” of the victims they claim to be representing. If you don’t think that’s a concern, you are clearly unfamiliar with the work of the Muslim Council of Britain.

Incidentally, there’s a risk Dorries will create other opportunities for censorious activist groups in this statutory instrument, such as asking Category 1 providers to tackle “hate speech” and “misinformation”. That will open the floodgates. She has already told us that a Media Bill her Department is working on will prevent Netflix from streaming Jimmy Carr’s latest comedy special.

Even if we aren’t too concerned about Ms Dorries, who describes herself as a “free speech advocate”, what guarantee do we have that future secretaries of state at DCMS won’t apply much broader definitions of “legal but harmful” content when it’s their turn to bring forward statutory instruments? I’m not sure I trust Nadine Dorries to be mindful of free speech when drawing up her Index Librorum Prohibitorum, but I’m absolutely certain I don’t trust Dawn Butler or Chris Bryant.

Aha, says Dorries. That’s where the constraints on what Category 1 providers are entitled to remove come in. The clauses in the Bill that protect content of democratic importance and journalistic content, as well as the duty to “have regard” for free speech, cannot be overridden by secondary legislation.

But the difficulty with these protections is this: who defines what content they apply to? In the Bill as it stands, any “recognised news publisher” will qualify for the “journalistic content” protection, and a publisher won’t have to jump through too many hoops to become “recognised”. True, it will have to be subject to a “standards code” and have “policies and procedures for handling and resolving complaints”, but these don’t have to be rubber-stamped by a regulator. There will be no official register of “recognised publishers” maintained by Ofcom.

The Bill is a gold embossed invitation to woke activists to fire off a barrage of vexatious complaints

That’s all well and good, but my worry is that the Bill will create a regulatory apparatus that only needs the slightest tweak by a Labour government to effectively introduce state regulation of the press by the back door. What guarantee do we have that the Bill won’t be amended in future to limit the “journalistic content” clause to just those news publishers overseen by an Ofcom-approved “independent” regulator? The Bill also includes a definition of “content of democratic importance” — “content [that] is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom”.

That reference to “the United Kingdom” is ominous. Does it mean posts about other countries — the crisis in Ukraine, for instance — won’t be protected? And how do you define “political”? Will the “political” protection extend, for instance, to refusing to use the preferred gender pronouns of a trans person, or just cover narrow partisan debates? Facing swingeing fines for being too permissive, Category 1 providers will err on the side of caution and set their algorithms accordingly.

What about the duty to “have regard” to the importance of freedom of speech? M’learned friends assure me that in the hierarchical table of legal duties, “have regard” sits at the bottom of League Two, whereas the duty to protect people from “priority content that is harmful to adults” is at the top of the Premier League. Don’t expect Man City to lose to Scunthorpe if the two duties clash.

Finally, we come to the obligation the Bill will impose on providers to enforce their T&Cs “consistently”. That sounds promising, but it won’t prevent a big social media platform from introducing a censorious content moderation policy, provided it does so “consistently”.

Take YouTube’s “Covid-19 Misinformation Policy”. It defines “medical misinformation” as any content that “contradicts guidance from the WHO or local health authorities” when it comes to “treatment, prevention, diagnosis, transmission, social distancing or self-isolation guidelines and the existence of Covid-19”. 

In effect, YouTube will remove any videos that challenge the official narrative, regardless of how factual or well-evidenced they are. I know this because in 2020 YouTube removed a recording of a discussion between Michael Levitt and me, even though Levitt is a professor of structural biology at Stanford and a joint winner of the 2013 Nobel Prize for Chemistry.

So, yes, the Online Safety Bill is a censors’ charter — a gold embossed invitation to woke activists to fire off a barrage of vexatious complaints to cleanse the internet of “hate speech” and “misinformation”, i.e. anything they disagree with. What can we do about it? 

Our best hope is to persuade the government to ditch the worst parts of the Bill — the “legal but harmful” stuff, for instance — and try to improve the rest as it goes through the Parliamentary sausage machine (at the last count, the Free Speech Union had a list of nine amendments it was hoping to get through).

Will this strategy succeed? I fervently hope so, but I’m not that optimistic. I fear the government will realise its ambition of making Britain the safest place in the world to go online — and then reap the whirlwind.
