The criminalisation of private speech

Parliament and the courts are engaged in a creeping campaign of mass surveillance and censorship

Six former police officers who served in the Met’s Parliamentary and Diplomatic Protection Command have been charged with sending grossly offensive and racist messages by public communication, and will appear at Westminster Magistrates’ Court next month.

The officers, who retired between 2001 and 2015, have been charged with offences under s.127(1)(a) of the Communications Act 2003. It is alleged that all six were members of the same WhatsApp group, and that the messages that led to the criminal charges were sent and received between August 2018 and September 2022, after their service with the Met had ended.

The charges follow a BBC Newsnight investigation late last year into dozens of messages shared within the chat, which a member of the group had handed to the programme’s producers. Subsequent media coverage of the case led, in turn, to an investigation by the Met’s Directorate of Professional Standards.

Carte blanche to judges to criminalise anything they think is unpleasant or hurtful

Although the BBC decided not to reproduce the messages because some of them contained “strong racial slurs”, Newsnight reported that the Duke and Duchess of Sussex featured in several images alongside racist remarks. Some of the posts also referenced the Government sending migrants to Rwanda, while others joked about recent flooding in Pakistan, which left almost 1,700 people dead.

There’s no question that messages of this kind would make for grim, deeply unpleasant reading – indeed, had the members of this group still been serving police officers, you could certainly have made a good case for the Met suspending or dismissing them for breach of contract. But the key point here is that the intended recipients of these messages were adult members of a private WhatsApp group.

What’s so troubling about the CPS’s decision to charge these six former police officers under s.127 is that it provides yet more evidence of a form of legislative “mission creep”, with the state now looking to use the Communications Act 2003 to police not just public but also private interactions.

As things stand, s.127 makes it a crime punishable by up to six months in prison to post anything “grossly offensive” on a “public electronic communications network”. Specifically, s.127(1)(a) reads as follows:

A person is guilty of an offence if he—

  (a) sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character.

Before we even start to consider the current (mis)application of this provision to private conversations, it’s worth pointing out that there are several things wrong with it as a means to police public communication.

First, any idea of what constitutes the “grossly offensive” is inherently a matter of opinion – a situation which has essentially granted judges carte blanche to criminalise anything they think is unpleasant or hurtful.

Worse still, s.127 is regarded as largely compliant with Article 10 of the European Convention on Human Rights, because statements made by means of a public electronic communications network currently need to have artistic or political meaning to receive its protection. Inevitably, that means that in most cases any appeal to human rights law is closed off (although that’s something the Free Speech Union (FSU) hopes to challenge in one of its impending cases).

Finally, there’s the fact that the legislation presupposes, but never articulates, a concept crucial to any sophisticated account of communication: the “recipient” – and, more specifically, the “intended recipient”.

These problems notwithstanding, in recent years police and prosecutors have jumped at any opportunity to enforce this law whenever someone complains of feeling hurt by what they have seen online or on social media. People brought to heel in this manner include the Scottish comedian Count Dankula (convicted under s.127 for filming a pet dog trained to give Nazi salutes), Kate Scottow (fined for being rude to a trans activist on social media), Caroline Farrow (threatened with a criminal record for misgendering a trans activist) and Joe Kelly (whom the FSU is currently supporting in his appeal against a conviction for a social media post in which he rejoiced at the death of Captain Sir Tom Moore).

And yet, however shocking we may find these cases, there is at least a certain logic to the application of the law.

In each case, we are dealing with an utterance sent by means of a “public electronic communications network” that had as its intended recipient “the public”.

That’s in marked contrast to the case of the six Met police officers, where the messages in question were never intended to be seen, heard or read by the public. 

What that means is that when this case reaches Westminster Magistrates’ Court next month, a judge will essentially be ruling on the entirely hypothetical question of whether certain messages might be considered “grossly offensive” to a public that never actually encountered them, and was never in any danger of encountering them. The fact that in the preceding sentence you can replace the word “messages” with “thoughts” without any loss of meaning gives an indication of just how troubling this application of the law really is.

Pursue that thought to its logical conclusion, and we really will be living in an Orwellian society

Another way of looking at it is this: think of “communication” as a phenomenon that exists on a spectrum, with “personal thoughts” of the kind one might scribble in a private diary at one end, and “public speeches” delivered to large audiences at the other. The type of communication these six former Met police officers indulged in sits towards the “personal thoughts” end of that spectrum. To say that it falls within the scope of s.127 is to push the boundaries of the law closer to thought policing than anyone who believes in the basic tenets of liberal democracy should feel comfortable with.

We got a taste of what the policing of private communications looks like last year, when Paul Bussetti was handed a 10-week suspended sentence for filming a burning cardboard mock-up of Grenfell Tower and sending the video to a private WhatsApp group of friends (Evening Standard). Two people have since been given prison sentences for sharing an offensive video in private groups on Snapchat (Mirror).

Back in February, two Met police officers were also convicted under s.127 and sentenced to three months behind bars after being found guilty of sharing racist, homophobic and misogynistic WhatsApp messages. Remarkably, the sentencing judge in that case treated the fact that the officers conversed only in private not as a mitigation but as an aggravation: in being covert, the judge claimed, their comments were even more damaging than if they had been made in public. Pursue that thought to its logical conclusion, and we really will be living in an Orwellian society.

Do the powers-that-be not see any problem in this mission creep – that is, in the intrusion of the law into people’s private electronic communications?

It’s an interesting question, not least because of provisions included in the Online Safety Bill that allow for the identification and removal of online child sexual exploitation and abuse (CSEA) material.

Earlier this year, Signal joined WhatsApp in threatening to leave the UK if this controversial and much-delayed legislation forces it to break users’ privacy. The legislation paves the way for ‘client-side scanning’ (CSS), an algorithmic process in which an end-to-end encrypted messaging app automatically scans every private chat, message, text and image on the ‘client’s’ phone before encrypting it, looking for suspicious content that could then automatically be reported to the police.
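To make the mechanics concrete, here is a deliberately simplified sketch, in Python, of what a client-side scanner of this kind might look like. Everything in it – the hash database, the exact-match fingerprinting, the reporting stub – is an illustrative assumption rather than a description of any real app’s internals; actual CSS proposals use perceptual hashes that also match resized or re-compressed copies of an image.

```python
import hashlib

# Hypothetical database of "fingerprints" of known prohibited images,
# of the kind a provider might push to every client device.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-prohibited-image-bytes").hexdigest(),
}

def fingerprint(attachment: bytes) -> str:
    # Toy fingerprint: an exact SHA-256 hash. Real CSS schemes use
    # perceptual hashing, which survives resizing and re-compression.
    return hashlib.sha256(attachment).hexdigest()

def client_side_scan(attachment: bytes) -> bool:
    # Runs on the sender's own device, before any encryption happens.
    return fingerprint(attachment) in KNOWN_BAD_HASHES

def send_message(attachment: bytes) -> str:
    # The crucial point: the plaintext is inspected *before* the
    # end-to-end encryption layer ever sees it.
    if client_side_scan(attachment):
        return "flagged: withheld and reported"  # hypothetical reporting hook
    return "encrypted and sent"                  # stand-in for the E2E pipeline

print(send_message(b"holiday photo"))                 # encrypted and sent
print(send_message(b"known-prohibited-image-bytes"))  # flagged: withheld and reported
```

The point to hold on to is that the scan happens on the device itself, on the plaintext: the encryption that follows is untouched, which is precisely why providers could still claim messages remain “encrypted” even while their contents are being inspected.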

At issue is a clause in the Online Safety Bill that requires tech firms to use their “best endeavours” to deploy new technology to identify and remove CSEA material if existing technology isn’t suitable for that purpose on their respective platforms.

The technology could be subject to “scope creep” once it’s installed on phones and computers

The Bill already contained a proposal to give Ofcom – the regulator tasked with enforcing the new regulations – the power to require deployment of existing “accredited technology” for that purpose. Under the revised version of the legislation, however, Clause 110 enables Ofcom to demand that tech firms deploy or develop new technology to help find abuse material and stop its spread.

What’s worrying WhatsApp and Signal is that Ofcom will now be able to require online service providers to carry out the (relatively) new technique of CSS.

The fundamental problem with CSS for companies like Signal and WhatsApp is that it undermines their unique selling point: secure, end-to-end encryption, under which only the sender and the receiver of a message can read its contents.
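For illustration, here is a minimal sketch of that guarantee using the PyNaCl library. It is a toy, not the Signal or WhatsApp protocol (both use far more elaborate schemes, with keys that “ratchet” forward over time), but the basic property is the same: keys live only on the two endpoints, so whoever relays the ciphertext learns nothing about its contents.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device;
# the private keys never leave those devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The relaying server sees only this ciphertext, which it cannot decrypt.
# Only Bob, holding his private key, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Client-side scanning doesn’t break this mathematics; it simply steps in front of it, reading the message before the encryption ever happens.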

In fact, Clause 110 also raises important questions about the future of online free speech and freedom of expression in the UK.

As a recent open letter by academics on the dangers of eroding encryption reminds us, when we talk about ‘client-side scanning’, what we’re essentially dealing with is a “police officer in your pocket”. It’s true that at the moment the idea is to give that officer strict instructions to focus on the noble aim of ensuring bad actors can’t share child sexual abuse material.

But as critics point out, the technology could be subject to “scope creep” once it’s installed on phones and computers, ending up being used to search for far more than illegal content of this kind.

That’s obviously a worrying possibility, not least because, as we’ve seen, the idea that s.127 of the Communications Act 2003 constitutes a useful tool with which to police new forms of private online communication between consenting individuals is quickly becoming institutionally normalised.

Is it too far-fetched to wonder whether there’s a danger of CSS one day finding its way into an amended version of the Communications Act, or any superseding legislation?

What we know for sure is that, technologically speaking, client-side scanning is an extremely nimble, malleable form of surveillance. The same could be said of any algorithm-based system currently defined by the Online Safety Bill as “accredited technology”.  

From the perceptual hashing and hash-matching systems that drive CSS (in which a “fingerprint” of an image is compared against a database of known harmful images), through ‘keyword filtering’ (in which words that indicate potentially harmful content are used to flag messages), to cutting-edge natural language processing techniques like ‘sentiment analysis’ (essentially logical positivism on steroids, in which messages are scanned for the expression of ‘positive’ or ‘negative’ opinions, as well as specific feelings and emotions), in each and every case the algorithms scaffolding digital content filtering can easily be re-targeted to flag quite specific types of content.

In other words, any shift in emphasis – any change in the ‘police officer in your pocket’s’ orders, as it were – would be a purely technical matter.
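A toy example makes the point. Everything below is assumed for illustration – the terms, the function names – but notice that nothing in the scanning machinery itself encodes what counts as “harmful”: redirecting the filter’s mission is simply a matter of handing it a different list.

```python
def build_filter(flagged_terms: set[str]):
    # Returns a scanner for whatever category the term list happens to encode.
    def scan(message: str) -> bool:
        words = {w.strip(".,!?").lower() for w in message.split()}
        return bool(words & flagged_terms)
    return scan

# Today's orders: a (purely illustrative) list aimed at one kind of content...
scan_today = build_filter({"examplebadterm"})

# ...and tomorrow's: the same machinery, pointed at something else entirely.
scan_tomorrow = build_filter({"protest", "strike"})

print(scan_today("See you at the protest!"))     # False
print(scan_tomorrow("See you at the protest!"))  # True
```

The code that does the scanning never changes; only its orders do.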
