
Social media giants are platforms, not publishers

Conservatives who dislike Twitter’s politics shouldn’t make things worse


‘I don’t care about Hitler, I care about black people,’ grime artist Wiley tweeted this week – oddly unaware of Nazi hatred for black people. It was just one part of a series of explicitly antisemitic tirades in which Wiley compared Jews to snakes and the KKK, and incited violence against them.

Wiley’s manager quickly cut ties and the police are investigating. But the biggest controversy was on Twitter. The social media company has been widely criticised, from Labour MP Jess Phillips to Home Secretary Priti Patel, for being too slow to respond to Wiley’s tweets: it initially only suspended him and left his past tweets visible. In response, many prominent figures joined a 48-hour boycott of Twitter under the hashtag #NoSafeSpaceForJewHate. Twitter, along with Facebook and Instagram, has now banned Wiley.


Despite the abhorrent nature of Wiley’s bigotry, and my very personal interest in opposing antisemitism, I did not join in. Boycotting Twitter over Wiley rests on two problematic premises: (1) that censorship is how we fight bigotry, and (2) that platforms like Twitter should be liable for user-generated content. The boycotters are right to point out Twitter’s inconsistent application of its policies, and how hatred against Jews is often downplayed in public life, but we should not celebrate ever-harsher content moderation.

If we are to fight hatred, we need to protect free expression as fiercely as we condemn prejudice. This is because we cannot combat what we cannot see. Wiley is, perversely, doing a public service by showing how antisemitism can still manifest itself today. We should welcome every opportunity to counter his bigotry, not censor him and create a martyr for conspiracy theorists.

Wiley performs at O2 Academy Brixton hours after receiving his MBE from the Duke of Cambridge at the 2018 New Year Investitures ceremony (Photo by Ollie Millington/Redferns)

Additionally, boycotting Twitter over its failure to remove Wiley feeds the idea that not just the user but also the platform is responsible for speech. That would be disastrous for freedom of expression and would entrench the power of Big Tech.

This debate has been raging in the United States. Republican senators Ted Cruz and Josh Hawley argue that because social media companies are failing to protect free speech, the ‘safe harbour’ provision in Section 230 of the Communications Decency Act should be withdrawn. Section 230 provides a liability exemption to the likes of internet service providers and social media companies for user-generated content. In Britain, the same principle is currently contained in Article 14 of the European Union’s e-Commerce Directive.

President Donald Trump has backed withdrawing Section 230 – in an executive order in May and, in recent days, a direction to the Federal Communications Commission (FCC) – because of what he sees as the bias of the social media giants. Trump is not alone. His opponent, former Vice President Joe Biden, has called for the section to be “revoked, immediately”, though for different reasons: Biden is concerned that Facebook is “propagating falsehoods”. House Speaker Nancy Pelosi has also opposed Section 230 protections because of the companies’ failure to address issues like hate speech and misinformation. There have been similar calls in the UK from those concerned about harmful content.

So Biden and other progressive politicians think withdrawing liability protections would lead to the removal of content they dislike, while Trump and his allies believe withdrawing the provision would protect free speech. They cannot both be right – and they are probably both wrong.

These exemptions rest on the same principle that makes an author and publisher liable for the content of a book, but not the bookstore that sells it. If the likes of Waterstones could be sued over defamatory books, they would limit their offering to books they had inspected. In practice, to avoid the risk of litigation, they would remove significant numbers of (even legal) books.

Section 230 and Article 14 simply apply this idea to online speech: a platform like Twitter or Facebook is not liable for content posted by users unless it is made aware of any illegality (hence Twitter removing Wiley’s tweet saying that Jews should “hold some corn” – slang for being shot – once users reported it). This has nothing to do with whether platforms are politically neutral or protect free speech. In fact, Section 230 was specifically designed to allow for content moderation without making websites liable for what their users post.

This came after a court found Prodigy’s message board service liable for defamatory content precisely because of its good-faith attempts to moderate – while an earlier case concluded that CompuServe, another message board service, was not liable because it did not moderate. Lawmakers wanted hosts of user-generated content to create guidelines and moderate speech, and to ensure they would not abstain from actively removing unlawful content for fear of becoming liable.

Section 230 was not a “subsidy” to these companies, as Cruz and Hawley assert; it simply applies a basic legal principle about liability: the person who wrote and published the content is responsible, not the platform that hosts it. Nor, as Cruz and Hawley claim, does moderating content make platforms de facto publishers. Quite the opposite: safe harbour provisions exist precisely so that platforms can moderate user-generated content – albeit something they should be doing less of, not more – without becoming the publisher of everything people tweet.

These protections are extremely important: practically all the key online services we rely on today would not exist without this principle. It is difficult to imagine a few college students developing Google if they could have been sued over the content of search results. The same goes for the social media companies, which came from humble beginnings and would have been bankrupted early on by legal fees if they had had to defend every potentially unlawful post by their users. The likes of the video conferencing service Zoom would also struggle if they could be sued for what users say during webinars and meetings.

Removing this protection today would not turn the incumbent ‘Big Tech’ firms into bastions of free speech. Quite the opposite: it would force platforms to remove swathes of content to avoid legal liability – like the bookstore limiting its offering to inspected books. This is what happened after Germany introduced the NetzDG law, which requires platforms to remove “hate speech” within 24 hours or face fines of up to €50 million. The result was the removal of many legitimate expressions of opinion, including satirical posts making fun of bigotry.


Additionally, making social media companies liable would worsen online competition. The likes of Google and Facebook grew without having to worry about liability. They can now afford the huge cost of building automated content moderation for NetzDG and other EU laws and guidelines, and even the occasional legal fees. Liability would be a huge boon for lawyers, at the cost of innovation and research, but the giants would survive. Smaller platforms and start-ups would be strangled by red tape and legal fees. So, rather than “hurt” Big Tech, withdrawing liability exemptions risks entrenching its dominance for good. Not an ideal outcome if your starting point was that these companies are censorious monopolists.

In the UK, calls to increase platform liability come largely from advocates of the proposed Online Harms regime, which would create an expansive “duty of care” requiring social media companies to remove both unlawful and, extraordinarily, “legal but harmful” content. If it goes ahead as expected, Ofcom will become our new speech overlord: free to decide what is ‘good’ or ‘bad’ speech.

The free internet as we know it is disintegrating before our eyes. It is always tempting to respond to falsehoods and bigotry with demands for censorship. But living in a free society means accepting that even people you dislike have a right to express their opinions – and the best way to fight bad ideas is with better ideas. The solution is not to punish social media companies over your preferred grievance.
