
The digital dirty war

Ukraine is partnering with Silicon Valley to digitally desecrate the dead

The propaganda war in Ukraine has entered a new and disturbing phase. It’s long been known that Russia has misled its citizens about the scale of casualties, and has in many cases failed to inform families of combat deaths. There’s no question that Russia’s invasion of Ukraine is a monstrous act of unprovoked aggression, and that lying to families of Russian soldiers is a terrible breach of trust and a wicked betrayal of its own military.

But what Ukraine has done in response deserves to be called what it is: a war crime. Ukraine’s “IT army” (a volunteer group of hackers and activists) has partnered with the US tech company Clearview AI, which has provided free use of its sophisticated facial recognition software. The group is using this software to discover the identities of dead Russian troops and to send images of the corpses to the soldiers’ families.

We know this because those involved are open about their tactics; indeed, the “IT army” has published its online conversations with Russian parents.

One Russian mother was sent a message informing her that her son was dead, accompanied by a photo of his body in the dirt, face grimacing and mouth agape.

“Why are you doing this?” she replied. “Do you want me to die? I already don’t live. You must be enjoying this.”

This is a grotesque new frontier in the history of warfare, so why exactly are so many lining up to justify it? Why, more to the point, is a major US tech firm facilitating what may well be a war crime, and why is nobody in authority doing anything to stop it?

The Geneva Conventions have long enshrined the honourable treatment of the dead, making detailed provision for how bodies are to be treated and identified, and for how information on war dead is to be shared. Mutilation of corpses is a well-understood violation of international law, as is the intimidation of civilian populations. But these legal checks on interstate violence date to the World Wars and to twentieth-century-style conflicts.

Taking a photograph of a dead enemy combatant and messaging it to their relatives seems to satisfy several definitions of a war crime, but the use of technology has rendered the situation ambiguous.


Part of the problem is the blood-chilling capacity of technology to remove us from the effects of our actions. This disconnect gives us the illusion of moral innocence, insulating us from responsibility for harm done at a far remove. If Allied troops had gone into German towns, doused civilians in kerosene and set light to them, it would have been a source of horror and shame; yet that is exactly what the Allies did, only on a much larger scale and, crucially, from the air. Violence at a remove is no longer seen as violence, or it is at least not seen as barbarism.

Hence My Lai, with its several hundred dead, lives on as a mark of undying shame, but the tens of thousands blown apart by US bombs in that same conflict go forgotten. Today the removal of human agency has reached new heights — death is delivered by drone and directed by committee.

There is no real moral distinction between what the Ukrainian “IT army” is doing online and the mediaeval tactic of catapulting the severed heads of enemy combatants over the castle walls. But because this process can be made indirect, remote and hygienic, we lose our capacity to experience it as horrifying.

As mechanised death escapes moral responsibility and human control, blurring the lines between battlefield and home front, it has also started to abolish the distinction between peace and war. Speech and thought are subtly weaponised. Social media, and the powerful data collection tools that go with it, allow our words to be monitored, monetised and militarised. Without the normal mediation of embodied interaction, ethical and social constraints around speech slip away, and opportunistic state and market actors can seize the reins instead.

This is where Clearview AI steps in. Clearview is a growing and controversial company, one fanatically convinced of the Silicon Valley creed of radical openness, relentless progress and economic disruption. The company is focused on using AI combined with data harvested from social media to produce incredibly powerful facial recognition software that can be used by everyone from banks to law enforcement to locate a person, and much of their personal information, from a single photograph or frame of video.

Clearview, founded by Australian tech entrepreneur Hoan Ton-That, operated behind the scenes for years, only coming to widespread attention (and considerable controversy) following a 2020 exposé in The New York Times. It’s heavily linked with the Manhattan Institute, a right-wing free-market think tank, and is funded in part by Peter Thiel, while co-founder Richard Schwartz is a former aide to Trump ally Rudolph Giuliani.

Is its new venture in Ukraine an attempt at PR? Or even an idealistic gesture by a company that wants to help a country struggling to resist invasion? Both may well be the case, but a far more worrying agenda could be at play, and the implications are terrifying.

Until now, Clearview’s controversial technology has been used by law enforcement and private firms, but the venture in Ukraine represents its first public use as a weapon of war, with the advanced software given a paramilitary application as an instrument of psychological warfare. Is Clearview testing the military potential of its technology in a real-world conflict under the umbrella of a humanitarian gesture to Ukraine?

Many military and intelligence experts in America have already noted that the terror tactic may be counterproductive as well as immoral, with the story of cruel and heartless Ukrainians serving to bolster Russian determination and to lend plausibility to claims of anti-Russian genocide in eastern Ukraine.


The military logic is highly questionable and the emotional logic for the Ukrainians is obvious, but where does that leave Clearview? With commercial logic. Experts may doubt the tactic’s efficacy and deplore its morality, but the power of the software, deployed in a war context, will not have gone unnoticed amongst the world’s military and intelligence services. Even if Clearview isn’t the one to do it, the potential of its tech, and of software like it, will eventually be harnessed for espionage and combat.

Facial recognition technology is a crucial piece in the puzzle of automated warfare, allowing autonomous weapons systems to identify enemy combatants, or even to assassinate individuals, without the intervention of a human controller.

Even if killer robots don’t take to the battlefield, it’s the war of words and ideas that may prove the most sinister arena of conflict. Already in America, big tech has been a crucial accelerant of the politicisation and polarisation of every aspect of ordinary life. Marriages and families have been torn apart by political differences that would once have been occasional flare-ups but are now virtual trench warfare fought over Facebook, WhatsApp and Twitter.

Elections are increasingly decided online, by the efficiency of the tools used to harness personal data and “micro-target” demographics with personally tailored propaganda. All these tools have obvious uses in interstate conflict and rivalry, as we are starting to see.

Though these technologies were always going to have to be confronted, they are rendered especially lethal because they are being unlocked and exploited by the tech giants of Silicon Valley, who combine a ruthless drive for profit with a utopian, quasi-religious ideology.

It’s easy to imagine that, quite apart from welcoming a chance to show off their technology in a military context, the people who run Clearview may well sincerely believe in the moral case for the use to which it is being put.

In the tech-utopian worldview, sharing information is always good: a perspective that goes back to Enlightenment rationalism, in which all problems are just a matter of faulty data. Even the wickedness that lives in men’s hearts can be explained by logical flaws in their thinking, attributable to evolutionary psychology. You’re not evil, just maladapted.

If this is the way you see reality, sending pictures of dead bodies is no longer a grotesque attempt to humiliate and frighten civilians — you’re just making sure Russian mums have good data points.

Mass communication has gradually shredded the mediating layers of distance, culture and language that once distinguished us, but the result has not been, and will not be, the secular humanist dream of a single planet united in harmony. Rather, the result has been to massively empower state and market whilst weakening civil society, religion and the family.

Whether merely naïve or intentionally dangerous, the tech barons’ revolt against established norms and structures will end up empowering authoritarian regimes like Russia and China to further violate the privacy of their citizens. It will introduce insidious new forms of warfare and further divide and weaken the West.
