Good and evil on the new frontier

Our current ethical guidelines are hopelessly inadequate for a new era of unimaginable technological change

We’re entering an era of almost unfathomable, radical technological change. Elon Musk is getting monkeys to play Pong with just their brains. China is experimenting with genetically-enhanced “super-soldiers”. Neuroscientists in the States are developing “injectable nano-sensors” that can read our thoughts. Even the White House recently published a report discussing the “singularity” — the coming moment in time when “machines quickly race far ahead of humans in intelligence”.

Unless — as seems unlikely — all these technological developments end up coming to nothing, we’re quickly going to be confronted with a deluge of ethical predicaments. The twentieth century had the atomic bomb, the pill, and the internet. We’re about to get artificial intelligence, deepfakes, gene-editing, nanotechnology, bioweapons, brain-computer interfaces, and autonomous lethal drones — all at once.

Yet if I asked you who, exactly, is setting the moral rules for these new technologies — who, in the world, is deciding, say, the appropriate uses for brain-computer interfaces — would you be able to answer? Take, for example, “xenobots” — tiny, artificially-made cellular organisms, dubbed, by the scientists who created them last year, the “world’s first biological robots”. These critters were produced by cobbling together small clumps of skin and heart cells taken from frogs. Heart cells naturally expand and contract, while skin cells remain static. Depending on where the scientists placed the heart cells, then, the xenobot ended up paddling itself along on one of a handful of “pre-programmed” paths: in just the same way that clipping a motor to different parts of a dinghy makes it surge forward in a straight line or spin round in circles.

But then came the big surprise: these xenobots began, completely unexpectedly, reproducing autonomously in the lab, by a process called “kinematic replication” — something never before seen above the molecular level.

“While the prospect of self-replicating biotechnology could spark concern,” CNN’s reporter acknowledged, “the living machines were entirely contained in a lab and easily extinguished, as they are biodegradable and regulated by ethics experts.” Who were these “ethics experts”? Who, for that matter, was deciding whether Elon Musk could fiddle with monkeys’ brains, or whether drones could be equipped with face-recognition technology and bazookas?

The answers prove complex. Each technology presents its own particular set of ethical challenges, which often need to be addressed with a piecemeal patchwork of regulations: voluntary ethical codes, non-binding treaties, existing laws applied retrospectively, and fresh legislation at both national and supranational levels. 

All of these are informed by, among others, scientists, lobbyists, academic ethicists, government panels, “tech ethics” consultancies, and, of course, public opinion. Pinpointing where decisions are ultimately made, therefore, is often impossible. But from the evidence in front of us, something in this vast, creaking ethical ecosystem simply isn’t working. 

In December, states parties to the UN Convention on Certain Conventional Weapons failed to reach an agreement on banning “killer robots” — autonomous weapons that can pick targets and kill them without human intervention — due to opposition, according to Human Rights Watch, from Russia, the United States, India, and Israel. The first confirmed use of a “killer robot”, in Libya, was recorded in a UN report last March. The convention’s next review conference won’t be held for another five years.

Cases of “ethics dumping” — where scientists travel to legally permissive countries to carry out morally dubious procedures — are skyrocketing. In 2016 an American geneticist, John Zhang, helped to create a “three-parent” baby using a technique outlawed in the States. Zhang simply whizzed across the southern border, since, in his words, in Mexico, “there are no rules”. 

Where “rules” are put in place, they’re increasingly non-binding. Last year, the US National Security Commission on Artificial Intelligence recommended that the US commit never to let AI systems authorise the launch of nuclear weapons, and called on China and Russia to make similar statements, while admitting that none of it was legally enforceable. Similarly, the World Health Organisation has published rigorous “guidelines” for responsible gene-editing, but acknowledges that these are mere voluntary recommendations.

It’s becoming increasingly obvious that we’re ill-equipped for the technological changes coming our way. The question is what, if anything, we can do about it.

By the standards of most emerging technologies, the regulations for xenobot research proved to be fairly simple. Shortly after reading about the study, I wrote to Doug Blackiston, the project’s lead scientist at Tufts University in Boston, and asked him to run me through the ethical procedures involved.

The study, he explained, actually fell pretty squarely within the category of animal cell research, which is, in the States, already controlled fairly strictly at three levels: university, state, and federal. Experiments need to be signed off in advance by an Institutional Animal Care and Use Committee, made up of, among others, scientists, veterinarians, clergy, and lawyers. Laboratories are subject to random spot checks by federal officials. In this instance, the “ethics experts” turned out to be officials rigorously enforcing several layers of pre-existing law.

It was clear, talking to Blackiston, that his team took their ethical obligations seriously. But already a problem with the regulatory system was becoming apparent: it only seemed to ensure that researchers passed a few initial tests — that human participants consented, that animals didn’t suffer — with little or no regard to the long-term intentions of a project, leaving scientists free to judge for themselves what is worth pursuing. That is a noble ideal, but can it survive the coming era? Unfortunately, we might soon have no choice but to ask our “ethics experts” not just to apply existing laws retrospectively, but to work out, upfront, which whole areas of research are simply too dangerous to pursue.

For now, xenobots remain the work of one team, in one country, following one set of rules. But for other technologies, regulation already varies wildly from jurisdiction to jurisdiction. Take germline gene-editing — procedures where alterations made to somebody’s DNA can be inherited by their offspring. Long considered a “slippery slope” towards the creation of designer babies, germline gene-editing for the purposes of reproduction is banned in most countries (although many, including the UK, allow it for laboratory research).

In Ukraine, however, regulation is lax, and the country has become a hotspot for scientists creating “three-parent” babies, using a technique in which mitochondrial DNA — technically inheritable — is taken from a third party and transplanted into embryos.

The only dependable way you could outlaw germline gene-editing would be to enforce a blanket, global ban. Right now, that looks unlikely. Calls for a moratorium on research have been dismissed over concerns it would simply lead to a gene-editing “black market”. 

One approach, proposed by UNESCO, was to give the human genome protected status, and therefore to treat inheritable genetic alterations as violations of human rights. So far, though, international treaties aiming to “protect the endangered human” have all been non-binding.

Nonetheless, a “rights-based” approach does seem to be the current tool of choice for regulators. The White House is currently calling for public feedback on a draft AI Bill of Rights. The EU has proposed to ban public face-recognition technology on the grounds that it violates data protection and privacy rights.

The neurobiologist Rafael Yuste has been establishing “neurorights” for the proper use of neurotechnologies — including Elon Musk’s Neuralink. In 2017, Yuste and his team worked with the Chilean government to establish “cerebral integrity” — a basic right now signed into Chilean law via an amendment to its constitution. Yuste tells me he’s also been collaborating with the office of the Secretary General of the UN, who is, he says, keen to update human rights to include “frontier topics” — not just neurotechnologies, but the many other emerging technologies that will characterise the coming decade.

The obvious advantage of rights is that novel technologies can be swiftly tackled by quick amendments to existing legal frameworks. The EU, for instance, recently published a report outlining its plans to regulate deepfakes with current legislation, including data protection and “image rights”.

But there’s an awkward question: do rights actually work? After all, international support for human rights coexists with what the law professor Eric Posner calculates as more than 150 of the UN’s 193 member states engaging in torture.

Here, as well as in areas like women’s rights and slavery, he concludes, “human rights law has failed to accomplish its objectives.”

The problem, ultimately, is philosophical. The whole project of human rights is balanced precariously on a single, narrow claim: that all humans share some fundamental, inalienable moral essence. If that goes, the whole thing collapses. But even in the West, that’s a premise many people no longer take seriously. Where would you actually find somebody’s “inherent dignity”? On a little polyester tag tucked down their shirt?

On the contrary, surely the only plausible source for this shared moral value would be something like an immaterial soul — an uncomfortably religious idea in our increasingly materialist age. Human rights might live on, for now, as useful social conventions, but it’s hard to see how they can survive an era of genetically-enhanced soldiers and superintelligent cyborgs.

Which is what makes our situation so scary: we’re facing all of these ethical challenges at a moment of profound moral uncertainty. In the materialist present, nobody is really sure, any more, what words like “good” and “bad” even mean. Are they, as the philosopher A.J. Ayer put it, mere synonyms for “what I like” and “what I dislike”? Are they simply meaningless? Either way, we know of no alternative moral language, so we simply carry on trying to navigate our sci-fi-like future with words derived from religious traditions we’ve otherwise abandoned. As the Princeton bioethicist Allen Porter puts it, it’s the moral equivalent of Wile E. Coyote’s “failure to realise that he has run off a cliff”.

In the twentieth century, the hope was that some kind of professionalised ethics based on rational principles would replace the theological foundations of older medical ethics. But this simply failed to happen. As H. Tristram Engelhardt, an early bioethicist (albeit not one altogether popular among his peers), rather damningly wrote:

One must appreciate the enormity of the failure of the Enlightenment project of discovering a canonical content-full morality. This failure represents the collapse of the Western philosophical hope to ground the objectivity of morality. This failure bears against theories of justice and accounts of morality generally. It brings all secular bioethics into question.

Engelhardt concluded that the only bioethical principle one could salvage from Enlightenment thinking was the very narrow concept of informed consent — essentially, voluntary participation. Engelhardt himself thought this was a profoundly unsatisfactory outcome. But as a prediction of the direction professional bioethics would end up going in, he seems to have been spot on.

Indeed, it’s hard to think of any of the current attempts to regulate technology that don’t ultimately appeal to the principle of consent. The EU’s main gripe with face-recognition technology is that it collects people’s biometric data without their consent. The whole field of “neuroethics” revolves around the question of whether or not patients can continue to consent after having electrodes or microchips inserted into their brains. Even the central stipulation of the American Department of Defense’s regulatory code for biomedical research on soldiers is simply that “all participation is voluntary”.

But this only answers part of the problem. A soldier might consent to gene-editing treatment that gives him superhuman strength — but clearly that, by itself, doesn’t answer the question of whether creating biologically-enhanced “super-soldiers” is morally legitimate.

Rather depressingly, though, the primary critique of “consent” within the world of bioethics seems to come from utilitarians — who merely seek to replace consent with a different reductionist idea: the question of whether an action produces more harm than good. The grotesque implications of utilitarianism are well-rehearsed: take Peter Singer’s claim that killing disabled children can be morally justified if it results in a “net reduction” of suffering.

But even by its own terms, utilitarianism fails to provide an adequate moral framework for the future. A cold calculation of “harm done” just about works for tiny, discrete questions, like whether or not to steal food to feed a starving child (though even there, you might argue, the chain reaction of consequences makes it impossible to assess properly). 

But a question like “should we work towards being able to link the human brain to supercomputers?” simply crashes the utilitarian software. Some, like Julian Savulescu, think the solution is to enhance our rational faculties with drugs — to make us better at calculating more complex ethical problems. But this still doesn’t answer the fundamental question of where right and wrong actually come from — something utilitarians famously struggle to explain.

Nonetheless, utilitarianism, the pretence of ethics without actual foundations, is in the ascendant in academic bioethics departments — and having a growing influence, too, on governments. In the Ministry of Defence’s report on Human Augmentation last year, for instance, both Singer and Savulescu were among the tiny handful of thinkers cited in the rather throwaway section (four pages out of a total of 103) titled “Ethical Considerations”.

Calum MacKellar, Director of Research at the Scottish Council for Human Bioethics, tells me: “UK academic bioethics has become very utilitarian … Generally in the UK, if something is (1) possible and (2) useful, then it is considered ethical.” This leads, he says, to the rather ludicrous situation of the government prohibiting “procedures that are scientifically impossible”, and then swiftly legalising them as soon as the science permits (often even if they’re morally controversial).

We thus seem caught between two desperately inadequate ethics: consent on the one hand, which acts as a sort of hoop-jumping exercise, with no attention paid to future consequences; and utilitarianism, which only pays attention to the consequences, but frames them in the most two-dimensional of terms: harm and suffering. Surely we need a more holistic ethical vision — one that begins every scientific challenge with a simple question: “what is the ultimate good?”

Short of some return to a theological underpinning for our ethics, though, it’s hard to know where this would come from. The only consensus we seem capable of reaching currently is not reaching a consensus. The tech ethics consultancy Hattusia, for instance, writes on its site: “We believe in pluralism, and so we believe there are multiple ethical frameworks and models which could work in society”. If that’s as good as we can do, we should put all research on hold — immediately.

Could we, though? The very idea of turning our backs on Scientific Progress seems scandalous to us. Even banning something like deepfakes, which seem almost certain to cause far more harm than good, appears unfeasible. Science has become an end of its own. We must simply proceed at full speed, and deal with the fallout — however fatal — retrospectively.
