Photo by Leon Neal/Getty Images

The shadow side of open source AI

Stable Diffusion, Emad Mostaque and the complex moral implications of AI


The history of open source is entwined with the history of the internet itself, with the concept championed by individuals, tech start-ups and anyone with libertarian leanings. It brings to mind freedom, community and optimism. Big tech companies that don’t release their source code are seen as the bad guys, dollar signs in their eyes, fighting progress.

What happens when open source meets AI?

Only good things, Emad Mostaque, CEO of the UK company Stability AI, will tell you if you invite him onto your podcast. In fact, openness in AI is not simply good but essential, Mostaque regularly opines, waving his hands around. Some other companies, such as Meta, take a similar position — see Nick Clegg’s recent paean to the subject in the Financial Times — but Mostaque is open source’s greatest advocate amongst AI tech leaders. Without openness, he claims, the positive promise of AI will remain unfulfilled, and the dangers can only be mitigated through complete transparency.

My research reveals a far more ambiguous story, one of hypocrisy and half-truths.

Paedophiles were early adopters of generative artificial intelligence. In March 2023, browsing the public gallery of Midjourney, an AI image-generation program, I encountered large volumes of disturbing sexual content featuring children. Hundreds of these images were being generated and released every minute via the supposedly child-friendly platform Discord. Often, clothed images of real children were being uploaded and used as image prompts. Paired with certain text-based prompts, these resulted in sexualised versions of the original images. I worked with The Times to report on these findings.

Midjourney’s images pale in severity, however, compared with the content that can be created with Stable Diffusion, the leading AI image-generation tool released by Stability AI.

Unlike Midjourney, Stable Diffusion is open source: the source code is public and can be downloaded by anyone. Consequently, the default safety guardrails can be easily deleted, and the program can be modified and re-trained on new image datasets offline. This is not the case with other mainstream AI image generation tools, making Stable Diffusion the program of choice for those looking to generate artificial child sexual abuse material, or AI-CSAM (the term “child pornography” is no longer used by experts). David Thiel, Chief Technologist at the Stanford Internet Observatory, recently told Rolling Stone that everybody generating AI-CSAM “is effectively using something derived from Stable Diffusion 1.5” because of its open source nature.

This supports my own findings. In June I worked with the BBC to expose a commercial trade in AI-CSAM operating on the surface web (the portion of the internet indexable by search engines), via social media pages and creator subscription sites such as Patreon. The most frequent tag on the images was “Stable Diffusion”.

Evidence suggests that Stability AI’s decision to publish the source code of Stable Diffusion has directly led to a proliferation of AI-generated child sexual abuse material. The specific harms of such content include but are not limited to: the emergence of online hotspots on the surface web where paedophiles connect and share links to non-AI child sexual abuse material, gateway risks for the abuse of children offline, and grooming risks. This is all in addition to the complications for law enforcement attempting to identify real children in need of protection. A paper on AI-CSAM co-authored by Thiel focused almost exclusively on Stable Diffusion, observing, “The open-source model of Stable Diffusion was unfortunately released without due care for the safety of the public.”

How was that decision made, by whom and on what grounds? Back in August 2022, software developer Bakz T. Future was battling it out with Emad Mostaque on a Discord chat. Mostaque had recently announced his decision to release the source code for Stable Diffusion, and Bakz was trying to delay the release on ethical grounds. When Bakz made his concerns public, he received a torrent of abuse from the community that had formed around Stable Diffusion. Bakz’s “Statement on Stable Diffusion” was written, in his words, “to share my position, my side of the story, and clear up any misconceptions”. The statement and the accompanying “Timeline of Events” tell the story of a lone Cassandra, mocked and derided by his peers and dismissed by Mostaque.

Bakz had numerous reasons to be worried. His statement describes the early days of Stable Diffusion’s Discord server as “rampant with offensive, disturbing, and/or horrific content”, such as photos of violently beaten women. “Obsessive” images of Emma Watson were being generated. Given the heavy moderation required to control this, Bakz couldn’t understand why Stability AI was “releasing this model right away onto the whole world’s front yard”. Published screenshots reveal Mostaque flitting between ethical positions over the course of the debate.

Early on in the discussion, Mostaque states, “I am very much in the personal responsibility and law is there for a reason camp” — implying that what people do with the model is unrelated to his own decision to release it.

When Bakz refuses to accept this argument, writing of his own concern that “we will be dealing with the aftermath of this model in particular for years to come”, Mostaque simply responds, “yes but the net benefit will be 99 per cent positive.” When asked how he could make such a calculation, Mostaque says:

it’s because I believe people are inherently good whereas AI ethicists believe people are inherently bad. I also say this as a qualified religious scholar who helped set up the university for cross cultural ethics at the Vatican.

Focusing on the “net benefit” assertion, Bakz poses a specific scenario. “So if a celebrity gets depicted negatively and in a NSFW context stable diffusion and kills themselves over body image issues? Does your math still check out? It’s worth it to release a model prematurely?” Mostaque immediately replies: “well how utilitarian do you want to get”.

This jarring remark — was that a “yes”? — is quickly followed up with arguments in support of the model release. These include “we are putting this in mental health settings and have saved lives already” and “we have the remit to revamp the entire education system in Malawi and other countries”. (Claims like this are typical of Mostaque, whose alleged “history of exaggeration” was the subject of an investigation by Forbes, his response to which is here.)

On the subject of AI-CSAM, Bakz writes in his statement:

By my understanding, the model is capable of generating illegal child abuse images as well. Is this also encouraged by their organization under the NSFW label? At times, I see that Emad is discouraging this publicly on their discord and the licensing terms do not allow for illegal content, however, by releasing the model his actions say a lot too, so I just don’t know what to think.

It is indeed hard to know what to think, when faced with those screenshots. Mostaque’s basic moral arguments in support of the imminent open source release might be summarised as follows:

  1. People are inherently good.
  2. But if they are bad, my decision to release the model will still be worth it.
  3. Anyway, what people do with the model is not my responsibility.

This is childish and incoherent logic. Taking a utilitarian ethical approach to the open source decision required Mostaque — by definition — to assume responsibility for both the positive and negative uses of Stable Diffusion’s code. That responsibility cannot then be discarded, in the next breath or at a later date.

What did it matter that Mostaque’s position was nonsensical, though? Despite Bakz’s best efforts, the model weights were released on schedule — after being leaked on 4chan, that is. A month later, Mostaque said on a podcast that the moral approach of “AI ethicists” on the Stable Diffusion Discord server was “similar in a way to the logic — and I’m not comparing this directly — that means that Islamic scholars in Saudi Arabia say that women can’t drive because they’ll do bad things”.

Fast forward to today. Research has demonstrated an explicit connection between Stable Diffusion and AI-CSAM, but there have been no consequences for Mostaque. Why would there be? The open source decision gave Stability AI the power to shed legal and moral responsibility for harmful uses of the technology. When asked to comment on unfortunate investigations like my own, Stability AI simply highlights the guardrails in its own version and declares its support for law enforcement against those who misuse the technology: no distasteful utilitarian arguments required.

When Mostaque is asked about mitigating the harms of AI in a general sense, “open source” is his stock response. His Twitter timeline is full of specious statements like “open is safer and more resilient and we should have transparency into these powerful models that impact our lives”, along with memes in which the superior character in the image is, of course, tagged “open source”.

All of these present “open source” as standing in direct opposition to closed or proprietary code — this binary is essential to Mostaque’s narrative.

It is not simply a question of “open” or “closed”, however. This is a false dichotomy.

Firstly, a company can release software immediately, or it can run ethical tests for months or years. There is a question of when to release a product openly, which the binary understanding ignores. Bakz T. Future was caricatured as an “ethics troll” based on a proposal to delay the open source release. It was considered irrelevant that he hosts a podcast on the subject of AI and creativity, or that he is a long-term supporter of the open source movement. To his critics, anything short of immediate, full transparency was sacrilege. Meanwhile, Google’s text-to-image diffusion model Imagen, announced in May 2022, is still inaccessible to most people due to concerns over misuse, and public source code at any point feels very unlikely. Not everyone at Google is happy — see this leaked internal memo — but could it be that, in the case of Imagen at least, the researchers’ ethical worries were genuine? (Like most major AI image generators, Imagen was trained at least in part using an open source dataset released by the non-profit organisation LAION — yet another demonstration that open and closed are not clearly defined categories.)

Secondly, “open source” can mean different things. Its traditional definition encompasses far more than just publicly available code: free redistribution, without licensing restrictions, has historically been central to the concept. With AI products, though, the term “open source” has begun to be used by some — like Meta — even when the traditional requirements have not been met. As a result, companies releasing the technology can benefit from the positive associations of “open source” whilst failing to adhere to its original definition. Licence restrictions do not stop criminals from using the product, but they may, for example, prevent a tech start-up that wishes to build on the code from doing so. The vagueness around the term means a big tech company can gain ethical brownie points, or street cred, for being “transparent” whilst still retaining a financial advantage. The research paper “Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI” examines this dynamic, arguing that in many AI cases “open source” is primarily rhetoric: “Companies and institutions can and have leveraged ‘open’ technologies to entrench and expand centralized power.”

Openness is multifaceted rather than binary, as further evidenced by Stanford University’s Foundation Model Transparency Index, which assessed ten major foundation models, including Stable Diffusion, against a range of transparency indicators. Stable Diffusion received 100 per cent in the category of “Model Access” but scored 14 per cent in the category of “Risks”. There is no room in Mostaque’s memes for such nuance.

A further question is where source code is made public. Both Stable Diffusion’s first release and Meta’s Llama were leaked on 4chan in advance of their official releases. Security breaches of this kind are considered par for the course. When the same thing happened with Stable Diffusion’s latest version, SDXL, which generates better quality, higher resolution images more easily, a Stability AI staff member wrote on a subreddit thread, “we didn’t *want* it to leak but we knew it was obviously coming.” Right. Obviously.

As things stand, “open source” can mean whatever AI companies want it to mean. As a self-affixed label, it brings both praise and impunity. Governments recognise the dangers of freely available AI, but that doesn’t mean they love closed source, because they too are shut out. UK deputy PM Oliver Dowden has declared himself a fan of open source, whilst a large focus of the AI Safety Summit was negotiating governmental advance access to models, with Rishi Sunak asserting that we shouldn’t rely on AI firms to “mark their own homework”. Morally speaking, who can we rely on? It is bordering on dystopian that Stability AI was welcomed to a recent Home Office event focused on the problem of AI-generated CSAM, and that it is a co-signatory of a new policy paper pledging to tackle the proliferation of such content. There appears to be minimal scrutiny, in the press and in government, of open source decisions themselves, with the ongoing debate focusing on open or closed in a general sense. We are neglecting crucial sub-questions like when to release, where to release ( … 4chan?) and by what definition of “open”.

On 26 July 2023, I joined the official online launch of SDXL. Anyone with a Discord account could join, and there were approximately 1,500 attendees. The Stability AI team led the event, and Mostaque gave a short speech. The chat was filled with a flurry of messages, many directed at Mostaque (his user handle is simply @emad), and I threw my own into the mix:

@Emad I’ve read various investigations into child sexual abuse material being generated with SD, I know before last year’s release your position with open source was to take a utilitarian approach and see illegal content as the fault of the creator not the model, is that still your position?

Mostaque didn’t respond to my question, but when one user commented, “The real test is seeing how accurate Emma Watson looks in 1.0”, he wrote back:

honestly its still pretty bad at Emma Watson and we have no idea why

For those that don’t know in the original beta a year ago (!) like 5 per cent of the prompts were Emma Watson, I think maybe it got overrrepresented [sic] in the tune dataset or something

Another user replied to Mostaque, saying, “really? i could use her pretty well in 0.9”. Mostaque’s response was, “maybe my watson sense is heightened after too much modding last year”. (“Modding” in this context means monitoring and deleting banned content on online servers.) A third attendee then responded to Mostaque’s initial remark, saying, “Oh no Emma is like a the thing to test!”. To this, Mostaque replied: “well see if you can figure it out”.

If leaks are inevitable, perhaps the debate over open versus closed source AI, even at an appropriately nuanced level, will become moot. Maybe the more pressing question is not what, where or when, but simply who is making AI ethics decisions — and do they really care?
