Discarded used face masks in Edmonton, Alberta, Canada, on Thursday 23 September 2021. (Photo by Artur Widak/NurPhoto via Getty Images)

The dangerous charms of models

Mask mandates failed

Perhaps it’s a signal of my encroaching old age, but I’ve grown weary of models.

At the beginning, they’re seductive. They lure you in with impossibly perfect curves, every detail in the right place. They make their confident promises, plans for the future that seem too simple and too good to be true. 

That is what has happened to so many of the weak-willed over the last two years. In an attempt to get a handle on the COVID pandemic, too many politicians and scientists were seduced by models promising to mitigate the infection curves with a variety of NPIs (non-pharmaceutical interventions).

There are a dozen or so NPIs. These include social distancing, hand washing, quarantine, travel restrictions, symptom screening, contact tracing and the most common and notorious of all: the mask mandate. Each of these NPIs has been used to generate a model that promises to reduce the severity, duration or impact of COVID. The benefit of relying on a model to estimate the effects of an NPI is that the model is never wrong. By its very nature, a model can’t be wrong. Models are built on assumptions about how much impact a given mitigation might have. Those assumptions are then confidently applied to the outcome, with the declaration that this mitigation saved X lives or reduced the impact of this infection wave by Y amount. How do we know? Because, according to the model, it would have been worse without the mitigation.
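To see how tight that loop is, consider a toy calculation (a deliberately simplified sketch; the 30 per cent efficacy figure and the death count are invented for illustration, not drawn from any real model or dataset): the assumed effect of the mandate goes in at the top and comes back out at the bottom dressed up as a finding.

```python
# Toy illustration of the circular "lives saved" calculation.
# All numbers here are hypothetical assumptions, not data from any real study.

ASSUMED_MASK_EFFICACY = 0.30  # assumption: masks prevent 30% of deaths

def deaths_without_mitigation(observed_deaths: int) -> float:
    """Back out the counterfactual that the model itself implies.

    If masks are assumed to prevent 30% of deaths, the observed toll is
    treated as 70% of what "would have happened" without them.
    """
    return observed_deaths / (1 - ASSUMED_MASK_EFFICACY)

observed = 10_000                                # observed deaths during the wave
counterfactual = deaths_without_mitigation(observed)
lives_saved = counterfactual - observed

# The headline number is simply the input assumption restated as a conclusion.
print(f"Model says the mandate saved {lives_saved:,.0f} lives")
```

Whatever efficacy the modeller assumes is precisely what the model “discovers”; no observation from the real world can contradict it.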

There is a tightness to the loop in this circular reasoning that we almost have to admire. Many public health advocates have, throughout the pandemic, insisted that we look at the resulting infection numbers and judge them against what would theoretically have happened if we hadn’t masked up or quarantined or closed schools.

This is a plausible strategy to encourage mitigations, but it isn’t what we would consider the gold standard of science, because it is ultimately dealing in hypotheticals. It is positing a “could have been” scenario suggested by the model against the “as it was” scenario we observed in real life.

To get to the truth, we have to abandon the seductive and beautiful simplicity of the model and do the hard, ugly, gruff work of actually getting into the dirt and grime and measuring things.

The ideal form of scientific data gathering is to run a randomised controlled trial. This is the strategy we learn about in school, where we recruit trial participants and split them into two groups through some form of randomised assignment. One group then participates in some intervention (take a pill, exercise twice a week, wear a mask) whilst the other group does not. Then we keep track of these two groups to see what the differences are.
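For illustration only, here is a bare-bones sketch of that design (the participant list, outcomes and numbers are placeholders invented for the example, not a recipe from any actual trial): random assignment into two arms, followed by a simple comparison of infection rates.

```python
import random

# Schematic sketch of a randomised controlled trial: assign participants to
# two arms at random, then compare outcome rates between the arms.

def assign_arms(participants, seed=42):
    """Randomly split participants into an intervention arm and a control arm."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def infection_rate(arm, outcomes):
    """Share of an arm that went on to test positive."""
    return sum(outcomes[p] for p in arm) / len(arm)

def estimated_effect(intervention_arm, control_arm, outcomes):
    """Difference in infection rates: control minus intervention."""
    return infection_rate(control_arm, outcomes) - infection_rate(intervention_arm, outcomes)

# Placeholder data purely to make the sketch runnable.
participants = list(range(200))
masked_arm, control_arm = assign_arms(participants)
outcomes = {p: random.Random(p).random() < 0.10 for p in participants}  # fabricated outcomes

print(f"Estimated effect of the intervention: {estimated_effect(masked_arm, control_arm, outcomes):+.3f}")
```

The randomised split is what does the work: on average the two arms differ only in the intervention itself, so any difference in outcomes can be credited to the intervention rather than to who happened to volunteer for it.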

Most people assume that, when they see a study endorsing the benefits of an intervention, this is what has been done. But this kind of study is incredibly difficult to conduct. Let’s take masking as our example. You can’t force people to wear or not wear a mask, you can’t follow them around making sure they are abiding by the study parameters, and you certainly can’t assign them to go unmasked if they live in a state or municipality that has legal requirements around masking.


In contrast to our scientific ideal, the studies that recommend masks have been almost comedic in their participant selection. There is the study from Duke University on school masking that didn’t include any schools without mandatory masking. There is the study from the California Department of Health that relied on self-reporting conducted through a telephone survey. There is the Massachusetts study in which unmasked subjects were required to undergo testing even when they had no symptoms and were then compared with masked subjects who were exempt from such testing. In normal, dispassionate times, these would quickly be recognised as selection problems so severe as to render the results void.

Despite such fatal flaws, studies like these often achieve a notorious popularity because of the need for after-the-fact rationalisation. The proper chain of events for a public health mitigation like this would be to study the mitigation in a randomised controlled setting, establish efficacy on a small scale, model the results at a larger scale, present them with appropriately narrow margins of error, win the trust of the public and finally implement the policy. Instead, we have been audience to a funhouse version of this, in which we implemented policies on faith, modelled the counterfactual so we would have some graphs to show the politicians, became confused when the results didn’t match the model and issued poorly conducted studies to excuse the policy after losing the trust of the public.

Throughout this, there were some comprehensive studies, conducted more in a spirit of discovery, that found far less reliable effects from these mitigations. A study conducted by Emily Oster’s team of data hounds examined COVID rates among more than a million students and found no correlation between infections in schools and mask mandates applied at either a state or district level. The European CDC has also conducted a wide-ranging review of the effectiveness of masks in community settings and determined that the effect was “small to moderate” and that the certainty of even this effect was “very low”.

This is where the models really fell flat on their faces. The early models left the impression that NPIs would provide a level of protection that would turn our heads. They winked at us knowingly and strongly suggested that we should be seeing clear signs to reinforce the promises they had made. We should have seen starkly visible differences between the regions, countries and municipalities that applied the NPIs and the ones that did not.

If we set our expectations by the light of these models, we are left befuddled, looking at the results of the pandemic and seeing only marginal differences between regions that pushed heavy-handed mitigations and those that endorsed a light touch. As seasonal COVID infections spread repeatedly across the world, we were led to expect that we would see the sheep and the goats separated by their attachment to these rituals of purity. But reality is muddy and indistinct. It takes our tidy expectations and plays us for fools.

The models deliver to us an unblemished world in which every action we take is important, every mitigation is effective, every restriction valuable in providing at least some level of relief and protection. This is a promise of control, which is particularly seductive to people in power, whose stock in trade is a vision of human authority over an unruly reality. In the end, this promise is nothing more than a tease. We are better off turning away from the models and abandoning their suggestions of a hundred and one marginal improvements for a better tomorrow.
