A March lockdown was inevitable. Criticism of the decision since then has centred on weaknesses or inconsistencies in the models used to support it. These criticisms fall into two categories. The first is just silly: the models were wrong because the misery and death they predicted have not come to pass. The second deserves a little more attention: the models were poorly conditioned by the available data. Here I argue that the decision was sound, but that the presentation of the rationale was unfortunate, based as it was on unnecessarily complicated models.

Sloughing off the parachute of lockdown

The models on which the decision to go into lockdown was based predicted misery and death in the event nothing was done. Complaining that these models were inaccurate because waaaay fewer people have died than predicted, and because no-one who might be cured is being refused treatment, is plain daft. Something was done. Most of the planet went collectively into the biggest medical intervention in the history of mankind, with excruciating economic consequences.

Arguing that we should never have locked down because those predictions never came to pass is like arguing you never really needed that parachute because the landing was nowhere near as hard as everyone said it would be.

Mitigation’s misguided arithmetic

In my last article, I showed the first tentative steps on a path to modelling the spread of COVID, and argued that simple arithmetic demonstrates the futility of the so-called “mitigation” strategy: containing the spread of the virus just enough to reach high levels of immunity without overburdening the health service.

Adam Kucharski’s demonstration at the BMJ conference on COVID unknowns of the indistinguishability of COVID curves according to how they are suppressed

Mitigation was much discussed as a strategy, at least in Denmark and the UK, at the beginning of the crisis. This was the origin of the “flatten the curve” slogan. In a mitigation strategy, the contagion is reduced, but not reversed (see below and my lay guide to the dynamics of disease), so the curve is flattened, but not floored. At some point, though, health services in both countries started denying they’d ever suggested such a thing (maybe they finally got round to reading my early blog on the subject), and started claiming that their intention had been all along to suppress the contagion. Luckily for them, the curves look much the same (see figure).

But the simple arithmetic of my last article can bring us only so far. We can see that mitigation won’t work, not because we can’t contain the spread (though we don’t know that yet either), but because even if we can, it takes too long.

We also know from the simple arithmetic that we’re roundly stuffed if we have around 10 or more hospitalizations per 100,000 a day for any significant length of time. But to see whether that’s a real possibility, we need some slightly more sophisticated math.

Cometh the hour, cometh the math

Only, however, slightly more sophisticated math. At least to answer the broad questions we faced in March. All the math you need is in my layman’s guide to the simplest epidemiological model, the Susceptible, Infected, Removed or SIR model. Why it’s enough, I’ll try and explain here.

If you take SIR at face value (we won’t, but bear with me for now) then it reveals several deeply important features of epidemiological spread, but the most important is this:

  • If on average, infected people infect more than one other person before they’re no longer infectious (because they isolate, wind up in hospital or die) then the number of infected people will — at least to start — grow exponentially. (Eventually, you start running out of people to infect, but not before things get really ugly.)
  • If, on the other hand, they do not, then the contagion dies out.
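The threshold in the two bullets above can be sketched with a toy generation-by-generation calculation. The numbers here are purely illustrative, not fitted to COVID-19; the point is only the qualitative difference between an average of more than one onward infection per case and fewer than one.

```python
# Toy illustration of the threshold described above.
# r is the average number of people each infected person goes on to infect
# before they're no longer infectious (an illustrative value, not a COVID estimate).

def generations(r, initial=100, steps=10):
    """Return the number of new infections in each successive generation."""
    cases = [initial]
    for _ in range(steps):
        cases.append(round(cases[-1] * r))
    return cases

print(generations(1.5))  # above 1: each generation is bigger than the last
print(generations(0.8))  # below 1: the contagion fizzles out
```

Multiplying by the same factor each generation is exactly what produces exponential growth (or decay), which is why everything hinges on which side of 1 that factor sits.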

What does this mean and how seriously can we take it?

Exponential growth isn’t just very fast growth

The key to understanding what this means is to understand what exponential growth means mathematically, rather than just what it means in everyday colloquial usage, i.e. just “really fast”.

Exponential growth means not only that something is growing, but that the rate at which it’s growing is itself growing. The upshot is that although things start out relatively calm, they very quickly spiral out of control.

Put a grain of rice on the first square of a chessboard. Two on the second, three on the third and so on. The last square has 64 grains of rice; there are a couple of thousand grains of rice on the board, about half a deciliter. This is a constant rate of growth.

Now instead of putting three on the third square, put four; and then eight on the fourth, sixteen on the fifth and so on. Now the rate of growth is also growing.

It starts off pretty slow, but by the time you get to the 11th square, you have a stack about 5 cm high and, altogether, about the couple of thousand grains we had before. When you finish that row, that stack is around 2 m high. (The stack at the end of the row after that is over 500 m high.)
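The chessboard arithmetic is easy to check directly. A sketch of the two scenarios, counting grains rather than measuring stacks:

```python
# The chessboard thought experiment: constant growth vs doubling.
# Square n gets n grains in the linear scenario, 2**(n - 1) in the doubling one.

linear_total = sum(n for n in range(1, 65))      # 1 + 2 + 3 + ... + 64
doubling = [2 ** (n - 1) for n in range(1, 65)]  # 1, 2, 4, 8, ...

print(linear_total)        # 2080 grains -- "a couple of thousand"
print(sum(doubling[:11]))  # 2047 -- doubling reaches that total by square 11
print(doubling[15])        # 32768 grains at the end of the second row
print(doubling[63])        # over 9 quintillion grains on the last square alone
```

The linear board never gets beyond a couple of thousand grains in total; the doubling board matches that by square 11 and then leaves it unimaginably far behind.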

In March, we were seeing a doubling of cases every 3–4 days in the UK. So think of each square as 3–4 days. It’s not hard to see that if you’ve gone from an eighth to a quarter of your excess intensive-care capacity in 3–4 days, then it is only a question of a week or two before you have twice what you can cope with, and that man-size column of doom at the end of the second row is less than a month away.
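That “week or two” follows from a four-line projection. The starting load and doubling time below are illustrative, taken from the scenario just described, not from any particular dataset:

```python
# How quickly doubling overwhelms a fixed capacity.
# Illustrative numbers: start at an eighth of spare intensive-care capacity,
# with demand doubling every 3.5 days (roughly the March doubling time cited above).

doubling_time_days = 3.5
load = 1 / 8       # fraction of spare intensive-care capacity in use
days = 0
while load < 2.0:  # run until demand is twice what we can cope with
    load *= 2
    days += doubling_time_days

print(days)  # 14.0 -- four doublings, i.e. about two weeks
```

Four doublings take you from an eighth of capacity to double it; at 3–4 days per doubling, that is roughly two weeks.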

How seriously should we take it given the shortcomings of the model?

When the UK and Denmark made their decisions to lock down (the UK a couple of weeks later than Denmark), we could see what was happening in Italy and what had happened in China. We could also see the characteristics of exponential growth in the statistics: case numbers were increasing by a constant factor every week, not by a constant number. So the clues were there.

Had we only our very simple model to rest on, we would have to scrutinize that model more carefully. SIR is built on a lot of, frankly, rather dubious assumptions. We can either try to make a more sophisticated model, or we can look at the assumptions behind the model and try to figure out what it can and can’t tell us.

Models for the moment

Many governments opted for more sophisticated models. Why shouldn’t they? The advanced models were there, together with the expertise to use them. The problem is that more advanced models require more data and a deeper understanding of the causal relations between components than we realistically had — for COVID-19 — in March.

Models are built in a three-way dialogue between objectives, decisions and data. We discussed the alignment of decisions and objectives in the last article. Now we would like to address more refined decisions and objectives, but — for now at least — we are constrained by the granularity, the quality and the relevance of the available data.

In the event, from a modelling point of view, it didn’t matter: the conclusions the more sophisticated models came to were insensitive to the information we didn’t really have, and were the same across the very broad range of possible values the key parameters may have taken.

But rhetorically, we’d have been much better served sticking to the simple, robust model, and following David Spiegelhalter’s recipe for communicating decisions made under uncertainty in the evidence:

  • Be open about the uncertainty and the current resolution of the model
  • Discuss the alternatives relative to that uncertainty
  • Present what you are doing to refine the decision in the future.

Legitimate attacks on the provenance and relevance of data in, for example, the Imperial model (as well as illegitimate attacks on its author) sowed unnecessary doubt in the basis of the decision.

Making the most of the model of the moment

The trick is to focus on what the simple model can tell you — in this case that early exponential growth is real — rather than what it can’t (what it would take to get the reproduction number well down under 1).

The crux of the SIR model is the assumption that, in a short period of time, the number of people newly infected is proportional both to the number of people already infected and to the number of people left to be infected.

It says nothing about who is infected, where they live, who they hang out with or how old they are; it says nothing about whether infected people are incubating, symptomatic (in which case they’re probably not so much out infecting other people) or asymptomatic, or how much asymptomatic carriers can transmit the virus. Nor does it say anything about how long people hang out together, whether or not they’ve washed their hands, or the fact that sometimes we’re incredibly unlucky and are infected despite all precautions, and other times we’re correspondingly lucky. (There are other problems with it, too. See my article on the pains of epidemiology.)
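For all it leaves out, that one proportionality assumption is enough to simulate. A minimal discrete-time sketch of the standard textbook SIR model (not any government’s model; beta and gamma here are illustrative, not fitted to COVID-19):

```python
# A minimal discrete-time SIR sketch. In the textbook formulation:
#   dS/dt = -beta * S * I / N
#   dI/dt =  beta * S * I / N - gamma * I
#   dR/dt =  gamma * I
# beta is the rate of infectious contacts per person per day,
# gamma is the removal (recovery/isolation/death) rate, so R0 = beta / gamma.

def sir(population, infected, beta, gamma, days):
    """Step the SIR equations one day at a time; return infected count per day."""
    s, i, r = population - infected, float(infected), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        removals = gamma * i
        s -= new_infections
        i += new_infections - removals
        r += removals
        history.append(i)
    return history

# Illustrative parameters giving R0 = 0.4 / 0.2 = 2: growth is roughly
# exponential at first, then the epidemic peaks and wanes as S is depleted.
curve = sir(population=1_000_000, infected=100, beta=0.4, gamma=0.2, days=200)
print(max(curve) / 1_000_000)  # peak fraction of the population infected at once
```

Note how the early part of the curve is indistinguishable from pure exponential growth: while nearly everyone is still susceptible, S/N is close to 1 and the new-infections term is simply proportional to I.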

More sophisticated models attempt to grapple with these issues — especially age stratification and the inhomogeneities in the way people interact — but they require a great deal of additional, specific data to condition reliably. At the same time, whether by fairly simple inspection of the formulation of these more sophisticated models, by running them over a broad variety of parameters, or by simply looking at the predictive success of SIR in a wide variety of cases, it becomes abundantly clear that if the reproduction number is clear of 1 and more than a handful of people are infected then, structurally, SIR is a pretty good model.

What matters is that — very broadly — in the early phases of the epidemic, the rate at which people are infected is proportional to the number of people infected. If enough people are getting infected, then it doesn’t matter so much who they are, who they hang out with, or where exactly in the lifecycle of the infection they are; the stochastic nature of interactions is lost in the large aggregation. All these effects impact the constant of proportionality in highly non-trivial (and time-dependent) ways, but the gross behaviour — the fundamental proportionality between the number infected and the rate of new infection — is stable across all modelling assumptions. So exponential growth is a thing.

Decisions under uncertainty with necessarily inadequate models and inevitably insufficient data. Aka Politics, for short.

We know we can’t cope with exponential growth, and that early exponential growth is an absolutely stable feature of every possible model in which the reproduction number is comfortably over 1 and the contagion has taken hold. We know that the long, flat curve leading to herd immunity is just too long, even if we can exert that level of control at all.

Suppression is the only viable strategy.

What we don’t know, what simple models like SIR are never going to be able to tell us and what even the most sophisticated models will only be able to tell us in the light of substantially more data than we had in March, is what it will take to suppress the contagion.

We knew that the actions taken in Wuhan were sufficient and that the actions taken early on in northern Italy were not. So governments had to place their responses somewhere on that spectrum, and weigh the economic consequences of that placement against the spectre of failing to suppress.

In hindsight, fortune favoured governments who realized how bad it was getting (a quick look at Northern Italy was enough for that) and that failure to control the virus would lead to even worse economic consequences than what it would take to bring it under control (a few weeks of fairly draconian lockdown). The suppressed contagion proved relatively easy to keep down in the summer months and these countries were able to return to relatively vigorous domestic economic activity — at least until the onset of Autumn.

We now have a much better understanding of, and an overwhelming abundance of data on, the dynamics of the spread of COVID-19. Now is when we need to leverage the more sophisticated models to understand which interventions are most effective for their economic cost, and to find a strategy that negotiates that tension. In the absence of a vaccine that actually prevents the spread of the virus (as opposed to the ones we have, which ‘only’ limit its ill effects), and with even those vaccines a few months away, we still need that insight.

We didn’t need complicated models to take us into lockdown. But we’re sure going to need them to get us out again.

Mathematical modelling for business and the business of mathematical modelling. See stochastic.dk/articles for a categorized list of all my articles on medium.