The absolute basics of what you need to know about the mathematics of how the novel Coronavirus will spread and how we might stop it.

The novel Coronavirus is, well, novel, so, to start with at least, no one is immune to it: everyone is susceptible.

Some of us, perhaps many of us, will get sick; then we are infected. Some of us will recover and, for a while at least, develop immunity; some of us will die. Either way, we won’t get sick again: we have been removed from the pool of the susceptible. …



In my blog post “Disease Dynamics Distilled” I explain — without equations — the Susceptible-Infectious-Removed (SIR) compartmental model. This is the simplest model there is that still captures the basic dynamics of the spread of disease in a population without immunity. The model, and I hope the article, explain among other things exponential growth, peak infection and herd immunity.
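For readers who prefer code to equations, here is a minimal sketch of the SIR model described above, integrated with scipy. All parameter values (population size, contact rate beta, recovery rate gamma) are illustrative assumptions, not figures from the article.

```python
# A minimal SIR sketch: S -> I -> R, with assumed illustrative parameters.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR equations: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I."""
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

N = 1_000_000           # population size (assumed)
beta, gamma = 0.3, 0.1  # contact and recovery rates (assumed); R0 = beta/gamma = 3
y0 = (N - 1, 1, 0)      # a single initial infection
t = np.linspace(0, 365, 366)
S, I, R = odeint(sir, y0, t, args=(beta, gamma)).T

print(f"Peak infections: {I.max():,.0f} on day {I.argmax()}")
print(f"Share ever infected: {R[-1] / N:.0%}")
```

With these assumed parameters the model exhibits the dynamics the blog post discusses: near-exponential growth early on, a peak when susceptibles run low, and a final epidemic size well short of the whole population (herd immunity).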

But where that blog post was all about how a very simple model can provide enormous insight, this one is about how enormous models are needed to provide even very simple insights.

Using the implicit assumptions and shortcomings of the SIR model as a starting point, I will describe some developments of SIR that form the basis of the models people are using to try to predict the spread of COVID-19 in the world today, with links to examples and foundational articles. …



OK, “Flatten the curve of armchair epidemiology” was very funny; and “Ten Considerations Before You Create Another Chart About COVID-19” makes some important points about avoiding both panic and indifference; as does Slate’s “Stop the epidemic of armchair epidemiology”.

But we armchair epidemiologists, we unsophisticated sirens of social media, Excel crusaders and lackadaisical luminaries of LinkedIn; we too have a role to play.

He who can, does. He who cannot, teaches.

(The section headings in this article are all taken from George Bernard Shaw’s Maxims for Revolutionists.)

Professional epidemiologists are busy building models to understand how quickly COVID-19 spreads, what measures will work, how long we will need them; how many hospital beds we will need and how many respirators. …



Cast your mind back to early spring 2020. The virus that causes COVID-19 has been identified and sequenced, the Wuhan outbreak is under control, but the virus has spread abroad. We have a handful of key parameters on hospitalization rates, mortality and contagiousness, inferred from Wuhan data, but these (both the data and the parameters) are rather uncertain.

The situation in Northern Italy is worsening daily and stories are emerging of hospitals refusing patients care because they are old or overweight and resources are prioritized for patients with better chances of survival. The first cases are appearing locally.

In this short series of articles, I will outline how we can use mathematical modelling to frame key policy decisions. The focus here is on the deployment of models to make decisions and not so much on the models themselves. …



After publishing my probabilistic pitfalls article, my fourteen-year-old son (who apparently follows me on LinkedIn; who knew?) came and told me I was the grumpy old man of risk and uncertainty. I had wittered on and on about what everyone was doing wrong, but made no effort to say what people should be doing instead. Fair comment, I thought. Here’s a go at putting that right.

1) Lookback

  • Always use uncertainty ranges in lookback analysis — also for probability plots and percentile plots. Not only will they help show whether deviations are the result of poor practice or just statistical fluctuations; they can also be used to surface more subtle biases such as over-confidence (as opposed to optimism) and vagueness or thresholding. …
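One way to construct such an uncertainty range for a percentile (calibration) plot is via order statistics: if forecasts are well calibrated, the realized percentiles are uniform on [0, 1], so the k-th of n sorted values follows a Beta(k, n − k + 1) distribution, whose quantiles give a band. The sketch below is a hypothetical illustration with simulated data, not the article's own method.

```python
# Sketch: a 90% pointwise uncertainty band for a percentile plot,
# built from the order statistics of a uniform sample.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(42)
n = 50
realized = np.sort(rng.uniform(size=n))  # stand-in for realized forecast percentiles

k = np.arange(1, n + 1)
lo = beta.ppf(0.05, k, n - k + 1)  # lower edge of the 90% band, per rank
hi = beta.ppf(0.95, k, n - k + 1)  # upper edge

inside = np.mean((realized >= lo) & (realized <= hi))
print(f"Fraction of points inside the 90% band: {inside:.0%}")
```

Points falling outside the band flag forecasts whose deviations are unlikely to be mere statistical noise; systematic excursions near the tails are a signature of over-confidence.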


A little learning is a dangerous thing; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again.

Harry Collins is an expert on experts. At the core of his approach to expertise is what he calls a “periodic table of expertise”, which not only describes a hierarchy of levels of specialist knowledge, but also the ability of individuals possessing these levels of understanding to distinguish levels of expertise in others and how they do so.

The most basic form of knowledge in Collins’ hierarchy is beer-mat knowledge — an ability to recite facts without being able to do anything with them, except perhaps in a game of Trivial Pursuit. With popular understanding — such as you might expect to glean from a show on Discovery Channel — an elementary conception of meaning might allow for rudimentary inferences. …



Summary list

  1. Lookback: No lookback or, what is sometimes worse, lookbacks that are so naive you’d have been better off without them. See this article.
  2. P99 and P1. These have no place in any volumetric methodology whatsoever. Ever. See this article.
  3. Weakest link and unwarranted independence: The practice of taking the smallest probability in a chain of dependent probabilities. An unnecessary oversimplification, as accounting for dependencies between elements is simple and enormously fruitful.
  4. The use of 50% probability in the absence of any information. The problem with this is not that it’s wrong, but with the idea that there is no prior information. What? …



Discussions around extreme percentiles abound. The oil and gas exploration community expends enormous effort exhausting itself over appropriate values for the 99th and 1st percentiles of the distributions that describe the uncertainty in the size of oil fields.

I argue here that these discussions are at best distracting and at worst directly value-eroding.

The core of my criticism

My primary protestations are twofold:

  • These perilous percentiles bear very little relation to the important parameters of their distributions.
  • Extreme percentiles are extremely difficult to assess with any kind of accuracy.

Relevance

Uncertain variables are often approximated or represented by normal or log-normal distributions. Variables that can be decomposed into additive (or multiplicative) factors will always look roughly normal (or log-normal) close to the mean. …
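A quick numerical illustration of the second protestation: for a log-normal, a modest change in the spread parameter barely moves the median but swings the 99th percentile wildly. The parameter values below are illustrative assumptions only.

```python
# How sensitive is P99 of a log-normal to its spread parameter sigma,
# compared with the median and the mean?
import numpy as np
from scipy.stats import lognorm

median = 100.0  # arbitrary units; held fixed across cases
results = {}
for sigma in (0.8, 1.0, 1.2):
    dist = lognorm(s=sigma, scale=median)
    p50, p99 = dist.ppf([0.5, 0.99])
    results[sigma] = (p50, p99, dist.mean())
    print(f"sigma={sigma}: P50={p50:7.1f}  P99={p99:8.1f}  mean={dist.mean():7.1f}")
```

Raising sigma from 0.8 to 1.2 leaves the median untouched and moves the mean by roughly half, while P99 more than doubles — which is exactly why agonizing over the extreme percentiles is such poor leverage on the parameters that matter.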


Quantitative risk illuminates the levers that link decisions to value.

This article illustrates the construction of a simple stochastic model with a straightforward example that captures the key categories of risk.


My daily cycle is circumscribed by uncertainty. What sort of form am I in on the day? Which way is the wind blowing and how strong? Will I catch the lights? Will I bump into someone I know? Will the bike break down?

The parallels to project management are all too apparent. Project teams and their governance can be in better or worse form. …
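As a flavour of the kind of simple stochastic model the article builds, here is a hypothetical Monte Carlo sketch of the cycle commute itself: a base time scaled by continuous uncertainties (form, wind) plus discrete risks (red lights, a breakdown). Every distribution and parameter is an illustrative assumption.

```python
# Monte Carlo sketch of a daily cycle commute under uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

base = 30.0                             # minutes, nominal ride (assumed)
form = rng.normal(1.0, 0.05, n)         # daily form: ~5% swing (assumed)
wind = rng.normal(1.0, 0.10, n)         # head/tail wind factor (assumed)
lights = rng.binomial(6, 0.5, n) * 0.5  # 6 lights, 30 s per red (assumed)
breakdown = rng.random(n) < 0.01        # 1% chance of a 15-minute breakdown

time = base * form * wind + lights + breakdown * 15.0

p10, p50, p90 = np.percentile(time, [10, 50, 90])
print(f"P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f} minutes")
```

The same structure carries straight over to projects: multiplicative performance factors, additive delays, and rare discrete events, each its own category of risk.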


Photo by Todd Trapani on Unsplash

Everything should be made as simple as possible, but not simpler

This quote, usually attributed to Einstein, is often used as an appeal to make a subject as easy as possible to understand, though of course not so easy that it becomes meaningless.

It’s not hard to understand why this quote is so often attributed to Einstein, whose theories are (rightly) famous for their breath-taking economy and the astonishing simplicity of their founding principles. The quote is also exactly the kind of rhetorical ear-candy for which Einstein is known, short and informal, yet affecting a casual profundity with its little antithetical twist. …

About

Graeme Keith

Mathematical modelling for strategy & risk management
