How a causal framework can help you avoid the Monty Hall trap (and going home with a goat)


The Monty Hall problem became famous when Marilyn vos Savant published it in response to a reader’s question in her column in Parade magazine in 1990. It is as famous for the virulence with which Marilyn was mansplained as it is for the subtlety of the problem itself. Even the legendary Paul Erdős, one of the great names of 20th-century mathematics, was only finally convinced by a computer simulation.

This article will show how a causal framework can help elucidate the structure of the problem, which I will argue helps avoid the logical pitfalls that make the correct answer so intuitively uncomfortable. …
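(An aside: the fastest way to convince a sceptic remains the way Erdős was reportedly convinced. Below is a minimal Monte Carlo sketch in Python of the stick-versus-switch comparison; the trial count is arbitrary and this is an illustrative sketch, not code from the article itself.)

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the game; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    first_pick = random.choice(doors)
    # Monty opens a door that hides a goat and is not the contestant's pick.
    monty_opens = random.choice([d for d in doors if d != first_pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        final_pick = next(d for d in doors if d not in (first_pick, monty_opens))
    else:
        final_pick = first_pick
    return final_pick == car

n = 100_000
stick = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
change = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(f"stick with first pick: {stick:.3f}  (about 1/3)")
print(f"switch doors:          {change:.3f}  (about 2/3)")
```

Over enough trials, switching wins roughly two thirds of the time, which is precisely the fact the causal framing is meant to make intuitive.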


Photo by Florian Schmetz on Unsplash

There’s more to a model than the fidelity of its forecast

Whenever a model appears to you as the only possible one, take this as a sign that you have neither understood the model nor the problem which it was intended to solve
Karl Popper, Philosopher (1902–1994)

If models are cheap, don’t choose. The more models the merrier. But we mustn’t waste our time on rubbish, and there’s more to a good model than the accuracy of its predictions.

What is a good model?

At catastrophic economic cost, governments across the world have imposed various levels of lockdown on the basis of results of a handful of mathematical models. …


Models are nothing. Modelling is everything.
Sam Savage

This is a toy model of the spread of infection in a playground. It’s designed to explain features of serious models of the global epidemic and (next time) to show phenomena that these features predict that are not captured in simple models.

This is not a COVID model; I’m not making predictions. The simple setting makes it easier to explain what real world models are trying to capture and to demonstrate how the phenomena we see in the evolution of the pandemic numbers can be ascribed to relatively simple infection dynamics.

It is the model I used to produce the figures used in my “Pains of epidemiology” article, to which this blog acts as a slightly nerdy companion piece. …
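The article carries the details; as a flavour of the kind of toy model meant, here is a minimal discrete-time sketch of infection spreading among children in a playground. Every number in it (group size, contacts per day, transmission probability, infectious period) is an illustrative assumption, not a parameter of the model behind the figures.

```python
import random

# Toy discrete-time model of infection in a playground.
# All parameters are illustrative assumptions.
N_CHILDREN = 30          # children in the playground
CONTACTS_PER_DAY = 4     # random contacts each infectious child makes per day
P_TRANSMIT = 0.1         # chance a contact passes on the infection
DAYS_INFECTIOUS = 5      # days a child stays infectious before recovering

infectious_days = [0] * N_CHILDREN    # days of infectiousness remaining
recovered = [False] * N_CHILDREN
infectious_days[0] = DAYS_INFECTIOUS  # a single initial case

for day in range(1, 31):
    newly_infected = set()
    for child in range(N_CHILDREN):
        if infectious_days[child] > 0:
            for _ in range(CONTACTS_PER_DAY):
                other = random.randrange(N_CHILDREN)
                susceptible = (other != child and not recovered[other]
                               and infectious_days[other] == 0)
                if susceptible and random.random() < P_TRANSMIT:
                    newly_infected.add(other)
    # Progress the disease: count down infectiousness, mark recoveries.
    for child in range(N_CHILDREN):
        if infectious_days[child] > 0:
            infectious_days[child] -= 1
            if infectious_days[child] == 0:
                recovered[child] = True
    for child in newly_infected:
        infectious_days[child] = DAYS_INFECTIOUS
    print(f"day {day:2d}: infectious={sum(d > 0 for d in infectious_days):2d}, "
          f"recovered={sum(recovered):2d}")
```

Even this crude setup reproduces the familiar rise, peak and decline as the pool of susceptible children is exhausted.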


A March lockdown was inevitable. Criticism of the decision since then has centred on weaknesses or inconsistencies in the models used to support it. These criticisms fall into two categories. The first is just silly: the models were wrong because the misery and death they predicted have not come to pass. The second deserves a little more attention: the models were poorly conditioned by the available data. Here I argue that the decision was sound, but that the presentation of the rationale was unfortunate, based as it was on unnecessarily complicated models.

Sloughing off the parachute of lockdown


The models on which the decision to go into lockdown was based were predicting misery and death in the event nothing was done. Complaining that these models were inaccurate because waaaay fewer people have died than predicted and because no-one who might be cured is being refused treatment is plain daft. Something was done. Most of the planet went collectively into the biggest medical intervention in the history of mankind, with excruciating economic consequences. …



Cast your mind back to early spring 2020. The COVID-19 virus has been identified and sequenced, the Wuhan outbreak is under control, but the virus has spread abroad. We have a handful of key parameters on hospitalization rates, mortality and contagiousness, inferred from Wuhan data, but these (both the data and the parameters) are rather uncertain.

The situation in Northern Italy is worsening daily, and stories are emerging of hospitals refusing care to patients because they are old or overweight, with resources prioritized for patients with better chances of survival. The first cases are appearing locally.

In this short series of articles, I will outline how we can use mathematical modelling to frame key policy decisions. The focus here is on the deployment of models to make decisions and not so much on the models themselves. …



After publishing my probabilistic pitfalls article, my fourteen-year-old son (who apparently follows me on LinkedIn; who knew?) came and told me I was the grumpy old man of risk and uncertainty. I had wittered on and on about what everyone was doing wrong, but made no effort to try and say what people should be doing instead. Fair comment, I thought. Here’s a go at putting that right.

1) Lookback



A little learning is a dangerous thing; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again.
Alexander Pope, Poet (1688–1744)

Harry Collins is an expert on experts. At the core of his approach to expertise is what he calls a “periodic table of expertise”, which not only describes a hierarchy of levels of specialist knowledge, but also the ability of individuals possessing these levels of understanding to distinguish levels of expertise in others and how they do so.

The most basic form of knowledge in Collins’ hierarchy is beer-mat knowledge — an ability to recite facts without being able to do anything with them, except perhaps in a game of Trivial Pursuit. With popular understanding — such as you might expect to glean from a show on Discovery Channel — an elementary conception of meaning might allow for rudimentary inferences. …



Summary list

  1. Lookback: No lookback or, what is sometimes worse, lookbacks that are so naive you’d have been better off without them. See this article.
  2. P99 and P1. These have no place in any volumetric methodology whatsoever. Ever. See this article.
  3. Weakest link and unwarranted independence: The practice of taking the smallest probability in a chain of dependent probabilities as the overall chance of success. An unnecessary oversimplification, as an account of dependencies between elements is simple and enormously fruitful (see the sketch after this list).
  4. The use of 50% probability in the absence of any information. The problem with this is not that it’s wrong, but the idea that there is no prior information. What? …
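To make item 3 concrete, here is a minimal sketch comparing the weakest-link shortcut with the independent product and with a one-line account of dependence. The element probabilities and the conditional probability are purely illustrative.

```python
# Chances of success for three elements of a prospect (illustrative numbers).
p_source, p_reservoir, p_seal = 0.8, 0.6, 0.5

# "Weakest link": take the smallest element probability as the overall chance.
p_weakest_link = min(p_source, p_reservoir, p_seal)        # 0.50

# Full independence: multiply the marginals.
p_independent = p_source * p_reservoir * p_seal             # 0.24

# Simple dependence: suppose the seal is more likely to work (0.7 rather than
# 0.5) once source and reservoir are known to work, for example because they
# share a common geological control. (Assumed conditional probability.)
p_seal_given_rest = 0.7
p_dependent = p_source * p_reservoir * p_seal_given_rest    # 0.336

print(f"weakest link:           {p_weakest_link:.2f}")
print(f"independent elements:   {p_independent:.2f}")
print(f"with simple dependence: {p_dependent:.2f}")
```

The weakest link overstates the overall chance of success, assuming independence understates it when the elements are positively dependent, and a single conditional probability is enough to land in between.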


Discussions around extreme percentiles abound. The oil and gas exploration community expends enormous effort exhausting itself over appropriate values for the 99th and 1st percentiles of the distributions that describe the uncertainty in the size of oil fields.

I argue here that these discussions are at best distracting and at worst directly value-eroding.

The core of my criticism

My primary protestations are twofold:

  • These perilous percentiles bear very little relation to the important parameters of their distributions.
  • Extreme percentiles are extremely difficult to assess with any kind of accuracy.

Relevance

Uncertain variables are often approximated or represented by normal or log-normal distributions. Variables that can be decomposed into additive (or multiplicative) factors will always look roughly normal (or log normal) close to the mean. …
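One way to see both complaints at once is to build a volume as a product of a few modestly uncertain factors and compare it with the log-normal distribution that matches it in the middle. The factor distributions below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A volume built as a product of a few modestly uncertain factors
# (area, thickness, porosity, recovery); all distributions illustrative.
area      = rng.uniform(0.5, 1.5, n)
thickness = rng.triangular(5, 10, 20, n)
porosity  = rng.uniform(0.1, 0.3, n)
recovery  = rng.uniform(0.2, 0.5, n)
volume = area * thickness * porosity * recovery

# Log-normal with the same mean and standard deviation of log(volume).
mu, sigma = np.log(volume).mean(), np.log(volume).std()
lognormal_fit = rng.lognormal(mu, sigma, n)

for p in (1, 10, 50, 90, 99):
    v = np.percentile(volume, p)
    l = np.percentile(lognormal_fit, p)
    print(f"P{p:>2}: simulated={v:6.3f}  log-normal fit={l:6.3f}  "
          f"difference={100 * (l / v - 1):+5.1f}%")
```

With assumptions like these, the percentiles in the body of the distribution agree closely while P1 and P99 diverge more noticeably: the part of the distribution the discussions rage over is precisely the part the log-normal approximation pins down least well.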


Quantitative risk illuminates the levers that link decisions to value.

This article illustrates the construction of a simple stochastic model with a straightforward example that captures the key categories of risk.


My daily cycle is circumscribed by uncertainty. What sort of form am I in on the day? Which way is the wind blowing and how strong? Will I catch the lights? Will I bump into someone I know? Will the bike break down?

The parallels to project management are all too apparent. Project teams and their governance can be in better or worse form. …
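A minimal sketch of that kind of model, with every number invented purely for illustration, might look like this:

```python
import random

def commute_minutes() -> float:
    """One simulated ride; the structure mirrors the categories in the text,
    but every number is an illustrative assumption."""
    base = 30.0                                                # typical ride, minutes
    form = random.gauss(0, 2)                                  # day-to-day form
    wind = random.gauss(0, 3)                                  # head or tail wind
    red_lights = sum(random.random() < 0.5 for _ in range(8))  # lights caught
    stops = 0.5 * red_lights                                   # half a minute each
    breakdown = 20.0 if random.random() < 0.02 else 0.0        # rare, disruptive
    return base + form + wind + stops + breakdown

rides = sorted(commute_minutes() for _ in range(100_000))
mean = sum(rides) / len(rides)
p90 = rides[int(0.9 * len(rides))]
print(f"mean: {mean:.1f} min,  P90: {p90:.1f} min,  worst simulated: {rides[-1]:.1f} min")
```

The same structure carries straight over to a project: a base plan, ambient conditions that nudge it either way, a stream of small recurring delays, and the rare disruptive event that dominates the tail.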

About

Graeme Keith

Mathematical modelling for strategy & risk management
