Emerging from the muddle of matrices

Graeme Keith
7 min read · Nov 3, 2020

You are exasperated by the ambiguity of subjective semi-quantitative scoring; infuriated by ranges that are too wide to discriminate, yet too narrow to capture the panorama of possible outcomes.

You have read my last article.

You have now adopted a single frequency as your measure of whether and how often a risk event occurs, and chosen to represent impact with the economic penalty averaged over that full panorama of possible outcomes.

You know these assessments are uncertain. However, they are no longer hidden behind the thick stone columns of a monolithic matrix and they can be audited. You can examine how wrong they might be and you can analyse how that uncertainty affects decisions.

Collateral benefit

Even before you get to prioritization, you have discovered that this shift of emphasis has fundamentally changed the nature of your discussions around risk. Now there is a correspondence between the parameters you elicit and quantities that can be derived from past data and assessed against future performance. Slugging it out over subjective assessments has given way to earnest examination of what you can easily extract from evidence.

For rare events, where the data are too few to provide a statistical basis, you are analyzing the confluences of causes that lead to events, and unpacking their consequences, assessing each against what the contributing professionals have seen in the company and in the industry at large.

Representation and prioritization

For the time being, we are concerned with risks that are actuated by the occurrence of certain events. We will defer inevitable uncertainties, such as commodity prices, currency rates and so on, to the next article.

The core of your company’s business plan is a financial model that makes projections honouring the targets the plan sets out. If the kind of risk event we’re analyzing here occurs, it will hit this model through direct reductions in income or direct costs; either way, it hits the bottom line by an absolute amount, as opposed to reducing it by some factor (such multiplicative risks can also be modelled, but we’ll stick with these additive, bottom-line event risks for now).

To understand the impact of such an event on the bottom line, we need to know how often it might occur in a given time period and what the range of impact could be each time it occurs. In fact, the frequency we are now eliciting is exactly the number of times a year, on average, we expect this event to occur. The impact parameter we plot is the economic impact we suffer averaged over the panorama of possible outcomes of the event occurring. We plot these in the risk plane I introduced last week.
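As a rough illustration of such a plot (the risk names and numbers here are made up for the sketch, and matplotlib is just one way to draw it):

```python
import matplotlib.pyplot as plt

# (frequency per year, mean impact per event in sickles) -- illustrative only
risks = {
    "Supplier failure": (0.5, 4_000),
    "Data breach": (0.05, 80_000),
    "Minor outage": (3.0, 600),
}

fig, ax = plt.subplots()
for name, (freq, impact) in risks.items():
    ax.scatter(impact, freq)
    ax.annotate(name, (impact, freq))

# The risk plane uses logarithmic scales on both axes
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Mean impact per event (sickles)")
ax.set_ylabel("Frequency (events per year)")
plt.show()
```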

Intuitively we expect the product of these two numbers to be the average impact we can expect from events of this kind on a yearly basis. And indeed, mathematically (subject to a couple of reasonable conditions) this is true.
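Sketching the identity behind this (notation mine; the key condition is that the number of events in a year is independent of the size of each individual impact, and both have finite means):

$$\mathbb{E}\!\left[\sum_{i=1}^{N} X_i\right] = \mathbb{E}[N]\,\mathbb{E}[X] = f \times \bar{x},$$

where N is the number of events in a year (with expected value f, the frequency) and x̄ is the average impact per event.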

Now, our vertical axis is frequency, but because the scale is logarithmic, the distance from the horizontal axis is actually the logarithm of the frequency. Similarly, the distance along the impact axis is the logarithm of the expected economic impact. If we add these two logarithms together, we get the logarithm of their product. So the straight lines on the figure here are lines along which the total expected loss per annum is constant.
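In symbols (again, notation mine), with f the frequency and x̄ the average impact per event, a contour of constant expected annual loss c satisfies

$$\log f + \log \bar{x} = \log (f\,\bar{x}) = \log c,$$

which is a straight line of slope −1 in the log-log risk plane.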

This average impact per annum is not at all a bad place to start for prioritizing risks and for comparing the benefit per unit cost of interventions across risks, at least in a relative sense. It has its most tangible interpretation for high-frequency events, where it is reasonable to expect actual annual impacts to tend, on average, towards the predicted value.

The virtue of variance

To get a reasonable hold on the average impact when an event occurs, we need to work with the full range of outcomes: understand how wild the consequences might be, but also how mild, and get a sense of the shape of the distribution in between. There are a number of methods to assist with this that I will come back to in later articles, but for now we assume we have a sense of a high value (say a 90th percentile, a P90), a low value (P10) and enough information about the distribution that we can calculate a variance.
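As one illustration, and only as an assumption for the sketch, suppose the impact distribution is lognormal; then the P10 and P90 pin down its parameters, and the mean and variance follow directly:

```python
import numpy as np
from scipy.stats import norm

def lognormal_from_p10_p90(p10, p90):
    """Fit a lognormal impact distribution to a P10/P90 range.
    The lognormal shape is an assumption for illustration; other shapes work too."""
    z90 = norm.ppf(0.9)                       # ~1.2816, the standard normal 90th percentile
    mu = 0.5 * (np.log(p10) + np.log(p90))    # midpoint on the log scale
    sigma = (np.log(p90) - np.log(p10)) / (2 * z90)
    mean = np.exp(mu + 0.5 * sigma**2)
    var = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
    return mu, sigma, mean, var

# Example: impacts ranging from roughly 1,000 (P10) to 50,000 (P90) sickles
mu, sigma, mean, var = lognormal_from_p10_p90(1_000, 50_000)
```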

We can plot these ranges in our plane, as shown here. Such a representation contains a wealth of information: how often risk events occur and their average impact, as well as how bad those impacts can get, with the lower end of the range showing how the mean sits within the full range of outcomes. The diagonal location of the mean, as marked off with the dashed white lines, gives a sense of the overall severity of the risk, at least from an average annual impact perspective.

But we can do even better than this.

More meaningful metrics

Recognizing that average impact per year is a less-than-ideal metric for making decisions regarding high-impact, low-frequency risks, we may prefer to manage our risks by minimizing the impact we can expect in a 100-year perfect storm of risk misery, i.e. a 100-year risk storm. We may also decide to minimize the probability of wiping out our entire operating profit in a given year. Less extreme, we might want to minimize the 10-year storm or minimize the probability of failing to meet the profit-margin target we gave our investors.

Such metrics (impact at 1% or 10% probability, probability of wiping out cashflow, probability of failing to meet target) are incredibly powerful and drive highly focussed discussions about the effect of risk reduction measures on a company’s exposure. But the holistic picture that allows these discussions requires insight into the overall effect of every risk. Luckily you already did the hard work to enable this when you switched to frequencies and outcome ranges.

We’ve already assumed our risks are additive, so to get the combined impact of all these risks, we just need to add the uncertain variables represented by the frequencies and impacts we’ve plotted above. The simplest way to do this is with a Monte Carlo simulation.

Monte Carlo and meaningful metrics

A Monte Carlo simulation is essentially a game where we play our risks on a yearly basis a very large number of times, say 100,000. We start with a set of risks assessed in the risk plane as shown.

In game year one, we look at each of these risks in turn. First we ask how many times each of them occurs in this game year; essentially, we roll a die that is carefully weighted so that, over a large number of rolls, the numbers that come up reproduce the given frequency with the correct distribution. Then, for each time the risk event takes place, if it takes place, we ask how bad the impact was, now rolling a new kind of weighted die with a very large number of sides representing a continuum of impact. Then we add all the impacts from all the events for that game year and start on game year 2.
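Here is a minimal sketch of that game in Python, assuming (purely for illustration) Poisson-distributed event counts and lognormal impacts, with made-up frequencies and impact parameters rather than the ones behind the figures:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Each risk: (frequency per year, lognormal impact parameters mu and sigma)
# -- illustrative numbers only, not taken from the article
risks = [
    (0.5, np.log(2_000), 0.8),    # occurs every other year on average
    (2.0, np.log(500), 0.6),      # a couple of times a year
    (0.05, np.log(80_000), 1.0),  # rare but severe
]

n_years = 100_000
annual_losses = np.zeros(n_years)

for freq, mu, sigma in risks:
    # "How many times does this risk occur in each game year?"
    counts = rng.poisson(freq, size=n_years)
    for year in np.nonzero(counts)[0]:
        # "How bad is each occurrence?" -- one draw per event, summed into that year
        impacts = rng.lognormal(mu, sigma, counts[year])
        annual_losses[year] += impacts.sum()
```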

After 100,000 game years (which takes about 5 minutes to get through), we’re ready to start asking questions like: what does a bad year look like? If I write down the overall impact from all the risks in an ordered list, how bad is the 90,000th impact (this is the 90th percentile, broadly speaking a 10-year storm)? What about the 99,000th (the 100-year storm)? I can see where in the list the impact wipes out my target and where in the list I lose my shirt.
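Continuing the sketch above, those list positions can be read off directly:

```python
ordered = np.sort(annual_losses)          # worst game years sit at the end of the list
median = ordered[n_years // 2]            # 50,000th entry
p90 = ordered[int(0.90 * n_years)]        # 90,000th entry: broadly the 10-year storm
p99 = ordered[int(0.99 * n_years)]        # 99,000th entry: broadly the 100-year storm
```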

Typically we plot all the outcomes in order as shown here. We can quickly read off median impacts (50,000th in the ordered list of 100,000 shown with the dashed grey line — in this case a little over 2000 sickles), P90 (90,000th — 10,000 sickles) and P99 (99,000th — 200,000 sickles).

We can set thresholds and see the probability of meeting them. Say our operating profit is about 50,000 sickles. The red line shows where in the ordered list of outcomes that lies — about 97% along the list, from which we infer that the probability of our risks conspiring to wipe out our profit, or worse, is 3%.
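In the sketch, that threshold probability is simply the fraction of game years whose combined losses reach the red line:

```python
operating_profit = 50_000  # sickles, as in the example above
# Fraction of simulated years in which the risks wipe out the operating profit, or worse
p_wipeout = np.mean(annual_losses >= operating_profit)
```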

Likewise, we can read off the probability of failing to meet the target we’ve set ourselves for the year. In this case, the yellow line, which comes in at a little under 10%.

Next time

We’ll get deeper into the power of this approach next time. We’ll explore what we can do with it and also discuss simple methods of getting pretty close to the full Monte Carlo result without resorting to a Monte Carlo simulation. We’ll also talk about inevitabilities. Like tax.
