Black Swan. Red Herring.

Black Swans call for a radical reappraisal of the way we model, but we are just as deluded about their significance as we are about our ability to explain and predict them.

Graeme Keith
6 min read · May 11, 2021

--

According to Nassim Nicholas Taleb, the author of the highly influential book “The Black Swan”, a Black Swan event is characterized as follows:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

I stop and summarize the triplet: rarity, extreme ‘impact’, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

Black Swans are defined as much by features of our discourse about events as by the events themselves. This is particularly true of the third member of this triplet, which is the source of much of the misuse of the black swan concept: retrospective, but not prospective, predictability. Much hinges on whether we believe our inability to model Black Swan events is because they are inherently outside the scope of any modelling or because we are just rather limited in how we think about modelling.

Many authors and pundits take it to mean the former. Black Swans are categorically unpredictable, and our retrospective rationalization is delusional. As such, and given that “almost everything in our world” is explained by “a small number of Black Swans”, they argue we may as well throw in the towel on modelling altogether.

If, on the other hand, modelling naivete is all that prevents the successful anticipation and management of Black Swan events, then the occurrence of Black Swans is a roaring wake-up call to learn from the predictive failures of the past and do everything we can to improve our forecasting of the future.

Black Swan Ascendancy


The argument that Black Swans call for a complete abandonment of our modelling aspirations hangs on the belief that Black Swans are exclusively responsible for everything interesting that happens in the world.

Ironically, this belief in the ascendancy of Black Swan events is itself the product of the same narrative fallacy that Taleb persuasively argues is responsible for deluding us into believing we can explain past Black Swans, and thereby model them in the future.

Taleb describes how we retrospectively look back at events and construct inevitable narratives of causal contingency by which such events were bound to transpire. As Taleb writes:

…narrativity causes us to see past events as more predictable, more expected, and less random than they actually were

But, I would argue, this is exactly what we are doing when we look back at the complex unfolding of history, with its dense, tangled webs of influence, and ascribe its macroscopic trends to the exclusive agency of a handful of epoch-making Black Swan events. We are at least as deluded about the significance of Black Swan events as we are about our ability (or otherwise) to explain and predict them.

The poverty of our modelling paradigms


To claim as illusory the belief that Black Swan events explain almost everything is in no way to claim they aren’t important, nor does it defend our persistent inability to model them. I argue, however, that it does bring within reach the aspiration of using models to leverage our intelligence and experience to anticipate, understand and manage the critical risks we face. And it intensifies the urgency with which we must address the shortcomings of our current modelling paradigms. This is the most valuable legacy of the Black Swan concept.

Taleb himself is articulate and exhaustive in enumerating the inadequacies of our current modelling practice. They fall broadly into two categories:

Model inadequacies

  • The inadequacy of Gaussian models: models built on Gaussian distributions (including Brownian-motion models such as Black-Scholes), mean/variance models, and variance as a sole risk metric.
  • The inadequacy of archetypal models of uncertainty (coins, dice, card games) for capturing the characteristics of real-life uncertainty (the ludic fallacy).
  • The dominance of deterministic models and the blind faith we place in them, to the point of not even checking whether they work.
  • The failure to account adequately for structural features of the dynamics of complex systems, such as bifurcations to instability and chaotic behaviour.
  • The failure to situate models in their broader context and to understand how that context bears on the assumptions in our simple models.
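The first of these points lends itself to a quick, hypothetical sketch: the snippet below compares the chance of a "4-sigma" move under a Gaussian model with the same chance under a fat-tailed Student-t distribution rescaled to unit variance. The threshold, degrees of freedom and sample size are illustrative assumptions, not figures from Taleb.

```python
import math
import random

random.seed(0)

N = 200_000
THRESH = 4.0  # a "4-sigma" move (illustrative threshold)
DF = 3        # degrees of freedom for the fat-tailed alternative

# Analytic tail probability for a standard normal: P(X > THRESH).
normal_tail = 0.5 * math.erfc(THRESH / math.sqrt(2))

def student_t_unit_variance():
    """Sample a Student-t(DF) variate, rescaled to unit variance.

    Uses the representation t = Z / sqrt(V / df), with Z ~ N(0,1) and
    V ~ chi-squared(df); chi-squared(df) is Gamma(df/2, scale=2).
    """
    z = random.gauss(0.0, 1.0)
    v = random.gammavariate(DF / 2, 2.0)
    t = z / math.sqrt(v / DF)
    return t * math.sqrt((DF - 2) / DF)  # Var of t(df) is df/(df-2)

t_tail = sum(student_t_unit_variance() > THRESH for _ in range(N)) / N

print(f"P(X > {THRESH}) under a Gaussian model:     {normal_tail:.2e}")
print(f"P(X > {THRESH}) under fat tails (t, df=3):  {t_tail:.2e}")
```

Even with the variances matched, the fat-tailed model assigns the extreme move a probability orders of magnitude larger than the Gaussian — which is exactly why variance alone is a poor solo risk metric.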

Epistemological inadequacies and biases

  • We carefully select what we do model by what we can model, and we generalize from that to what we can’t.
  • We confirm our models with data they were built to explain and ignore everything that speaks against them (the confirmation fallacy).
  • We find spurious patterns in randomness.
  • We attribute causality to contingency (the narrative fallacy).
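The spurious-patterns point can be made concrete with a toy simulation (my own hypothetical example, not Taleb's): give a thousand imaginary "fund managers" ten years of fair coin flips and the best of them will look impressively skilled, purely by selection from noise.

```python
import random

random.seed(1)

N_FUNDS, N_YEARS = 1000, 10

# Each hypothetical "fund" beats the market in any given year with
# probability 0.5 -- pure coin-flipping, no skill anywhere.
records = [sum(random.random() < 0.5 for _ in range(N_YEARS))
           for _ in range(N_FUNDS)]

best = max(records)
streaks = sum(r >= 9 for r in records)

print(f"Best record among {N_FUNDS} coin-flippers: {best}/{N_YEARS} winning years")
print(f"Funds winning at least 9 of {N_YEARS} years: {streaks}")
```

With a thousand funds, a handful of 9- or 10-year winning records appear almost surely; narrated after the fact, each looks like strategy rather than luck.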

If we are to achieve our modelling aspirations, we must learn to make explicit what our models can and cannot reasonably tell us. We must examine and test our modelling assumptions, recognize their limitations, and appreciate the trade-off we make between faithful representation and computability. And we must test the predictions of our models against outcomes.

We must build uncertainty into our models and make uncertainty part of the process by which we use models to make decisions. In that process, we must align our categories of response, intervention and control with what we can reasonably know and what we cannot.
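What "building uncertainty into our models" can mean in the simplest case is sketched below: instead of running a toy economic model on best-guess inputs alone, we propagate assumed input spreads through it by Monte Carlo. All numbers are hypothetical illustrations.

```python
import random
import statistics

random.seed(2)

# A toy project-economics model (hypothetical numbers throughout):
# profit = price * volume - cost.
def profit(price, volume, cost):
    return price * volume - cost

# The deterministic "best guess" answer looks comfortably positive.
point_estimate = profit(10.0, 100.0, 900.0)

# Propagating assumed input uncertainty through the same model
# exposes a substantial probability of loss the point estimate hides.
samples = [
    profit(random.gauss(10.0, 2.0),
           random.gauss(100.0, 20.0),
           random.gauss(900.0, 50.0))
    for _ in range(100_000)
]

p_loss = sum(s < 0 for s in samples) / len(samples)

print(f"Point-estimate profit:  {point_estimate:.0f}")
print(f"Mean simulated profit:  {statistics.mean(samples):.0f}")
print(f"Probability of a loss:  {p_loss:.0%}")
```

The mean of the simulation agrees with the point estimate, but the distribution shows a sizeable downside the deterministic model is structurally blind to — the kind of information a decision process can actually act on.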

And where the underlying dynamics of the systems we interrogate entirely preclude prediction, we must broaden our notion of modelling: learn to use the qualitative and relational insights our models do give us, look to our vulnerabilities, and broaden the categories of both the threats we consider and our potential responses to them.

Finally, in our humility about what we can learn from our models, we must connect with other disciplines, bringing broader perspectives to bear and seeing how they align with our modelling insights.

Conclusion

These are the discussions we should be having. Instead, we seem endlessly stuck discussing whether such and such an event was or was not a Black Swan and whether the existence of Black Swan events precludes our ability to model them, as if they weren’t defined by our ability to model them in the first place.

It is in this sense that I claim the Black Swan is a red herring. The book, the concept and the ensuing debate have been enormously valuable in broadening public awareness of the mathematical and epistemological shortcomings in our modelling and the consequent need for new, broader paradigms to meet these challenges. But Black Swans are neither as omnipotent nor as inscrutable as they are often portrayed, and the concept has become a distraction. It deludes us into defeatism instead of animating us to do better.

--

Mathematical modelling for business and the business of mathematical modelling. See stochastic.dk/articles for a categorized list of all my articles on medium.