How high functioning in other intellectual disciplines can be an obstacle to learning mathematics

In a famous and controversial public lecture in 1959, the scientist and author C. P. Snow lamented the increasing polarization of western intellectual activity into two distinct “cultures”: science, engineering and mathematics on the one hand, and what he called literary humanities on the other. Snow argued that the phenomenon was particularly pernicious, because the ranks of the political class were increasingly drawn from the latter, leading to an under-representation of scientific insight in the ruling elite — a predicament cast into vivid contemporary relief in the light of diverse national responses to the unfolding of the COVID-19 pandemic.

Snow…



A layman’s look at the foundations of Frequentism and Bayesianism and how you can have the best of both.

What is probability? We often hear that there are two schools of thought, Bayesian and Frequentist; that Frequentists believe in an objective measure of probability, which we can only access through large numbers of repeated experiments; and that Bayesians, on the other hand, hold that there is no objective measure of probability, but rather that probability is an inherently subjective measure of belief, which we can only refine and modify with data.

As I discuss in my article “Why probability is hard”, to the pupil of probability, this is all rather unsatisfactory. What about when you can’t repeat an experiment…
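The Bayesian refinement of belief described above can be sketched concretely. The coin-flip example and function names below are my own illustration, not the article's: a Bayesian starts with a Beta prior over the coin's bias and updates it with observed flips, while a frequentist simply reports the observed rate.

```python
# Hypothetical illustration: conjugate Beta-Binomial updating.
# A Beta(alpha, beta) prior updated with observed heads/tails
# yields a Beta(alpha + heads, beta + tails) posterior.

def bayesian_update(alpha, beta, heads, tails):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior, then observe 7 heads in 10 flips.
a, b = bayesian_update(1, 1, heads=7, tails=3)
print(posterior_mean(a, b))  # 8/12, pulled toward 0.5 by the prior

# The frequentist point estimate is just the observed frequency.
print(7 / 10)
```

Note how the posterior mean (8/12 ≈ 0.67) sits between the raw frequency (0.7) and the prior mean (0.5); with more data, the two views converge.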



It’s not because you’re stupid or weren’t concentrating in school

In 1982, Kahneman, Slovic and Tversky published “Judgment under Uncertainty: Heuristics and Biases” and shattered humanity’s collective self-delusion that we had any functional intuition for even the most rudimentary problems in probability theory. This work has seen a renaissance in popularity since the publication of Kahneman’s rather more accessible “Thinking, Fast and Slow”.

Kahneman is broadly sympathetic to our struggles, but much of the follow-up literature and course material has a slightly disparaging, not to say patronizing odour, as if reliable probabilistic intuition were just a question of a little hard work and application.

I have taught probability theory to…


Only one of them is actually a book on leadership


The five books that taught me most about leadership are:

  1. On Grand Strategy by John Lewis Gaddis
  2. War and Peace by Leo Tolstoy
  3. The Open Society and its Enemies by Karl Popper
  4. Talleyrand by Duff Cooper
  5. Middlemarch by George Eliot

By leadership, I mean the act of forging and articulating a vision and then engaging and motivating individuals in its execution. …



The prevailing paradigm of data analytics is rooted in the philosophy of Logical Positivism. Institutions that do not primarily deal in data need a more flexible approach.

The prevailing paradigm of data analysis is buried deep in the arid soil of a philosophical school called Logical Positivism. The framework is characterized by three mythical tenets derived from Positivism:

  • Data are the only natural point of departure for all analysis
  • We ensure objectivity in our models by gathering data impartially
  • Data “speak”. That is, they guide our minds in the construction of models.

As I discuss in my article “Myths of Modelling: Data Speak”, Positivism — and, by association, its mythical beliefs — had been pretty thoroughly discredited by the 1960s. Unfortunately, as is often the case…


Thank you so much for your thoughtful and thought-provoking commentary.

It was absolutely not my intention to dismiss attempts to refute theories, only to argue that we shouldn't (and don't) *only* try to refute theories, that confirmation and other criteria can have a place in an open contest between rival theories, and that refutation is harder than we might think.

I left Kuhn out, because the article was already long enough, but Kuhn is clearly essential in all this. I'd argue, though, that Kuhn, like Popper, is most persuasive in his rejection of the positivist view that science ought…



We do not verify models by repeated attempts to falsify them, nor should we try; armed with causality and Bayesian probability we can do better.

The myth

The myth of falsification has two versions:

  • Science progresses through repeated attempts to falsify theories, conjectures or hypotheses. This is the descriptive myth.
  • Science ought to progress through repeated attempts to falsify theories, conjectures or hypotheses. This is the normative myth.

The normative myth of falsification is half of a widely preached doctrine of scientific practice, together with the positivist myth of the primacy and objectivity of data — the idea that data are the starting point for all analysis and that we ensure objectivity in our models by gathering data impartially and “letting the data speak”.

According to this…



Data keep us honest, but they don’t speak; they aren’t objective, and they are never free from the taint of theory.

The myth

The myth of speaking data has many related forms:

  • Data are the natural starting point for investigation and analysis, and one should approach data pure in mind and cleansed from all taint of theory
  • Data are objective, or at least somehow more objective than models, which without data are just story-telling with a strong odour of opinionation.
  • We ensure objectivity in our models by impartially gathering data and then letting the data “speak”.
  • Data objectively guide our minds in the construction of impartial models, motivated only by “the facts”

This is the myth of the primacy and probity of…


How to show what subjective scores do and do not tell you about the probability of success, based on past performance

Your business involves repeatedly predicting the outcome of a gamble — the success of acquisitions, the return from investments, the repayment of loans — and you have subjective assessments of similar gambles that you or others have made in the past in the form of some sort of score.


What does it mean that a probability is “correct” and how could you possibly know?

Mathematical modelling of uncertainty stands or falls on our ability to assess probabilities, but how do you know if your assessments are any good and what does that even mean? This article takes a pragmatic approach to answering these questions. We’ll start in the shadows of the valley of frequentism, step out on to the Bayesian foothills of subjectivism and stride on to the summits of pragmatism and an objective form of the Bayesian view.

From this standpoint, we will see what can go wrong with probabilistic assessments and discuss systematic biases. Finally, the pragmatist interpretation will bring us to…
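One pragmatic way to ask whether past probability assessments were “any good” is a calibration check: group the assessments into bins and compare each bin's average forecast with the observed frequency of success. The sketch below is my own minimal illustration of that idea, with made-up toy data; it is not taken from the article.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=5):
    """Bin forecast probabilities and compare each bin's mean forecast
    with the observed success frequency in that bin."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    table = []
    for idx in sorted(bins):
        pairs = bins[idx]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        table.append((mean_p, freq, len(pairs)))
    return table

# Toy data: past forecasts and whether each gamble succeeded (1) or failed (0)
forecasts = [0.1, 0.2, 0.15, 0.7, 0.8, 0.75, 0.9, 0.85]
outcomes  = [0,   0,   1,    1,   1,   0,    1,   1]
for mean_p, freq, n in calibration_table(forecasts, outcomes):
    print(f"forecast {mean_p:.2f} vs observed {freq:.2f} (n={n})")
```

A well-calibrated assessor's forecasts match the observed frequencies bin by bin; systematic gaps (say, events forecast at 0.8 occurring only half the time) are exactly the biases the article goes on to discuss. With so few observations per bin, of course, the comparison is only suggestive.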

Graeme Keith

Mathematical modelling for business and the business of mathematical modelling. See stochastic.dk/articles for a categorized list of all my articles on medium.
