Constructive dissent

I am currently reading “Thinking in Bets” by Annie Duke. It is early days but I suspect this is a book with some useful things to say about creating the kind of corporate culture that truly reflects the values espoused in corporate mission statements. Actions speak louder than words, and she cites a practice employed by the American Foreign Service Association, which has not one but four awards for employees who have exhibited behaviours demonstrating initiative, integrity, intellectual courage and constructive dissent.

The quote below comes from the AFSA website, setting out the criteria employed for these awards.

Criteria for the Dissent Awards

The awards are for Foreign Service employees who have “exhibited extraordinary accomplishment involving initiative, integrity, intellectual courage and constructive dissent”. The awards publicly recognize individuals who have demonstrated the intellectual courage to challenge the system from within, to question the status quo and take a stand, no matter the sensitivity of the issue or the consequences of their actions. The issue does not have to be related to foreign policy. It can involve a management issue, consular policy, or, in the case of the recently established F. Allen “Tex” Harris Award, the willingness of a Foreign Service Specialist to take an unpopular stand, to go out on a limb, or to stick his/her neck out in a way that involves some risk.

https://www.afsa.org/constructive-dissent-awards

When safety proves dangerous …

… is the title of a post on the Farnam Street blog that provides a useful reminder of the problem of “risk compensation”; i.e. the way in which measures designed to make us safer can be a perverse prompt for us to take more risk because we feel safer. I want to explore how these ideas apply to bank capital requirements but will first outline the basic ideas covered by Farnam Street.

we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation.

The classic example is car safety measures such as improved tyres, ABS braking systems, seat belts and crumple zones designed to protect the driver and passengers. These have helped reduce car fatality rates for the people inside the car but not necessarily reduced accident rates given that drivers tend to drive faster and more aggressively because they can. Pedestrians are also at greater risk.
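The risk thermostat idea can be made concrete with a toy model. This is my own illustrative sketch, not something from the Farnam Street post, and all the numbers are invented: a driver chooses a speed that keeps perceived risk at a fixed target, so halving the risk per unit of speed simply doubles the chosen speed, while hazards the safety gear does not cover (pedestrians) get worse.

```python
# Toy model of risk homeostasis. Illustrative only; parameters are invented.
def chosen_speed(target_risk, risk_per_unit_speed):
    # Perceived risk = risk_per_unit_speed * speed, so the driver speeds up
    # until perceived risk equals the target ("thermostat" behaviour).
    return target_risk / risk_per_unit_speed

target = 1.0
before = chosen_speed(target, risk_per_unit_speed=0.10)  # no safety gear
after = chosen_speed(target, risk_per_unit_speed=0.05)   # gear halves risk per unit of speed

print(before, after)  # 10.0 20.0 -- the whole safety gain is spent on extra speed

# Pedestrian risk scales with speed and is NOT reduced by in-car safety gear,
# so it rises when the driver compensates.
pedestrian_risk_before = 0.02 * before
pedestrian_risk_after = 0.02 * after
print(pedestrian_risk_after > pedestrian_risk_before)  # True
```

The model is deliberately crude, but it captures why the fatality benefit inside the car can coexist with unchanged accident rates and worse outcomes for pedestrians.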

Farnam Street suggests the following lessons for dealing with the problem of risk compensation:

  1. Safety measures are likely to be more effective if they are less visible
  2. Measures designed to promote prudent behaviour are likely to be more effective than measures which make risky behaviour safer
  3. Recognise that sometimes it is better to do nothing if the actions we take just lead to an offset in risk behaviour somewhere else
  4. If we do make changes then recognise that we may have to put in place other rules to ensure the offsetting risk compensating behaviour is controlled
  5. Finally (and a variation on #3), recognise that making people feel less safe can actually lead to safer behaviour.

If you are interested in this topic then I can also recommend Greg Ip’s book “Foolproof” which offers a good overview of the problem of risk compensation.

Applying these principles to bank capital requirements

The one area where I would take issue with the Farnam Street post is where it argues that bailouts and other protective mechanisms contributed to the scale of the 2008 financial crisis because they led banks to take greater risks. There is no question that the scale of the crisis was amplified by the risks banks took, but it is less obvious to me that the bailouts created this problem.

The bailouts were a response to the problem that banks were too big to fail but I can’t see how they created this problem, especially given that the build-up of risk preceded the bailouts. Bailouts were a response to the fact that the conventional bankruptcy and restructure process employed to deal with the failure of non-financial firms simply did not work for financial firms.

It is often asserted that bankers took risks because they expected that they would be bailed out; i.e. that banks deliberately and consciously took risk on the basis that they would be bailed out. I can’t speak for banks as a whole but I have never witnessed that belief in the four decades that I worked in the Australian banking system. Never attribute to malice what can be equally explained by mistaken beliefs. I did see bankers placing excessive faith in the economic capital models that told them they could safely operate with reduced levels of capital. That illusion of knowledge and control is however a different problem altogether, largely to do with not properly understanding the distinction between risk and uncertainty (see here and here).

If I am right, that would suggest that making banks hold more capital might initially make them safer but might also lead to banks looking for ways to take more risk. This is a key reason why I think the answer to safer banks is not just making them hold higher and higher levels of common equity. More common equity is definitely a big part of the answer but one of the real innovations of Basel 3 was the development of new forms of loss absorbing capital that allow banks to be recapitalised by bail-in rather than bail-out.

If you want to go down the common-equity-is-the-only-solution path then it will be important to ensure that Farnam Street Rule #4 above is respected; i.e. bank supervisors will need to ensure that banks do not simply end up taking risks in places that regulation or supervision does not cover. This is not a set and forget strategy based on the idea that increased “skin in the game” will automatically lead to better risk management.

Based on my experience, the risk of common equity ownership being diluted by the conversion of this “bail-in” capital is a far more effective constraint on risk taking than simply requiring banks to hold very large amounts of common equity. I think the Australian banking system has this balance about right. The Common Equity Tier 1 requirement is calibrated to a level intended to make banks “Unquestionably Strong”. Stress testing suggests that this level of capital is likely to be more than sufficient for well managed banks operating with sensible risk appetites but banks (the larger ones in particular) are also required to maintain a supplementary pool of capital that can be converted to common equity should it be required. The risk that this might be converted into a new pool of dilutive equity is a powerful incentive to not push the boundaries of risk appetite.
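The dilution mechanics behind that incentive can be sketched with invented numbers; this is my own hypothetical illustration, not drawn from any actual bank or the Basel rules text:

```python
# Hypothetical illustration of dilution when bail-in capital converts to
# common equity. All figures are invented for the example.
shares_outstanding = 1_000   # existing common shares (millions)
bail_in_notional = 5_000     # convertible capital instruments triggered ($m)
conversion_price = 2.50      # assumed conversion price per share ($)

new_shares = bail_in_notional / conversion_price
ownership_after = shares_outstanding / (shares_outstanding + new_shares)

print(new_shares)       # 2000.0 new shares issued to bail-in holders
print(ownership_after)  # existing holders fall from 100% to ~33% ownership
```

Even on these rough numbers, pre-existing shareholders lose two thirds of their ownership on conversion, which is why the mere prospect of triggering conversion disciplines risk appetite.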

Tony – From the Outside

Navigating a radically uncertain world

The distinction between risk and uncertainty is a long running area of interest for me so I have enjoyed reading John Kay and Mervyn King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. My initial post on the book offered an overview of the content and a subsequent post explored Kay and King’s analysis of why the world is prone to radical uncertainty.

This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.

No surprise that the answer from their perspective is not the pursuit of maximum expected value based on a priori assumptions of what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that there are some problems that can be solved this way. Games of chance, where you get repeated opportunities to play the odds, are one, but Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.

It is not just that these models do not offer any useful insight into these bigger world choices. They argue, convincingly I think, that these types of precise quantitative models can also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced model-based approaches to bank capital requirements introduced under Basel II tended to encourage banks to take (and bank supervisors to approve) more leverage.

Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see for example these posts on Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane, which discuss contributions these authors have made to the debate.

However, there were two things I found especially interesting in their analysis.

  • One was the argument that the “biases” catalogued by behavioural finance were not necessarily irrational when applied to a radically uncertain world.
  • The other was the emphasis they place on the idea of employing abductive reasoning and reference narratives to help navigate this radically uncertain future.

Behavioural Finance

Kay and King argue that some of the behaviours that behavioural finance deems to be irrational or biased might be better interpreted as sensible rules of thumb that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify “nudging” people to what behavioural finance deems to be rational.

Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…

…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.

I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times as if behavioural finance as a whole was a wrong turn, but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular, they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.

Abductive Reasoning and Reference Narratives

The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.

If we go back to first principles it could be argued that the test of rationality is that the decisions we make are based on reasonable beliefs about the world and internal consistency. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.

The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.

Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.

Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

Reference Narratives

If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.

Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.

Good and Bad Reference Narratives

Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.

Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.

Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:

…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.

Using Reference Narratives

Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience but they caution against trying too hard to assign probabilities to scenarios. This I think is a point well made and something that I have covered in other posts (see here and here).

Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..

Scenario planning is a way of ordering thoughts about the future, not of predicting it.

The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.

Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgement, instinct and emotion that people bring to making decisions about an uncertain future.

We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.

Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.

‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’

Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation), quoted in the introduction to Ch 16: Challenging Narratives

These extracts from their book nicely capture the essence of their argument:

Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.

the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.

Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.


The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.

Summing Up

Overall a thought-provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.

I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.

Tony – From The Outside

Why we fail to prepare for disasters

Tim Harford (The Undercover Economist) offers a short and readable account here of some of the reasons why, faced with clear risks, we still fail to act. We can see the problem, typically one of many, but don’t do enough to manage or mitigate the risk. New Orleans’ experiences with severe weather events feature prominently, as does (not surprisingly) COVID-19.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

Why we fail to prepare for disasters; Tim Harford (The Undercover Economist)

Another big part of the problem is that the cost of being fully prepared can be more than we are willing to pay. Especially when there is continuous pressure to find cost economies in the here and now.

Serious scenarios are useful, but … no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

So the key points I took from his post:

  • Sometimes it can be something genuinely new and unexpected (i.e. Black Swan events) but risks we are well aware of can be equally damaging
  • Part of the problem is that we are social animals and take our cues from what the rest of the herd is doing (“normalcy bias” or “negative panic”)
  • Even where we understand the statistics and know that someone will be impacted, we tend to assume it will be someone else or someone else’s family (“optimism bias”)
  • We are especially bad at understanding risks that have an exponential driver (“exponential myopia”)
  • We are also quite good at finding reasons to justify ignoring risks we want to ignore or otherwise find inconvenient (“wishful thinking”)
  • Last, but far from least, efficiency is the enemy of resilience.
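The “exponential myopia” point is worth a minimal numerical illustration (my own invented figures, not Harford’s): a quantity that doubles on a fixed cycle looks manageable for weeks, then overwhelms.

```python
# "Exponential myopia": cases doubling every 3 days. Invented numbers.
cases = 100
doubling_days = 3

trajectory = {day: cases * 2 ** (day // doubling_days) for day in range(0, 31, 3)}

print(trajectory[9])   # day 9: 800 -- still looks like a containable problem
print(trajectory[30])  # day 30: 102400 -- a thousandfold increase in a month
```

Nothing about the process changes between day 9 and day 30; only our intuition, calibrated for linear change, misreads it.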

We need to remember that most of the factors listed above can also be useful in many other contexts (arguably most of the time). A tendency not to panic can be pretty useful and optimism has helped dreamers and ordinary people achieve many great things that have benefited the herd. Efficiency as a rule seems like a good thing to strive for.

Harford does not offer any easy answers but his post touches on issues that I have also been considering in Kay and King’s book titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. I have done a couple of posts on that book already (here and here) and am working on a final one that focuses on Chapters 8-16 which set out their ideas for how we navigate a world prone to radical uncertainty.

Tony – From the Outside

The why of Radical Uncertainty

A recent post offered an overview of a book by John Kay and Mervyn King titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. It is a rich topic and this post covers the underlying drivers that tend to result in radically uncertain outcomes.

Kay and King nominate “reflexivity” as a key driver of radical uncertainty

The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers … although their perspective on the problem and its solution would be very different.

Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.

Kay and King, Chapter 3: “Radical Uncertainty is Everywhere”

Radical uncertainty also features in Richard Bookstaber’s book “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. Bookstaber identifies four broad phenomena he argues are endemic to financial crises:

Emergent phenomena.
“When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence”.

Non-ergodicity.
“An ergodic process … is one that does not vary with time or experience.
Our world is not ergodic—yet economists treat it as though it is.”

Radical uncertainty.
“Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions.”

Computational irreducibility.
“There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived.”

Bookstaber, Chapter 2: Being Human

If you want to delve into the detail of why the world can be radically uncertain then Bookstaber arguably offers the more detailed account; albeit one couched in technical language like emergent phenomena, ergodicity and computational irreducibility. In Chapter 10 he lays out the ways in which an agent based modelling approach to the problem of radical uncertainty would need to specify the complexity of the system in a structured way that takes account of the amount of information required to describe the system and the connectedness of its components. Bookstaber also offers examples of emergent phenomena in seemingly simple systems (e.g. John Conway’s “Game of Life”) which give rise to surprisingly complex outcomes.

I am not sure if either book makes this point explicitly but I think there is also an underlying theme in which the models that provide the illusion of control over an uncertain future create an incentive to “manage” risk in ways that increase the odds of bad outcomes based on insufficient resilience. That seems to be the clear implication of Kay and King’s discussion of the limits of finance theory (Chapter 17: The World of Finance). They acknowledge the value of the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama but highlight the ways in which it has failed to live up to its promise.

We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.

Kay and King, Chapter 17 The World of Finance

I plan to do one more post exploring the ways in which we navigate a world of radical uncertainty.

Tony (From the Outside)

Probabilities disguising uncertainty – Part II

This behavior makes one blind to all sorts of things. 

The first blind spot … is that it treats uncertain events – items of unknowable incidence and severity – as if they were risks that could be estimated probabilistically. 

Epsilon Theory ; “Lack of Imagination” 14 March 2020

One of my recent posts drew attention to an article by John Kay promoting a book he has co-written with Mervyn King on the topic of “radical uncertainty”. Epsilon Theory offers another useful perspective on the ways in which extending probabilistic thinking beyond its zone of validity can cause us to miss the big picture.

The Epsilon Theory post focuses on the COVID-19 fallout currently playing out but is also worth reading for the broader challenges it offers anyone trying to use models and probabilities to manage real world outcomes …

Tony

The “skin in the game” argument for more common equity

One of the traditional arguments for higher common equity requirements is that it increases the shareholders’ “skin in the game” and thereby creates an incentive to be more diligent and conservative in managing risk.

This principle is true up to a point but I believe more common equity mostly generates this desirable risk management incentive when the extra skin in the game (aka capital) is addressing a problem of too little capital. It is much less obvious that more capital promotes more conservative risk appetite for a bank that already has a strong capital position.

In the “too little” capital scenarios, shareholders confronted with a material risk of failure, but limited downside (because they have only a small amount of capital invested), have an incentive to take large risks with uncertain payoffs. That is clearly undesirable but it is not a fair description of the risk reward payoff confronting bank shareholders who have already committed substantial increased common equity in response to the new benchmarks of what it takes to be deemed a strong bank.

The European Systemic Risk Board published some interesting research on this question in a paper titled “Has regulatory capital made banks safer? Skin in the game vs moral hazard”. I have copied the abstract below which summarises the key conclusions.

Abstract: The paper evaluates the impact of macroprudential capital regulation on bank capital, risk taking behaviour, and solvency. The identification relies on the policy change in bank-level capital requirements across systemically important banks in Europe. A one percentage point hike in capital requirements leads to an average CET1 capital increase of 13 percent and no evidence of reduction in assets. The increase in capital comes at a cost. The paper documents robust evidence on the existence of substitution effects toward riskier assets. The risk taking behavior is predominantly driven by large and less profitable banks: large wholesale funded banks show less risk taking, and large banks relying on internal ratings based approach successfully disguise their risk taking. In terms of overall impact on solvency, the higher risk taking crowds-out the positive effect of increased capital.

I have only skimmed the paper thus far and have reservations regarding how they measure increased risk. As I understand it, the increased riskiness the analysis measures is based on increases in average risk weights. It was not clear how the analysis distinguished changes in portfolio riskiness from changes in the risk weight measure. That said, the overall conclusions seem intuitively right.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure-wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
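
The weighting rule above can be sketched as a simple blended estimate. This is my own illustration of the idea, not a formula from the book: the further the activity sits toward the luck end of the continuum, the more weight goes on the base rate.

```python
def blended_estimate(base_rate: float, specific_evidence: float,
                     luck_weight: float) -> float:
    """Blend the base rate with case-specific evidence.

    luck_weight is a judgement call in [0, 1]: 1.0 means outcomes are
    essentially luck (rely entirely on the base rate), 0.0 means pure
    skill (rely entirely on the specific case).
    """
    if not 0.0 <= luck_weight <= 1.0:
        raise ValueError("luck_weight must be between 0 and 1")
    return luck_weight * base_rate + (1.0 - luck_weight) * specific_evidence

# Hypothetical example: a fund manager beat the market by 8% last year
# (specific evidence), but the base rate for active managers is roughly
# 0% excess return. If stock picking sits well toward the luck end
# (say luck_weight = 0.8), most of the weight goes to the base rate and
# the blended forecast lands much closer to zero than to 8%.
print(blended_estimate(base_rate=0.0, specific_evidence=0.08, luck_weight=0.8))
```
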

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) the “Three Question” approach, 2) simulation and 3) True Score Theory. I will focus here on the first method, which asks:

  1. Can you easily assign a cause to the effect you are seeking to predict? In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict), whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both kinds of activity but luck is likely to be a more significant factor in the latter group.
  2. What is the rate of reversion to the mean? Slow reversion is consistent with activities dominated by skill, while rapid reversion indicates that luck is the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Is there evidence that expert prediction is useful? When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is to figure out how large a sample size you need to determine if there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has simply not been collected, but also because a reliable base rate may not exist at all.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
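
The cumulative advantage mechanism described above is easy to reproduce in a toy simulation (my illustration, not code from the book): each new listener picks a song in proportion to its current play count, with a small chance of picking at random. Even though every song starts equal, a handful end up capturing most of the plays.

```python
import random

def simulate_cumulative_advantage(n_songs: int = 100,
                                  n_listeners: int = 20_000,
                                  explore: float = 0.05,
                                  seed: int = 42) -> list[int]:
    """Each listener picks a song in proportion to its current play
    count (preferential attachment), with a small `explore` chance of
    picking uniformly at random. Returns play counts per song."""
    rng = random.Random(seed)
    plays = [1] * n_songs  # every song starts with one play ("equal skill")
    for _ in range(n_listeners):
        if rng.random() < explore:
            choice = rng.randrange(n_songs)
        else:
            choice = rng.choices(range(n_songs), weights=plays)[0]
        plays[choice] += 1
    return plays

counts = sorted(simulate_cumulative_advantage(), reverse=True)
# Heavy tail: the top few songs dwarf the rest, despite identical starts.
print(counts[:5], counts[-5:])
```

The key design point is the feedback loop: the `weights=plays` argument means popularity feeds on itself, which is what generates the skewed, power-law-like outcome rather than a bell curve.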

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
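
The claim that the “average” loses meaning under a power law can be demonstrated with a short sketch (my example, using a Pareto distribution drawn via inverse transform sampling): when the tail exponent alpha is at or below 1 the theoretical mean is infinite, and the sample mean keeps drifting upward as rare huge values arrive, rather than settling down.

```python
import random

def pareto_sample(alpha: float, n: int, seed: int) -> list[float]:
    """Draw n values from a Pareto distribution with tail exponent
    alpha and minimum value 1, via inverse transform sampling."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

# With alpha = 0.9 the theoretical mean is infinite: growing the sample
# does not stabilise the sample mean, because it is dominated by the
# rare, very large draws - much like loan losses in a severe downturn.
for n in (1_000, 10_000, 100_000):
    sample = pareto_sample(alpha=0.9, n=n, seed=7)
    print(n, sum(sample) / n)
```

By contrast, for a tail exponent comfortably above 2 the mean and variance both exist and the sample mean behaves as the usual statistics intuition expects.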

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck-skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – ask what evidence would prove your current belief wrong
  4. Think carefully about feedback and rewards; high quality feedback is key to high performance. Where skill is more important, deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; to maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different
  6. Develop aids to guide and improve your skill; on the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic, so the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel; checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions; where your opponent is more skilful or just stronger, try to inject more luck into the interaction
  8. Make reversion to the mean work for you; understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but must also recognise how much we don’t know. The realm may change such that old rules no longer apply, and there are places where statistics simply don’t apply
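
Point 8 can be made concrete. A standard statistical result (not specific to the book) is that the best estimate of a follow-up outcome shrinks toward the mean in proportion to the correlation between successive outcomes:

```python
def expected_next_outcome(observed: float, mean: float, r: float) -> float:
    """Estimate the follow-up to an extreme result.

    r is the correlation coefficient between successive outcomes:
    r = 1 implies pure skill (no reversion to the mean), while
    r = 0 implies pure luck (full reversion to the mean).
    """
    return mean + r * (observed - mean)

# Hypothetical example: a sales team posts 140 against a long-run mean
# of 100. If year-on-year results correlate at 0.4, the best guess for
# next year is well below the extreme result just observed:
print(expected_next_outcome(observed=140, mean=100, r=0.4))  # 116.0
```

This is why the rate of reversion itself is informative: a high correlation (slow reversion) is evidence of skill, while a low correlation (fast reversion) points to luck.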

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Worth Reading “The Money Formula” by Paul Wilmott and David Orrell.

The full title of this book, co-written by Paul Wilmott and David Orrell, is “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took over the Markets”. There are plenty of critiques of modelling and quantitative finance by outsiders throwing rocks, but Wilmott is a quant and brings an insider’s technical knowledge to the question of what these tools can do, can’t do and, perhaps most importantly, should not be used to do. Consequently, the book offers a more nuanced perspective on the strengths and limitations of quantitative finance, as opposed to the “let’s scrap the whole thing” school of thought. I have made some more detailed notes which follow the structure of the book but this post focuses on a couple of ideas I found especially interesting or useful.

I am not a quant so my comments should be read with that in mind, but the core idea I took away is that, much as quants would want it otherwise, markets are not determined by fundamental laws, deterministic or probabilistic, that allow risk to be measured with precision. These ideas work reasonably well within their “zone of validity” but a more complete answer (or model) has to recognise where the zones stop and uncertainty rules. Wilmott and Orrell argue that market outcomes are better thought of as the “emergent result of complex transactions”. The role of money in these emergent results is especially important, as is the capacity of models themselves to materially reshape the risk of the markets they are attempting to measure.

The Role of Money

Some quotes drawn from Chapter 8 let the authors speak for themselves on the role of money …

“Consider … the nature of money. Standard economic definitions of money concentrate on its roles as a “medium of exchange,” a “store of value,” and a “unit of account.” Economists such as Paul Samuelson have focused in particular on the first, defining money as “anything that serves as a commonly accepted medium of exchange.” … Money is therefore not something important in itself; it is only a kind of token. The overall picture is of the economy as a giant barter system, with money acting as an inert facilitator.” (emphasis added)

“However … money is far more interesting than that, and actually harbors its own kind of lively, dualistic properties. In particular, it merges two things, number and value, which have very different properties: number lives in the abstract, virtual world of mathematics, while valued objects live in the real world. But money seems to be an active part of the system. So ignoring it misses important relationships. The tension between these contradictory aspects is what gives money its powerful and paradoxical qualities.” (emphasis added)

“The real and the virtual become blurred, in physics or in finance. And just as Newtonian theories break down in physics, so our Newtonian approach to money breaks down in economics. In particular, one consequence is that we have tended to take debt less seriously than we should.” (emphasis added)

“Instead of facing up to the intrinsically uncertain nature of money and the economy, relaxing some of those tidy assumptions, accepting that markets have emergent properties that resist reduction to simple laws, and building a new and more realistic theory of economics, quants instead glommed on to the idea that, when a system is unpredictable, you can just switch to making probabilistic predictions.” (emphasis added)

“The efficient market hypothesis, for example, was based on the mechanical analogy that markets are stable and perturbed randomly by the actions of atomistic individuals. This led to probabilistic risk-analysis tools such as VaR. However, in reality, the “atoms” are not independent, but are closely linked … The result is the non-equilibrium behaviour … observed in real markets. Markets are unpredictable not because they are efficient, but because of a financial version of the uncertainty principle.” (emphasis added)

The Role of Models

Wilmott & Orrell devote a lot of attention to the ways in which models no longer just describe, but start to influence, the markets being modelled, mostly by encouraging people to take on more risk based in part on a false sense of security …

“Because of the bankers’ insistence on treating complex finance as a university end-of-term exam in probability theory, many of the risks in the system are hidden. And when risks are hidden, one is led into a false sense of security. More risk is taken so that when the inevitable happens, it is worse than it could have been. Eventually the probabilities break down, disastrous events become correlated, the cascade of dominoes is triggered, and we have systemic risk …. None of this would matter if the numbers were small … but the numbers are huge” (Chapter 10 – emphasis added)

They see High Frequency Trading as the area most likely to give rise to a future systemic crisis, but also make a broader point about the tension between efficiency and resilience …

“With complex systems, there is usually a trade-off between efficiency and robustness …. Introducing friction into the system – for example by putting regulatory brakes on HFT – will slow the markets, but also make them more transparent and reliable. If we want a more robust and resilient system then we probably need to agree to forego some efficiency” (Chapter 10 – emphasis added)

The Laws of Finance

Wilmott and Orrell note the extent to which finance has attempted to identify laws which are analogous to the laws of physics and the ways in which these “laws” have proved to be more of a rough guide.

“… the “law of supply and demand” … states that the market for a particular product has a certain supply, which tends to increase as the price goes up (more suppliers enter the market). There is also a certain demand for the product, which increases as the price goes down.”

“… while the supply and demand picture might capture a general fuzzy principle, it is far from being a law. For one thing, there is no such thing as a stable “demand” that we can measure independently – there are only transactions.”

“Also, the desire for a product is not independent of supply, or other factors, so it isn’t possible to think of supply and demand as two separate lines. Part of the attraction of luxury goods – or for that matter more basic things, such as housing – is exactly that their supply is limited. And when their price goes up, they are often perceived as more desirable, not less.” (emphasis added)

This example is relevant for banking systems, such as Australia’s, where residential mortgage lending dominates the balance sheets of the banks. Even more so given that public debate of the risks associated with housing often seems to be predicated on the economics 101 version of the laws of supply and demand.

The Power (and Danger) of Ideas

A recurring theme throughout the book is the way in which economists and quants have borrowed ideas from physics without recognising the limitations of the analogies and assumptions they have relied on to do so. Wilmott and Orrell credit Sir Isaac Newton as one of the inspirations behind Adam Smith’s idea of the “Invisible Hand” co-ordinating the self-interested actions of individuals for the good of society. When the quantum revolution saw physics embrace a probabilistic approach, economists followed.

I don’t think Wilmott and Orrell make this point directly, but a recurring thought while reading the book was the power of ideas not just to interpret the underlying reality but also to shape the way the economy and society develop, not always for the better.

  • Economic laws that drive markets towards equilibrium as their natural state
  • The “invisible hand” operating in markets to reconcile individual self interest with optimal outcomes for society as a whole
  • The Efficient Market Hypothesis as an explanation for why markets are unpredictable

These ideas have widely influenced quantitative finance in a variety of domains and they all contribute useful insights; the key is to not lose sight of their zone of validity.

“… Finance … took exactly the wrong lesson from the quantum revolution. It held on to its Newtonian, mechanistic, symmetric picture of an intrinsically stable economy guided to equilibrium by Adam Smith’s invisible hand. But it adopted the probabilistic mathematics of stochastic calculus.” (Chapter 8 – emphasis added)

Where to from here?

It should be obvious by now that the authors are arguing that risk and reward cannot be reduced to hard numbers in the way that physics has used similar principles and tools to generate practical insights into how the world works. Applying a bit of simple math in finance seems to open the door to getting some control over an unpredictable world and, even better, to pursuing optimisation strategies that promise the cognoscenti a better balance between risk and reward. There is room for more complex math as well for those so inclined, but the book sides with the increasingly widely held view that simple math is enough to get you into trouble and further complexity is best avoided where possible.

Wilmott and Orrell highlight mathematical biology in general, and a book by Jim Murray on the topic, as a source of better ways to approach many of the more difficult modelling challenges in finance and economics. They start by listing a series of phenomena in biological models that seem to be useful analogues for what happens in financial markets. They concede that the models used in mathematical biology are almost all “toy” models. None of these models offer precise or determined outcomes, but all can be used to explain what is happening in nature and offer insights into solutions for problems like disease control, epidemics and conservation.

The approach they advocate seems to have a lot in common with the Agent Based Modelling approach that Andrew Haldane references (see his paper “Tails of the Unexpected”) and that is the focus of Bookstaber’s book (“The End of Theory”).

In their words …

“Embrace the fact that the models are toy, and learn to work within any limitations.”

“Focus more attention on measuring and managing resulting model risk, and less time on complicated new products.”

“… only by remaining both skeptical and agile can we learn. Keep your models simple, but remember they are just things you made up, and be ready to update them as new information comes in.”

I fear I have not done the book justice but I got a lot out of it and can recommend it highly.