JP Koning on stablecoins

Interesting post by JP Koning arguing that the market is progressively recognising the difference between the safer and riskier forms of stablecoins …

Tether vs the New York stablecoins http://jpkoning.blogspot.com/2022/01/tether-vs-new-york-stablecoins.html

Tony – From the Outside

Never let the facts stand in the way of a good story

Shout out to Tim Harford for this introduction to the study of how, in his words, ignorance can be deliberately produced. The technical term “agnotology” is, I suspect, unlikely to catch on but the underlying message is one worth understanding. At a minimum it is a handy addition to your Scrabble dictionary.

The article was originally published in March 2017 but I only came across it recently via this podcast interview Harford did with Cardiff Garcia on “The New Bazaar”. The context in 2017 was Donald Trump’s successful 2016 campaign for the US presidency, with a bit of Brexit thrown in, but this is a challenge that is not going away anytime soon.

Harford notes that it is tempting to think that the answer to the challenge posed by what has come to be known as a post truth society lies in a better process to establish the facts

The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts.

He affirms the need to have some agreement on how we distinguish facts from opinions and assertions but he cautions that this is unlikely to solve the problem. He cites the tobacco industry response to the early evidence that smoking causes cancer to illustrate why facts alone are not enough.

A good place to start is by delving into why facts alone are not enough – a few extracts from the article hopefully capture the main lessons

Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again…

Tempting as it is to fight lies with facts, there are three problems with that strategy…

The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind…

There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?…

In the war of ideas, boredom and distraction are powerful weapons.
The endgame of these distractions is that matters of vital importance become too boring to bother reporting…

There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth…

The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe…

We see what we want to see — and we reject the facts that threaten our sense of who we are…

When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”…

Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees…

When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.

So what are we to do?

Harford cites a study that explores the value of scientific curiosity

What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”

It is of course entirely possible that Tim Harford’s assessment simply appeals to my own bias. I will admit that one of the things I always looked for when hiring, or working with, people was curiosity. Such people are surprisingly rare but (IMHO) worth their weight in gold. An intellectually curious mind makes up for a lot of other areas where the person might not be perfect in terms of skills or experience. The general point (I think) also ties to the often-cited problem that people with lots of knowledge can sometimes be prone to not being so street smart. Nassim Taleb makes this argument in nearly everything he writes.

So Tim Harford might not be offering the entire answer but I think his article is worth reading on two counts

  • Firstly as a cautionary tale against expecting that all debates and disputes can be resolved by simply establishing the “facts”
  • Secondly as a reminder of the power of a curious mind and the value of the never-ending search for “what am I missing?”

Let me know what I am missing

Tony – From the Outside

A (the?) main move in finance

Matt Levine’s Money Stuff column (Bloomberg Opinion) had a great piece today which, while nominally focussed on the enduring question of “Looking for Tether’s Money”, is worth reading for the neat summary he offers of how finance turns risky assets into safe assets. The column is behind a paywall but you can access it for free by signing up for his daily newsletter.

This particular piece of the magic of finance is of course achieved by dividing up claims on risky assets into tranches with differing levels of seniority. In Matt’s words…

Most of what happens in finance is some form of this move. And the reason for that is basically that some people want to own safe things, because they have money that they don’t want to lose, and other people want to own risky things, because they have money that they want to turn into more money. If you have something that is moderately risky, someone will buy it, but if you slice it into things that are super safe and things that are super risky, more people might buy them. Financial theory suggests that this is impossible but virtually all of financial practice disagrees. 

Money Stuff, Matt Levine, Bloomberg, 7 October 2021

Matt also offers a neat description of how this works in banking

A bank makes a bunch of loans in exchange for senior claims on businesses, houses, etc. Then it pools those loans together on its balance sheet and issues a bunch of different claims on them. The most senior claims, classically, are “bank deposits”; the most junior claims are “equity” or “capital.” Some people want to own a bank; they think that First Bank of X is good at running its business and will grow its assets and improve its margins and its stock will be worth more in the future, so they buy equity (shares of stock) of the bank. Other people, though, just want to keep their money safe; they put their deposits in the First Bank of X because they are confident that a dollar deposited in an account there will always be worth a dollar.

The fundamental reason for this confidence is that bank deposits are senior claims (deposits) on a pool of senior claims (loans) on a diversified set of good assets (businesses, houses). (In modern banking there are other reasons — deposit insurance, etc. — but this is the fundamental reason.) But notice that this is magic: At one end of the process you have risky businesses, at the other end of the process you have perfectly safe dollars. Again, this is due in part to deposit insurance and regulation and lenders of last resort, but it is due mainly to the magic of composing senior claims on senior claims. You use seniority to turn risky things into safe things
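
Matt’s “senior claims on senior claims” point is easy to sanity-check numerically. The sketch below is my own toy construction, not Levine’s: it simulates a pool of risky loans with a junior (equity) tranche that absorbs losses first, and the parameter choices (default probability, recovery rate, equity share) are illustrative assumptions only.

```python
import random

def tranche_losses(n_loans=1000, default_prob=0.02, recovery=0.5,
                   equity_share=0.10, n_trials=2000, seed=42):
    """Share of simulated years in which the senior (deposit-like)
    tranche takes any loss at all. Losses hit the junior (equity)
    tranche first; seniors lose only once equity is wiped out."""
    rng = random.Random(seed)
    pool = float(n_loans)              # face value: 1 unit per loan
    equity = equity_share * pool       # junior slice absorbs first losses
    senior_hits = 0
    for _ in range(n_trials):
        defaults = sum(rng.random() < default_prob for _ in range(n_loans))
        pool_loss = defaults * (1 - recovery)  # loss given default = 50%
        if pool_loss > equity:                 # equity slice exhausted
            senior_hits += 1
    return senior_hits / n_trials

# Expected pool loss is ~1% a year, yet with a 10% equity slice the
# senior tranche is essentially never impaired.
print(tranche_losses())  # → 0.0
```

Even though the pool loses money in an average year, all of that volatility is concentrated in the equity slice; the senior claim stays untouched. That is the tranching “magic” in miniature.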

He then applies these principles to the alternative financial world that has been created around crypto assets to explore how the same factors drive both the need/demand for stablecoins and the ways in which crypto finance can meet the demand for safe assets (well “safer” at least).

The one part of his explanation I would push back on is that he could have delved deeper into the question of whether crypto users require stablecoins to exhibit the same level of risk-free exchangeability that we expect of bank deposits in the conventional financial world.

Matt writes…

The people who live in Bitcoin world are people like anyone else. Some of them (quite a lot of them by all accounts) want lots of risk: They are there to gamble; their goal is to increase their money as much as possible. Bitcoin is volatile, but levered Bitcoin is even more volatile, and volatility is what they want.

Others want no risk. They want to put their money into a thing worth a dollar, and be sure that no matter what they’ll get their dollar back. But they don’t want to do that in a bank account or whatever, because they want their dollar to live in crypto world. What they want is a “stablecoin”: A thing that lives on the blockchain, is easily exchangeable for Bitcoin (or other crypto assets) using the tools and exchanges and brokerages and processes of crypto world, but is always worth a dollar

The label “stable” is a relative term, so it is not obvious to me that people operating in the crypto financial asset world all necessarily want the absolute certainty of a coin that always trades at par with the underlying fiat currency. Maybe they do, but maybe some are happy with something that is stable enough to do the job of allowing them to make the exchanges they want to make in risky crypto assets. Certainly they already face other costs, like gas fees, when they trade, so maybe something that trades within an acceptable range of par value is good enough?
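
One way to make the “stable enough” idea concrete is as a tolerance band around par rather than a hard peg. The snippet below is purely illustrative; the function name, band width and price series are my own assumptions, not anything drawn from Matt’s column.

```python
def stable_enough(prices, par=1.00, tolerance=0.01):
    """A 'stable enough' coin: observed prices stay within a
    tolerance band around par, rather than always exactly at par."""
    return all(abs(p - par) <= tolerance * par for p in prices)

# Hypothetical price series for illustration only.
print(stable_enough([0.998, 1.002, 0.995, 1.006]))  # → True (within ±1%)
print(stable_enough([0.998, 0.960, 1.002]))         # → False (a 4% break)
```

If gas fees and exchange spreads already cost users more than 1% per trade, a coin that reliably stays inside a band like this may be doing all the work its holders actually need.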

What it comes down to is first defining exactly what kind of promise the stablecoin backer is making before we start down the path of defining exactly how that promise should be regulated. I do think that the future of stablecoins is likely to be more regulated and that is likely to be a net positive outcome. The term “stablecoin” however encompasses a wide variety of structures and intended uses. The right kind of regulation will be designed with these differences in mind. That said, some of the stablecoin issuers have not done themselves any favours in the loose ways in which they have defined their promise.

Matt’s column is well worth reading if you can access it but the brief outline above flags some of the key ideas and the issues that I took away. The way in which seniority in the loss hierarchy creates safety (or what Gary Gorton refers to as “information insensitivity”) is, I think, the key insight. I frequently encounter papers and articles discussing the role of bank deposits as the primary form of money in developed economies. These nearly always mention prudential regulation, supervision and deposit insurance but the role of deposit preference is often overlooked. For anyone looking to dig a bit deeper, I did a post here offering an Australian perspective on how this works.

Tony – From the Outside

The Paradox of DeFi

Nic Carter and Linda Jeng have produced a useful paper titled “DeFi Protocol Risks: the Paradox of DeFi” that explores the risks that DeFi will need to address and navigate if it is to deliver on the promises that they believe it can. There is of course plenty of scepticism about the potential for blockchain and DeFi to change the future of finance (including from me). What makes this paper interesting is that it is written by two people involved in trying to make the systems work as opposed to simply throwing rocks from the sidelines.

Linda Jeng has a regulatory background but is currently the Global Head of Policy at Transparent Financial Systems. Nic is a General Partner at a seed-stage venture capital firm that invests in blockchain-related businesses. The paper they have written will contribute a chapter to a book being edited by Bill Coen (former Secretary General of the Basel Committee on Banking Supervision) and Diane Maurice, to be titled “Regtech, Suptech and Beyond: Innovation and Technology in Financial Services” (RiskBooks).

Linda and Nic conceptually bucket DeFi risks into five categories: 

  1. interconnections with the traditional financial system, 
  2. operational risks stemming from underlying blockchains, 
  3. smart contract-based vulnerabilities, 
  4. other governance and regulatory risks, and 
  5. scalability challenges.

… and map out the relationships in this schematic

Conclusion: “No Free Lunch”

The paper concludes with the long-standing principle firmly entrenched in the traditional financial world: there is “no free lunch”. Risk can be transformed but it is very hard to eliminate completely. Expressed another way, there is an inherent trade-off in any system between efficiency and resilience.

Many of the things that make DeFi low cost and innovative also create operational risk and other challenges. Smart contracts sound cool, but when you frame them as “automated, hard-to-intervene contracts” it is easy to see how they can also amplify risks. Scalability is identified as an especially hard problem if you are not willing to compromise on the principles that underpinned the original DeFi vision.

The paper is worth a read but if you are time poor then you can also read a short version via this post on Linda Jeng’s blog. Izabella Kaminska (FT Alphaville) also wrote about the paper here.

Tony – From the Outside

A sceptical look at ESG

Anyone with more than a casual interest in business will be familiar with the increased focus on Environmental, Social and Governance (ESG) issues. There are sound arguments being made on both sides of the debate but I will admit upfront that I approach the topic with a somewhat ESG positive bias. Given my bias, it is all the more important to pay attention to what the sceptics are calling out rather than looking for affirmation amongst the true believers.

A post by Aswath Damodaran titled “Sounding good or Doing good? A Skeptical Look at ESG” is one of the better contributions to the ESG debate that I have encountered. I discussed one of his earlier contributions to the debate here and it is clear that he is not a fan of ESG. I am still working through his arguments but I like the analytical framework he employs and the way in which he supports his arguments with evidence.

I intend to do a couple of posts digging into the ESG debate using Damodaran’s post and a few other sources, but want to start by laying out his arguments with some very limited comments.

Damodaran starts by framing ESG as part of a tradition of business ideas that have tended to prove to be more noise than substance, describing the ESG “sales pitch” as follows

“Companies that improve their social goodness standing will not only become more profitable and valuable over time, we are told, but they will also advance society’s best interests, thus resolving one of the fundamental conflicts of private enterprise, while also enriching investors”

There is no doubt that ESG, like many other business ideas, is prone to being over-hyped. There is room to take issue with whether this is a fair description of the ESG movement as a whole. My gut feel is that the “sales pitch” version is not representative of ESG advocates who genuinely believe that ESG can address problems in the ways markets currently operate, but it will be more productive to focus on the specific weaknesses that Damodaran discusses.

Damodaran starts with the problem of measurement

“Any attempts to measure environment and social goodness face two challenges. 

– The first is that much of social impact is qualitative, and developing a numerical value for that impact is difficult to do. 

– The second is even trickier, which is that there is little consensus on what social impacts to measure, and the weights to assign to them.”  

Assuming the measurement issues can be resolved, the second problem is identifying exactly how incorporating ESG factors into the business model or strategy contributes to improving the value of a company. Damodaran uses the following generic model of value drivers to explore this question

Figure 1: The Drivers of Value

Using this framework, Damodaran identifies two ways in which a company can derive benefits from incorporating ESG principles into its business strategy

  1. Goodness is rewarded – i.e. companies behave in a socially responsible way because it creates positive outcomes for their business
  2. Badness is punished – i.e. companies behave in a socially responsible way because bad behaviour is punished

Damodaran also identifies a third scenario in which “The bad guys win”

“In this scenario, bad companies mouth platitudes about social responsibility and environmental consciousness without taking any real action, but customers buy their products and services, either because they are cheaper or because of convenience, employees continue to work for them because they can earn more at these companies or have no options, and investors buy their shares because they deliver higher profits. As a result, bad companies may score low on corporate responsibility scales, but they will score high on profitability and stock price performance.”

Damodaran argues that the evidence supports the following conclusions:

  1. A weak link to profitability

“There are meta studies (summaries of all other studies) that  summarize hundreds of ESG research papers, and find a small positive link between ESG and profitability, but one that is very sensitive to how profits are measured and over what period, leading one of these studies to conclude that “citizens looking for solutions from any quarter to cure society’s pressing ills ought not appeal to financial returns alone to mobilize corporate involvement”. Breaking down ESG into its component parts, some studies find that environment (E) offered the strongest positive link to performance and social (S) the weakest, with governance (G) falling in the middle.”

  2. A stronger link to funding costs

“Studies of “sin” stocks, i.e., companies involved in businesses such as producing alcohol, tobacco, and gaming, find that these stocks are less commonly held by institutions, and that they face higher costs for funding, from both equity and debt. The evidence for this is strongest in sectors like tobacco (starting in the 1990s) and fossil fuels (especially in the last decade), but these findings come with a troubling catch. While these companies face higher costs, and have lower value, investors in these companies will generate higher returns from holding these stocks.”

  3. Some evidence that ESG-focussed companies do reduce their risk of failure or exposure to disaster risk

“An alternate reason why companies would want to be “good” is that “bad” companies are exposed to disaster risks, where a combination of missteps by the company, luck, and a failure to build in enough protective controls (because they cost too much) can cause a disaster, either in human or financial terms. That disaster can not only cause substantial losses for the company, but the collateral reputation damage created can have long term consequences. One study created a value-weighted portfolio of controversial firms that had a history of violating ESG rules, and reported negative excess returns of 3.5% on this portfolio, even after controlling for risk, industry, and company characteristics. The conclusion in this study was that these lower excess returns are evidence that being socially irresponsible is costly for firms, and that markets do not fully incorporate the consequences of bad corporate behavior. The push back from skeptics is that not all firms that behave badly get embroiled in controversy, and it is possible that looking at just firms that are controversial creates a selection bias that explains the negative returns.”

Damodaran sums up his argument

“There is a weak link between ESG and operating performance (growth and profitability), and while some firms benefit from being good, many do not. Telling firms that being socially responsible will deliver higher growth, profits and value is false advertising. The evidence is stronger that bad firms get punished, either with higher funding costs or with a greater incidence of disasters and shocks. ESG advocates are on much stronger ground telling companies not to be bad, than telling companies to be good. In short, expensive gestures by publicly traded companies to make themselves look “good” are futile, both in terms of improving performance and delivering returns.”

There is a lot more to say on this topic. The evidence that certain types of companies do get punished for failing to be socially responsible is especially interesting. I see a fair degree of cynicism applied to the ESG stance adopted by the Australian banks but I suspect they are a good example of the type of company that will in fact benefit from making real investments in socially responsible business strategies.

Tony – From the Outside

What does the “economic perspective” add to an ICAAP?

… is the question I reflected on as I read the ECB Report on Banks’ ICAAP Practices (August 2020).

That I should be asking the question is even more curious given the years I spent working with economic capital but there was something in the ECB position that I was not comfortable with. There is nothing particularly wrong in the ways that the ECB envisages that an economic perspective can add value to a bank’s ICAAP. The problem (for me), I came to realise, is more the lack of emphasis on recognising the fundamental limitations of economic models. In short, my concern is that the detailed focus on risk potentially comes at the expense of an equally useful consideration of the ways in which a bank is subject to radical uncertainty.

The rest of this post offers an overview of what the ECB survey observed and some thoughts on the value of explicitly incorporating radical uncertainty into an ICAAP.

The ECB report sample set

The ECB report, based on a survey of 37 significant institutions it supervises, assesses the extent to which these organisations were complying (as at April 2019) with ECB expectations for how the ICAAP should be constructed and executed. The selected sample focuses on the larger (and presumably more sophisticated) banks, including all global systemically important banks supervised by the ECB. I am straying outside my area of expertise (Australian bank capital management) in this post but there is always something to learn from considering another perspective.

The ECB assessment on ICAAP practices

The ECB notes that progress has been made in some areas of the ICAAP. In particular, all banks in the survey have risk identification processes in place; they produce summary documents (“Capital Adequacy Statements” in ECB parlance) that enable bank management (not just the technical specialists) to engage with and take responsibility for the capital strength of their bank; and they incorporate stress testing into their capital planning process.

The ECB believes however that there is still a lot of room for improvement. The general area of concern is that the banks it supervises are still not paying sufficient attention to the question of business continuity. The ECB cites three key areas as being particularly in need of improvement if the ICAAPs are to play their assigned role in effectively contributing to a bank’s continuity:

  1. Data quality
  2. The application of the “Economic Perspective” in the ICAAP
  3. Stress testing

The value of building the ICAAP on sound data and testing the outcomes of the process under a variety of severe stress scenarios is, I think, uncontentious.

The value the economic perspective contributes is less black and white. Like many things in life, the challenge is to get the balance right. My perspective is that economic models are quite useful but they are far from a complete answer and dangerous when they create an illusion of knowledge, certainty and control.

The economic internal perspective

The ECB’s guide to the ICAAP defines the term “economic internal perspective” as follows:

“Under this perspective, the institution’s assessment is expected to cover the full universe of risks that may have a material impact on its capital position from an economic perspective. In order to capture the undisguised economic situation, this perspective is not based on accounting or regulatory provisions. Rather, it should take into account economic value considerations for all economically relevant aspects, including assets, liabilities and risks. …. The institution is expected to manage economic risks and assess them as part of its stress-testing framework and its monitoring and management of capital adequacy”

ECB Guide to the internal capital adequacy assessment process (ICAAP) – Principles, November 2018 (Paragraph 49 / pages 18-19)

So far so good – the key points seem (to me) to be quite fair as statements of principle.

The ECB sees value in looking beyond the accounting and regulatory measures that drive the reported capital ratios (the “normative perspective” in ECB terminology) and wants banks to consider “the full universe of risks that may have a material impact on its capital position”. The ECB Report also emphasises the importance of thinking about capital from a “business continuity” perspective and cites the “… unjustified inclusions of certain capital components (e.g. minority interests, Additional Tier 1 … or Tier 2 … instruments) … which can inflate the internal capital figures” as evidence of banks failing to meet this expectation. Again a fair point in my view.

These are all worthy objectives but I wonder

  • firstly about the capacity of economic capital models to reliably deliver the kinds of insights the ECB expects and
  • secondly whether there are more cost effective ways to achieve similar outcomes.

The value of a different perspective

As a statement of principle, bringing a different perspective to bear clearly has value. The examples that the ECB cites for ways in which the economic perspective can inform and enhance the normative perspective are all perfectly valid and potentially useful. My concern is that the ECB seems to be pursuing an ideal state in which an ICAAP can, with sufficient commitment and resources, achieve a degree of knowledge that enables a bank to control its future.

Business continuity is ultimately founded on a recognition that there are limits to what we can know about the future and I side with the risk philosophy that no amount of analysis will fundamentally change this.

The ECB’s economic perspective does not necessarily capture radical uncertainty

I have touched on the general topic of uncertainty and what it means for the ICAAP a couple of times in this blog. The ECB report mentions “uncertainty” twice: once in the context of assessing climate change risk

Given the uncertainty surrounding the timing of climate change and its negative consequences, as well as the potentially far-reaching impact in breadth and magnitude along several transmission channels via which climate-related risks may impact banks’ capital adequacy, it is rather concerning that almost one-third of the banks has not even considered these risks in their risk identification processes at all.

Page 39

… and then in the context of making allowances for data quality

However, … in an internal deep dive on risk quantification in 2019, half of the risk quantifications showed material deficiencies. This finding is exacerbated by the data quality issues generally observed and moreover by the fact that one-half of the banks does not systematically ensure that the uncertainty surrounding the accuracy of risk quantifications (model risk) is appropriately addressed by an increased level of conservatism. 

Page 54

There is no question that we should expect banks to be able to demonstrate that they are thinking about climate change and making allowances for model risk, along with a host of other plausible sources of adverse outcomes. It is a surprise that any relatively large and sophisticated bank might be found wanting in the ways in which these risks are being assessed, and the ECB is right to call that out.

However, it is equally surprising (for me at least) that the ECB did not seem to see value in systematically exploring the extent to which the ICAAPs of the banks it supervises deal with the potential for radical uncertainty.

Business continuity is far more likely if banks can also demonstrate that they recognise the limits of what they can know about the future and actively plan to deal with being surprised by the unexpected. In short one of the key ICAAP practices I would be looking for is evidence that banks have explicitly made allowances for the potential for their capital plan to have to navigate and absorb “unknown unknowns”.
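
By way of illustration only, the kind of explicit allowance I have in mind might look something like the toy calculation below. The structure, names and numbers are my own assumptions, not an ECB expectation or any bank’s actual methodology.

```python
def capital_target_ratio(modelled_requirement, regulatory_minimum,
                         uncertainty_overlay=0.010):
    """Toy sketch: all inputs are capital ratios expressed as fractions
    of risk-weighted assets (e.g. 0.105 = 10.5%). The overlay is an
    explicit margin held against 'unknown unknowns' that sit outside
    what the risk models can see."""
    base = max(modelled_requirement, regulatory_minimum)
    return base + uncertainty_overlay

# A bank whose models say 9.5% but whose regulatory minimum is 10.5%
# would target 11.5% once the uncertainty margin is layered on top.
print(round(capital_target_ratio(0.095, 0.105), 4))
```

The point of the sketch is not the numbers but the structure: the allowance for the unknowable is a separate, visible line item rather than something buried inside the model output.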

For what it is worth, my template for how a bank might make explicit allowances in the ICAAP for unknown unknowns is included in this post on the construction and calibration of cyclical capital buffers. My posts on the broader issue of risk versus uncertainty can be found via the following links:

Feel free to let me know what I am missing …

Tony – From the Outside

Worth reading – “Radical Uncertainty: Decision-Making for an Unknowable Future” by John Kay and Mervyn King

I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.

Kay and King cover a lot of ground but, simply put, their book is about

“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.” 

One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.

As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.

The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.

Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.

Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.

They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty

But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.

There are many factors that explain the current state of affairs but a key inflexion point in Kay and King’s account can be found in what they label “A Forgotten Dispute” (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, regarding the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that “subjective probability” is equally valid, and applied the mathematics developed for the analysis of frequency-based probabilities to these subjective probabilities.

“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not … over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”

Keynes and Knight lost the debate

Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions. 

Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.

There is a lot more to the book than interesting historical anecdotes on the history of economic ideas. The subject matter is rich and it crosses over topics covered previously in this blog including:

There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.

One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their concern to push back on the idea that Bayes solves the problem of using probability to understand uncertainty. At times however it reads like Bayes has no value at all. Read “The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.

Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.

I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.

Tony (From the Outside)

Probabilities disguising uncertainty

In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.

Barack Obama commenting on making the decision whether to attack a target which evidence suggested could be Osama Bin Laden

This quote is drawn from an article that John Kay published on his website under the title “The point of probabilities”. The point he is making is

  • Similar to one touched on in a Bank Underground post that I discussed in a recent post on my blog.
  • Short and worth reading

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad … [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First, form a generic judgement on the expected accuracy of your prediction (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next, look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome you are attempting to predict), rely more on the specific case
  • Finally, use the data to test whether the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
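
A minimal way to operationalise the weighting rule above is a linear blend of the two sources of evidence, with the weight set by where you judge the activity to sit on the continuum. The sketch below is my own hypothetical illustration (the function name and the numbers are not from the book):

```python
def blended_estimate(specific, base_rate, skill_weight):
    """Blend case-specific evidence with the base rate.

    skill_weight near 1.0 -> skill-dominated activity, trust the
    specific case; near 0.0 -> luck-dominated, shrink toward the
    base rate. (Illustrative only, not Mauboussin's formula.)
    """
    if not 0.0 <= skill_weight <= 1.0:
        raise ValueError("skill_weight must be between 0 and 1")
    return skill_weight * specific + (1.0 - skill_weight) * base_rate

# Hypothetical example: a fund returned 15% last year against a 7%
# market base rate. If we judge short-term investing to be mostly
# luck (weight 0.2), the forecast shrinks toward the base rate.
print(round(blended_estimate(0.15, 0.07, 0.2), 3))  # 0.086
```

The design choice here mirrors the rule in the text: as the judged role of luck grows, the weight on the specific case shrinks and the base rate dominates.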

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. Ask whether you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both kinds of activity but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean: slow reversion is consistent with activities dominated by skill, while rapid reversion suggests luck is the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Ask whether there is evidence that expert prediction is useful. When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is figuring out how large a sample you need to determine whether there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has simply not been collected, but also because a reliable base rate may not even exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
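
The cumulative-advantage dynamic described in these quotes can be sketched with a toy preferential-attachment simulation (my own illustration, not a model taken from the book): each new listener picks a song with probability proportional to its current play count, so an early random lead compounds over time.

```python
import random

random.seed(1)

def simulate(n_songs=10, n_listeners=10_000):
    """Toy cumulative-advantage model: popularity breeds popularity."""
    plays = [1] * n_songs  # every song starts with one play
    for _ in range(n_listeners):
        # Pick a song with probability proportional to its play count
        idx = random.choices(range(n_songs), weights=plays)[0]
        plays[idx] += 1
    return sorted(plays, reverse=True)

# Ten songs of identical "quality" end up with wildly unequal play
# counts: a few large values and lots of small ones.
print(simulate())
```

Re-running with different seeds shows that which song “wins” changes from run to run: the skewed outcome is produced by luck compounding, not by any difference in quality.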

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
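
The claim that the “average” can be meaningless is easy to demonstrate numerically. For a Pareto (power law) distribution with tail exponent alpha ≤ 1 the theoretical mean is infinite, so the sample average never settles down however much data you collect. This is a standard statistical fact, illustrated here with my own toy code rather than anything from the book:

```python
import random

random.seed(0)

# Pareto draws with tail exponent alpha <= 1 have an infinite mean:
# the sample average is dominated by occasional huge values and
# does not stabilise, however large the sample.
def running_averages(alpha, sizes):
    return {n: sum(random.paretovariate(alpha) for _ in range(n)) / n
            for n in sizes}

for n, avg in running_averages(0.9, (100, 10_000, 1_000_000)).items():
    print(n, avg)  # the "average" keeps drifting rather than converging
```

This is one way to see why averaging benign-period loan losses tells you little about losses over a full cycle.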

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there evidence that my belief is wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, what exactly the mean is. Note that extreme events are unlikely to be repeated and most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but also must recognise how much we don’t know. We must recognise that the realm may change such that old rules don’t apply and there are places where statistics don’t apply
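
The link in point 8 between the rate of reversion and the correlation coefficient can be made concrete with the standard shrinkage formula: the best estimate of the next outcome is the mean plus the correlation times the current deviation from the mean. This is a textbook result, sketched in my own words rather than taken from the book:

```python
def expected_next(observed, mean, r):
    """Shrink an observed outcome toward the mean at a rate set by
    the correlation r between successive outcomes (r = 1: pure skill,
    no reversion; r = 0: pure luck, full reversion to the mean)."""
    return mean + r * (observed - mean)

# Hypothetical example: a golfer shoots 64 where the field averages
# 72. If round-to-round scores correlate at r = 0.3, most of the
# 8-stroke edge is expected to evaporate next round.
print(round(expected_next(64, 72, 0.3), 1))  # 69.6
```

The lower the correlation (the larger luck's role), the faster the reversion, which is exactly the relationship point 8 asks you to recognise.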

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony