Why we fail to prepare for disasters

Tim Harford (The Undercover Economist) offers a short and readable account here of some of the reasons why, faced with clear risks, we still fail to act. We can see the problem, typically one of many, but don’t do enough to manage or mitigate the risk. New Orleans’ experience with severe weather events features prominently, as does (not surprisingly) COVID-19.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

Why we fail to prepare for disasters; Tim Harford (The Undercover Economist)

Another big part of the problem is that the cost of being fully prepared can be more than we are willing to pay, especially when there is continuous pressure to find cost economies in the here and now.

Serious scenarios are useful, but … no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

So the key points I took from his post:

  • Sometimes it can be something genuinely new and unexpected (i.e. Black Swan events) but risks we are well aware of can be equally damaging
  • Part of the problem is that we are social animals and take our cues from what the rest of the herd is doing (“normalcy bias” or “negative panic”)
  • Even where we understand the statistics and know that someone will be impacted, we tend to assume it will be someone else or someone else’s family (“optimism bias”)
  • We are especially bad at understanding risks that have an exponential driver (“exponential myopia”); a short sketch after this list illustrates the point
  • We are also quite good at finding reasons to justify ignoring risks we want to ignore or otherwise find inconvenient (“wishful thinking”)
  • Last, but far from least, efficiency is the enemy of resilience.
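
On the exponential point, here is a minimal sketch (the starting case count and doubling time are invented, purely for illustration) of how quickly an apparently small problem scales:

```python
# A minimal sketch (made-up numbers) of why exponential growth defeats intuition:
# cases doubling every 3 days look manageable for a week or two, then suddenly are not.
cases = 100           # assumed starting case count
doubling_days = 3     # assumed doubling time in days

for day in (7, 14, 21, 28):
    projected = cases * 2 ** (day / doubling_days)
    print(f"Day {day:2d}: roughly {projected:,.0f} cases")
```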

We need to remember that most of the factors listed above can also be useful in many other contexts (arguably most of the time). A tendency not to panic can be pretty useful and optimism has helped dreamers and ordinary people achieve many great things that have benefited the herd. Efficiency as a rule seems like a good thing to strive for.

Harford does not offer any easy answers but his post touches on issues that I have also been considering via Kay and King’s book titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. I have done a couple of posts on that book already (here and here) and am working on a final one focusing on Chapters 8–16, which set out their ideas for how we navigate a world prone to radical uncertainty.

Tony – From the Outside

The why of Radical Uncertainty

A recent post offered an overview of a book by John Kay and Mervyn King titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. It is a rich topic and this post covers the underlying drivers that tend to result in radically uncertain outcomes.

Kay and King nominate “reflexivity” as a key driver of radical uncertainty

The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers … although their perspective on the problem and its solution would be very different.

Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.

Kay and King, Chapter 3: Radical Uncertainty is Everywhere

Radical uncertainty also features in Richard Bookstaber’s book “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. Bookstaber identifies four broad phenomena that he argues are endemic to financial crises:

Emergent phenomena.
“When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence”.

Non-ergodicity.
“An ergodic process … is one that does not vary with time or experience. Our world is not ergodic—yet economists treat it as though it is.”

Radical uncertainty.
“Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions.”

Computational irreducibility.
“There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived.”

Bookstaber, Chapter 2: Being Human

If you want to delve into the detail of why the world can be radically uncertain then Bookstaber arguably offers the more detailed account, albeit one couched in technical language like emergent phenomena, ergodicity and computational irreducibility. In Chapter 10 he lays out the ways in which an agent-based modelling approach to the problem of radical uncertainty would need to specify the complexity of the system in a structured way, taking account of the amount of information required to describe the system and the connectedness of its components. Bookstaber also offers examples of emergent phenomena in seemingly simple systems (e.g. John Conway’s “Game of Life”) which give rise to surprisingly complex outcomes.
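
To make the emergence point concrete, here is a minimal sketch of Conway’s Game of Life (my own illustration, not Bookstaber’s code; the grid size, starting pattern and number of steps are arbitrary choices). Simple local rules produce patterns, such as the “glider” below, that nobody would predict from reading the rules alone:

```python
# Minimal sketch of Conway's "Game of Life", illustrating how complex behaviour
# emerges from very simple local rules applied on a wrap-around grid.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Apply one generation of Conway's rules to a 2D array of 0s and 1s."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider": five live cells that travel diagonally across the grid over time.
grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for _ in range(8):
    grid = step(grid)
print(grid)
```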

I am not sure if either book makes this point explicitly but I think there is also an underlying theme in which the models that provide the illusion of control over an uncertain future create an incentive to “manage” risk in ways that increase the odds of bad outcomes born of insufficient resilience. That seems to be the clear implication of Kay and King’s discussion of the limits of finance theory (Chapter 17: The World of Finance). They acknowledge the value of the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama but highlight the ways in which it has failed to live up to its promise.

We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.

Kay and King, Chapter 17: The World of Finance

I plan to do one more post exploring the ways in which we navigate a world of radical uncertainty.

Tony (From the Outside)

Worth reading – “Radical Uncertainty: Decision-Making for an Unknowable Future” by John Kay and Mervyn King

I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.

Kay and King cover a lot of ground but, simply put, their book is about

“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.” 

One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.

As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.

The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.

Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.

Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.

They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty

But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.

There are many factors that explain the current state of affairs but a key inflexion point in Kay and King’s account can be found in what they label “A Forgotten Dispute” (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, regarding the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that “subjective probability” is equally valid and applied the mathematics developed for the analysis of frequency-based probabilities to these subjective probabilities.

“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not … over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”

Keynes and Knight lost the debate

Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions. 

Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.

There is a lot more to the book than interesting anecdotes on the history of economic ideas. The subject matter is rich and crosses over several topics covered previously in this blog.

There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.

One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their concern to push back on the idea that Bayes solves the problem of using probability to understand uncertainty. At times however it reads like Bayes has no value at all. Read “The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.

Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.
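
As a concrete illustration of the kind of “puzzle” where Bayes’ rule earns its keep, here is a minimal sketch (the numbers are my own, purely for illustration) of updating a prior belief with an imperfect diagnostic test result:

```python
# A minimal Bayes' rule sketch with made-up numbers: a rare condition and an
# imperfect test. The "puzzle" has a well-defined answer because every
# probability in it is knowable, unlike a radically uncertain "mystery".
prior = 0.01          # P(condition) before seeing any evidence
sensitivity = 0.95    # P(positive test | condition)
false_positive = 0.05 # P(positive test | no condition)

# P(positive test) via the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: P(condition | positive test)
posterior = sensitivity * prior / p_positive
print(f"P(condition | positive test) = {posterior:.2%}")  # roughly 16%
```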

I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.

Tony (From the Outside)

Why the real economy needs a prudential authority too

Isabella Kaminska (FT Alphaville) offers an interesting perspective on ways in which the prudential initiatives in the areas of capital, liquidity and bail-in that have strengthened the banking sector post-GFC might be applied to the “real economy”.

The global financial crisis taught us that laissez-faire finance, when left to its own devices, tends to encourage extreme fragility by under capitalising the system for efficiency’s sake and making it far more systemically interdependent.

Pre-2008, banks operated on the thinnest of capital layers while taking extreme liquidity risk due to the presumption that wholesale liquidity markets would always be open and available to them. It was in this way that they saved on capital and liquidity costs and increased their return on equity.  

Regulatory responses to the crisis understandably focused on boosting resilience by hiking capital buffers, liquidity ratios and also by introducing new types of loss absorbing structures. While it’s still too early to claim regulatory efforts were a definitive success, it does seem by and large the measures have worked to stymie a greater financial crisis this time around.

But what the 2008 crisis response may have overlooked is that bolstering banks to protect the economy means very little if the underlying real economy remains as thinly spread and interconnected as the financial sector always used to be.

The assessment that these banking initiatives mean “very little” is possibly overstating the case. The problems we are facing today would be an order of magnitude greater if the banking system was not able to play its part in the solution.

The core point, however, is I think absolutely on the money: the focus on efficiency comes at the expense of resilience. More importantly, a free market system, populated by economic agents pursuing their own interests over relatively short-term time horizons, does not seem to be well adapted to dealing with this problem on its own. The lessons prudential regulators learned about the limits of efficient markets and market discipline also apply in the real world.

Isabella looks at the way prudential capital and liquidity requirements operate in banking and draws analogies in the real economy. With respect to liquidity, she notes for example,

“… the just-in-time supply chain system can be viewed as the real economy’s version of a fractional reserve system, with reserves substitutable for inventories.  

Meanwhile, the real economy’s presumption that additional inventories can be sourced from third party wholesale suppliers at a price as and when demand dictates, is equivalent to the banking sector’s presumption that liquidity can always be sourced from wholesale markets.

Though there is obviously one important difference.

Unlike the banking sector, the real economy has no lender of last resort that can magically conjure up more intensive care beds or toilet paper at the stroke of a keyboard when runs on such resources manifest unexpectedly.  

So what are our options? Companies could increase their inventories (analogous to holding more liquid assets) or build excess capacity (analogous to building up a capital buffer), but it is very difficult for companies acting independently to do this if their competitors choose the short-term, cost-efficient play and undercut them on price. The Prisoner’s Dilemma trumps market discipline and playing the long game.
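
A stylised payoff matrix makes the dilemma explicit. The sketch below uses entirely invented profit numbers for two hypothetical competing firms; whatever the rival does, running lean is the more profitable choice, so both end up lean even though both preparing would leave them better placed in a crisis:

```python
# A stylised (entirely made-up) payoff matrix for two firms choosing between
# holding resilient inventories ("prepare") and running lean ("lean").
payoffs = {  # (firm_a_choice, firm_b_choice): (firm_a_profit, firm_b_profit)
    ("prepare", "prepare"): (8, 8),
    ("prepare", "lean"):    (2, 10),
    ("lean",    "prepare"): (10, 2),
    ("lean",    "lean"):    (4, 4),
}

# Running lean dominates: it is firm A's best response to either rival strategy.
for b_choice in ("prepare", "lean"):
    a_prepare = payoffs[("prepare", b_choice)][0]
    a_lean = payoffs[("lean", b_choice)][0]
    best = "lean" if a_lean > a_prepare else "prepare"
    print(f"If the rival plays {b_choice!r}, firm A's best response is {best!r}")
```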

Isabella frames the problem as follows:

short-term supply shortages can only be responded to with real world manufacturing capability, which itself is constrained by physical availability. To that extent crisis responses can only really take two forms: 1) immediate investment in the build-up of new manufacturing capacity that can address the specific system shortages or, 2) the temporary reallocation of existing resources (with some adaptation cost) to new production purposes.

The problem with the first option is that it is not necessarily time efficient. Not every country has the capability to build two new hospitals from scratch in just 10 days. Nor the capacity to create unexpected supply just-in-time to deal with the problem.

New investment may not be economically optimal either. What happens to those hospitals when the crisis abates? Do they stand empty and idle? Do they get repurposed? Who will fund their maintenance and upkeep if they go unused? And at what cost to other vital services and goods?

Isabella’s proposal …

That leaves the reallocation of existing assets as the only sensible and economically efficient mitigatory response to surge-demand related crises like pandemic flu. But it’s clear that on that front we can be smarter about how we anticipate and prepare for such reallocation shocks. An obvious thing to do is to take a leaf out of banking regulators’ books, especially with regards to bail-inable capital, capital ratios and liquidity profiles.

Isabella offers two examples to illustrate her argument; one is power companies and the other is the health system.

She notes that power utilities manage demand-surge or supply-shock risk with interruptible contracts to industrial clients. She argues that these contracts equate to a type of bail-inable capital buffer, since the contracts allow utilities to temporarily suspend services to clients (at their cost) if and when critical needs are triggered elsewhere and supplies must be diverted.

I think she has a good point about the value of real options but I am less sure that bail-in is the right analogy. Bail-in is a permanent adjustment to the capital structure in which debt is converted to equity or written off, preferably the former in order to maintain the loss hierarchy that would otherwise apply in liquidation. A contract that enables a temporary adjustment to expenses is a valuable option but not really a bail-in style option.

What she is identifying in this power utility example is closer to a company buying real options from its customers: options that reduce operating leverage by enabling the company to cut the supply of service when it becomes expensive to provide. Companies with high operating leverage have high fixed costs relative to revenue and will, all other things being equal, tend to need more conservative financial leverage than companies with low operating leverage. So reduced operating leverage is a substitute for holding more capital.
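
A stylised sketch (all numbers invented) shows why this substitution works: two firms earn the same profit in normal times, but the one that can shed cost via interruptible contracts swings to a much smaller loss when revenue drops, so it needs a smaller capital buffer to survive the same shock.

```python
# Stylised illustration of operating leverage versus resilience (made-up numbers).
def profit(revenue: float, fixed_cost: float, variable_cost_ratio: float) -> float:
    """Profit = revenue less fixed costs less costs that scale with revenue."""
    return revenue - fixed_cost - variable_cost_ratio * revenue

for label, fixed, var in [("high operating leverage", 60.0, 0.2),
                          ("low operating leverage (interruptible contracts)", 20.0, 0.6)]:
    normal = profit(100.0, fixed, var)
    stressed = profit(70.0, fixed, var)
    print(f"{label}: profit {normal:.0f} normally, {stressed:.0f} under a 30% revenue shock")
```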

Isabella then explores the ways in which the liquidity, capital and bail-in analogies might be applied in healthcare. I can quibble with some of the analogies she draws to prudential capital and liquidity requirements. As an example of a capital requirement being applied to health care she proposes that …

“… governments could mandate makers of non-perishable emergency goods (such as medicines, toilet paper, face masks, hand sanitiser) to always keep two-weeks’ worth of additional supply on hand. And companies could also be mandated to maintain some share of total supply chain production capability entirely domestically, making them more resilient to globalised shocks”

Two weeks’ supply looks more like a liquidity buffer than a capital buffer but that does not make the ideas any less worth considering as a way of making the real economy more resilient. The banking system had its crisis during the GFC and the real economy is being tested this time around. There are arguments about whether the changes to banking went far enough but it is clearly a lot better placed to play its part in this crisis than it was in the last. The question Isabella poses is what kinds of structural change will be required to make the real economy more resilient in the face of the next crisis.

Another example of FT Alphaville being a reliable source of ideas and information to help you think more deeply about the world.

Tony (From the Outside)

IFRS 9 loan loss provisioning faces its first real test

My long-held view has been that IFRS 9 adds to the procyclicality of the banking system (see here, here, and here) and that the answer to this aspect of procyclicality lies in the way that capital buffers interact with loan loss provisioning (here, here, and here).

So it was interesting to see an article in the Financial Times overnight headlined “New accounting rules pose threat to banks amid virus outbreak”. The headline may be a bit dramatic but it does draw attention to the IFRS 9 problem I have been concerned with for some time.

The article notes signs of a backlash against the accounting rules with the Association of German Banks lobbying for a “more flexible handling” of risk provisions under IFRS 9 and warning that the accounting requirements could “massively amplify” the impact of the crisis. I agree that the potential exists to amplify the crisis but also side with an unnamed “European banking executive” quoted in the article saying “IFRS 9, I hate it as a rule, but relaxing accounting standards in a crisis just doesn’t look right”.

There may be some scope for flexibility in the application of the accounting standards (not my area of expertise) but that looks to me like a dangerous and slippery path to tread. The better option is flexibility in the capital requirements, capital buffers in particular. What we are experiencing is exactly the kind of adverse scenario that capital buffers are intended to absorb, so we should expect them to decline as loan loss provisions increase and revenue declines. More importantly, we should see this as a sign that the extra capital put in place post-GFC is performing its assigned task, not as a sign, in and of itself, of distress.
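
To make the procyclicality mechanism concrete, here is a minimal sketch (illustrative numbers only, not a faithful model of the standard) of how migrating loans from Stage 1 (12-month expected credit loss) to Stage 2 (lifetime expected credit loss) under IFRS 9 causes provisions to jump at precisely the point in the cycle when earnings are already under pressure:

```python
# Stylised IFRS 9 expected credit loss illustration (all numbers invented).
portfolio = 1_000.0        # total loan exposure
lgd = 0.40                 # loss given default
pd_12_month = 0.01         # 12-month probability of default (Stage 1)
pd_lifetime = 0.06         # lifetime probability of default (Stage 2)

def provisions(stage2_share: float) -> float:
    """Expected credit loss with a given share of the book migrated to Stage 2."""
    stage1 = portfolio * (1 - stage2_share) * pd_12_month * lgd
    stage2 = portfolio * stage2_share * pd_lifetime * lgd
    return stage1 + stage2

for share in (0.05, 0.30):  # benign conditions vs a downturn migration
    print(f"Stage 2 share {share:.0%}: provisions {provisions(share):.1f}")
```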

This experience will also hopefully reinforce the case for ensuring that the default position is that the Countercyclical Capital Buffer is in place well before there are any signs that it might be required. APRA flagged that it was looking at this policy in December 2019 but sadly has not had the opportunity to fully explore and implement the initiative.

Tony

Probabilities disguising uncertainty – Part II

This behavior makes one blind to all sorts of things. 

The first blind spot … is that it treats uncertain events – items of unknowable incidence and severity – as if they were risks that could be estimated probabilistically. 

Epsilon Theory; “Lack of Imagination”, 14 March 2020

One of my recent posts drew attention to an article by John Kay promoting a book he has co-written with Mervyn King on the topic of “radical uncertainty”. Epsilon Theory offers another useful perspective on the ways in which extending probabilistic thinking beyond its zone of validity can cause us to miss the big picture.

The Epsilon Theory post focusses on the COVID-19 fallout currently playing out but is also worth reading for the broader challenges it offers anyone trying to use models and probabilities to manage real-world outcomes …

Tony

Probabilities disguising uncertainty

In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.

Barack Obama, commenting on the decision whether to attack a target which evidence suggested could be Osama Bin Laden

This quote is drawn from an article that John Kay published on his website under the title “The point of probabilities”. The point he is making is

  • Similar to one touched on in a Bank Underground post that I discussed in a recent post on my blog.
  • Made in an article that is short and worth reading

Tony