Navigating a radically uncertain world

The distinction between risk and uncertainty is a long-running area of interest for me, so I have enjoyed reading John Kay and Mervyn King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. My initial post on the book offered an overview of the content and a subsequent post explored Kay and King’s analysis of why the world is prone to radical uncertainty.

This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.

No surprise that the answer, from their perspective, is not the pursuit of maximum expected value based on a priori assumptions about what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that some problems can be solved this way; games of chance, where you get repeated opportunities to play the odds, are one example. But Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.

It is not just that these models offer little useful insight into these bigger world choices. Kay and King argue, convincingly I think, that precise quantitative models of this type also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced measures of bank capital requirements introduced under Basel II tended to encourage banks to take on (and bank supervisors to approve) more leverage.

Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see, for example, these posts on Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane, which discuss the contributions these authors have made to the debate.

However, there were two things I found especially interesting in their analysis.

  • One was the argument that the “biases” catalogued by behavioural finance were not necessarily irrational responses to a radically uncertain world.
  • The other was the emphasis they place on the idea of employing abductive reasoning and reference narratives to help navigate this radically uncertain future.

Behavioural Finance

Kay and King argue that some of the behaviours that behavioural finance deems irrational or biased might be better interpreted as sensible rules of thumb that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify “nudging” people towards what it deems to be rational.

Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…

…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.

I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times like behavioural finance as a whole was a wrong turn but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.

Abductive Reasoning and Reference Narratives

The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.

If we go back to first principles it could be argued that the test of rationality is that the decisions we make are based on reasonable beliefs about the world and are internally consistent. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.

The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.

Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.

Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

Reference Narratives

If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.

Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.

Good and Bad Reference Narratives

Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.

Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.

Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:

…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.

Using Reference Narratives

Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience, but they caution against trying too hard to assign probabilities to scenarios. This I think is a point well made and something that I have covered in other posts (see here and here).

Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..

Scenario planning is a way of ordering thoughts about the future, not of predicting it.

The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.

Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future:

We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.

Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.

‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’

Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation), quoted in the introduction to Ch 16: Challenging Narratives

These extracts from the book nicely capture the essence of their argument:

Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.

the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.

Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.


The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.

Summing Up

Overall a thought-provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.

I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.

Tony – From The Outside

The why of Radical Uncertainty

A recent post offered an overview of a book by John Kay and Mervyn King titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. It is a rich topic and this post covers the underlying drivers that tend to result in radically uncertain outcomes.

Kay and King nominate “reflexivity” as a key driver of radical uncertainty:

The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers … although their perspective on the problem and its solution would be very different.

Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.

Kay and King, Chapter 3: “Radical Uncertainty is Everywhere”

Radical uncertainty also features in Richard Bookstaber’s book “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. Bookstaber identifies four broad phenomena he argues are endemic to financial crises:

  • Emergent phenomena. “When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence”.
  • Non-ergodicity. “An ergodic process … is one that does not vary with time or experience. Our world is not ergodic—yet economists treat it as though it is.”
  • Radical uncertainty. “Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions.”
  • Computational irreducibility. “There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived.”
Bookstaber, Chapter 2: Being Human

If you want to delve into the detail of why the world can be radically uncertain then Bookstaber arguably offers the more detailed account, albeit one couched in technical language like emergent phenomena, ergodicity and computational irreducibility. In Chapter 10 he lays out the ways in which an agent-based modelling approach to the problem of radical uncertainty would need to specify the complexity of the system in a structured way that takes account of the amount of information required to describe the system and the connectedness of its components. Bookstaber also offers examples of seemingly simple systems (e.g. John Conway’s “Game of Life”) which give rise to surprisingly complex emergent outcomes.
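
Neither book includes code, but it is easy to see the point for yourself. The sketch below is a minimal Python version of Conway’s Game of Life (my own construction, assuming numpy is available): three trivial rules, applied over and over, produce gliders and other structures that are nowhere to be found in the rules themselves, and the only practical way to know the state of the grid at step 40 is to run all 40 steps – computational irreducibility in miniature.

```python
import numpy as np

def step(grid):
    """Apply Conway's Game of Life rules once to a 2D array of 0s and 1s."""
    # Count the eight neighbours of every cell (edges wrap around).
    neighbours = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A tiny "glider": five live cells that, under the rules above, reproduce
# themselves one cell diagonally away every four steps.
grid = np.zeros((20, 20), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(40):
    grid = step(grid)   # the only way to reach step 40 is to compute every step before it
print(grid.sum(), "live cells after 40 steps")   # still 5 - the pattern has simply moved
```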

I am not sure if either book makes this point explicitly but I think there is also an underlying theme in which the models that provide the illusion of control over an uncertain future create an incentive to “manage” risk in ways that increase the odds of bad outcomes because they leave too little resilience in the system. That seems to be the clear implication of Kay and King’s discussion of the limits of finance theory (Chapter 17: The World of Finance). They acknowledge the value of the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama but highlight the ways in which it has failed to live up to its promise.

We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.

Kay and King, Chapter 17: The World of Finance

I plan to do one more post exploring the ways in which we navigate a world of radical uncertainty.

Tony (From the Outside)

Worth reading – “Radical Uncertainty: Decision-Making for an Unknowable Future” by John Kay and Mervyn King

I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.

Kay and King cover a lot of ground but, simply put, their book is about

“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.” 

One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.

As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.

The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.

Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.

Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.

They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty:

But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.

There are many factors that explain the current state of affairs but a key inflexion point in Kay and King’s account can be found in what they label “A Forgotten Dispute” (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, regarding the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that “subjective probability” is equally valid, and that the mathematics developed for the analysis of frequency-based probabilities could be applied to these subjective probabilities.

“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not….. over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”

Keynes and Knight lost the debate

Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions. 

Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.

There is a lot more to the book than interesting anecdotes from the history of economic ideas. The subject matter is rich and it crosses over topics covered previously in this blog.

There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.

One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their concern to push back on the idea that Bayes solves the problem of using probability to understand uncertainty. At times however it reads like Bayes has no value at all. Read “The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.

Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.
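
To make that distinction concrete, a puzzle is the sort of problem where Bayes’ rule works exactly as advertised. The toy calculation below (my own illustrative numbers, not from either book) updates a 1% base rate after a positive result from an imperfect test:

```python
# Bayes' rule on a small, well-defined "puzzle": how much should one positive
# result from an imperfect test shift our belief that a rare condition is present?
# The numbers are illustrative assumptions only.
prior = 0.01              # base rate: 1% of cases have the condition
p_pos_given_true = 0.90   # test sensitivity
p_pos_given_false = 0.05  # false positive rate

p_pos = prior * p_pos_given_true + (1 - prior) * p_pos_given_false
posterior = prior * p_pos_given_true / p_pos
print(f"Posterior probability after one positive result: {posterior:.1%}")  # roughly 15%
```

The same mechanical updating is much less convincing when the list of possible outcomes is itself unknown, which is the authors’ point about mysteries.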

I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.

Tony (From the Outside)

The Bankers’ New Clothes: Arguments for simpler capital and much reduced leverage

It always pays to make sure you expose yourself to the opposite view. This post looks at some of the arguments for simpler and higher bank capital requirements put forward by Professors Admati and Hellwig. They have published a number of papers and a book on the topic but this post refers chiefly to their book “The Bankers’ New Clothes” and to a paper “The Parade of the Bankers’ New Clothes Continues: 31 Flawed Claims Debunked”. As I understand it, the key elements of their argument are that:

  • Banks are inherently risky businesses,
  • Excessive borrowing by banks increases their inherent riskiness, but
  • Banks are only able to maintain this excessive level of borrowing because
    • Flawed risk based capital models underestimate the true capital requirements of the business
    • Market discipline also allows excessive borrowing because it is assumed that the government will bail out banks if the situation turns out badly

They identify a variety of ways of dealing with the problem of excessive leverage (controls on bank lending, liquidity requirements and capital requirements) but argue that substantially more common equity is the best solution because:

  • It directly reduces the probability that a bank will fail (i.e. all other things being equal, more common equity reduces the risk of insolvency),
  • A higher level of solvency protection has the added benefit of also reducing the risk of illiquidity, and
  • Contrary to claims by the banking industry, there is no net cost to society in holding more common equity because the dilution in ROE will be offset by a decline in the required return on equity

They concede that there will be some cost associated with unwinding the Too Big To Fail (TBTF) benefit that large banks currently enjoy on both the amount banks can borrow and on the cost of that funding but argue there is still no net cost to society in unwinding this undeserved subsidy. The book, in particular, gets glowing reviews for offering a compelling case for requiring banks to operate with much lower levels of leverage and for pointing out the folly of risk based capital requirements.

There are a number of areas where I find myself in agreement with the points they argue but I can’t make the leap to accept their conclusion that a much higher capital requirement based on a simple leverage ratio calculation is the best solution. I have written this post to help me think through the challenges they offer to my beliefs about how banks should be capitalised.

It is useful, I think, to first set out the areas where we (well me at least) might agree in principle with what they say; i.e.

  • Financial crises clearly do impose significant costs on society and excessive borrowing does tend to make a financial system fragile (the trick is to agree what is “excessive”)
  • Better regulation and supervision have a role to play in minimising the risk of bank failure (i.e. market discipline alone is probably not enough)
  • Public policy should consider all costs, not just those of the banking industry
  • All balance sheets embody a trade-off between enterprise risk, return and leverage (i.e. increasing leverage does increase risk)

It is less clear however that:

  • The economics of bank financing are subject to exactly the same rules as that which apply to non-financial companies (i.e. rather than asserting that banks should be compared with non-financial companies, it is important to understand how banks are different)
  • A policy of zero failure for banks is necessarily the right one, or indeed even achievable (i.e. would it be better to engineer ways in which banks can fail without dragging the economy down with them)
  • Fail safe mechanisms, such as the bail in of pre-positioned liabilities, have no prospect of working as intended
  • The assertion that “most” of the new regulation intended to make banks safer and easier to resolve has been “rejected, diluted or delayed” is a valid assessment of what has actually happened under Basel III
  • Liquidity events requiring lender of last resort support from the central bank are always evidence of a solvency problem

Drawing on some previous posts dealing with these issues (see here, here and here), I propose to focus on the following questions:

  • How does the cost of bank financing respond to changes in leverage?
  • Are the risk based capital requirements as fundamentally flawed as the authors claim?
  • Are risk management incentives for bankers always better when they are required to hold increasing levels of common equity?
  • Do the increased loss absorption features of Basel III compliant hybrids (in particular, the power to trigger conversion or bail-in of the instruments) offer a way to impose losses on failed banks without disrupting the economy or requiring public support?

How does leverage affect the cost of bank financing?

Increasing the proportion of equity funding, the authors argue, reduces the risk that shareholders are exposed to because each dollar of equity they have invested

“ will be affected less intensely by the uncertainty associated with the investments”

“when shareholders bear less risk per dollar invested, the rate of return they require is lower”

“Therefore, taking the costs of equity as fixed and independent of the mix of equity and debt involves a fundamental fallacy”.

Bankers’ New Clothes (p101)

The basic facts they set out are not really contentious; the mix of debt and equity does impact required returns. The authors focus on what happens to common equity but changing leverage impacts both debt and equity. This is very clear in the way that rating agencies consider all of the points nominated by the authors when assigning a debt rating. Reduced equity funding will likely lead to a decline in the senior and subordinated debt ratings and higher costs (plus reduced access to funding in absolute dollar terms) while higher equity will be a positive rating factor.
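
To make the point concrete, here is a stylised sketch of the Modigliani-Miller arithmetic the authors are invoking (my own illustrative numbers; no taxes, no deposit insurance, no TBTF subsidy). Holding the riskiness of the assets constant, the required return on equity falls as leverage falls, leaving the overall cost of funding unchanged:

```python
# Stylised Modigliani-Miller arithmetic with illustrative numbers only.
# Debt is assumed risk-insensitive here for simplicity; in practice its cost
# also moves with leverage, which reinforces rather than undermines the logic.
r_assets = 0.05   # required return on the bank's asset portfolio
r_debt = 0.03     # required return on debt

for equity_ratio in (0.05, 0.10, 0.20):        # equity as a share of total assets
    debt_ratio = 1 - equity_ratio
    # M&M Proposition II: r_E = r_A + (D/E) * (r_A - r_D)
    r_equity = r_assets + (debt_ratio / equity_ratio) * (r_assets - r_debt)
    wacc = equity_ratio * r_equity + debt_ratio * r_debt
    print(f"equity {equity_ratio:.0%}: required ROE {r_equity:.1%}, overall funding cost {wacc:.1%}")
```

The interesting debate is not about this clean version of the arithmetic but about the frictions the text goes on to discuss (deposit insurance, the TBTF subsidy, the special role of deposits) that determine how closely banks follow it.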

Banks are not immune to these fundamental laws but it is still useful to understand how the outcomes are shaped by the special features of a bank balance sheet. My views here incorporate two of the claims they “debunk” in their paper; specifically

Flawed Claim #4: The key insights from corporate finance about the economics of funding, including those of Modigliani and Miller, are not relevant for banks because banks are different from other companies

Flawed Claim #5: Banks are special because they create money

One of the features that defines a bank is the ability to take deposits. The cost of deposits however tends to be insulated from the effects of leverage. This is a design feature. Bank deposits are a major component of the money supply but need to be insensitive to adverse information about the issuing bank to function as money.

Wanting bank deposits to be information insensitive does not make them so. That is a function of their super senior position in the liability loss hierarchy, supplemented in many, if not most, banking systems by some form of limited deposit insurance (1). I credit a paper by Gary Gorton and George Pennacchi titled “Financial Intermediaries and Liquidity Creation” for crystallising this insight (an earlier post offers a short summary of that paper). Another paper titled “Why Bail-In? And How?” by Joseph Sommer proposes a different rationale for deposits having a super senior position insulated from the risk of insolvency but the implications for the impact of leverage on bank financing costs are much the same.

A large bank also relies on senior unsecured financing. This class of funding is more risky than deposits but still typically investment grade. This again is a design feature. Large banks target an investment grade rating in order to deliver, not only competitive financing costs, but equally (and perhaps more importantly) access to a larger pool of potential funding over a wider range of tenors. The investment grade rating depends of course on there being sufficient loss absorbing capital underwriting that outcome. There is no escaping this law of corporate finance. 

The debt rating of large banks is of course also tied up with the issue of banks being treated as Too Big To Fail (TBTF). That is a distortion in the market that needs to be addressed and the answer broadly is more capital though the rating agencies are reasonably agnostic on the form this capital should take in so far as the senior debt rating is concerned. Subject to having enough common equity anchoring the capital structure, more Tier 2 subordinated debt (or Tier 3 bail-in) will work just as well as more common equity for the purposes of reducing the value of implied government support currently embedded in the long term senior debt rating.

Admati and Hellwig are right – there is no free lunch in corporate finance

Ultimately, all of this risk has to go somewhere. On that point I completely agree with Admati and Hellwig. There is no free lunch; the rating/risk of the senior tranches of financing depends on having enough of the right kinds of loss absorbing capital standing before them in the loss hierarchy. Where I part company is on the questions of how much capital is enough and what form it should take.

How much capital is (more than) enough?

Admati and Hellwig’s argument for more bank capital has two legs. Firstly, they note that banks are typically much more leveraged than industrial companies and ask how this can be, given the fundamental law of capital irrelevancy defined by Modigliani and Miller. Secondly, they argue that risk-based capital requirements are fundamentally flawed and systematically underestimate how much capital is required.

Why are banks different?

Admati and Hellwig note that banks have less capital than industrial companies and conclude that this must be a result of the market relying on the assumption that banks will be bailed out. The existence of a government support uplift in the senior debt ratings of large banks is I think beyond debate. There is also broad support (even amongst many bankers) that this is not sound public policy and should ideally be unwound.

It is not obvious however that this wholly explains the difference in observed leverage. Rating agency models are relatively transparent in this regard (S&P in particular) and the additional capital required to achieve a rating uplift equivalent to the existing government support factor would still see banks more leveraged than the typical industrial company. Bank balance sheets do seem to be different from those of industrial companies.

Flawed risk models

The other leg to their argument is that risk-based capital fundamentally underestimates capital requirements. I am broadly sympathetic to the sceptical view on how to use the outputs of risk models and have been for some time. An article I wrote in 2008, for example, challenged the convention of using a probability of default associated with the target debt rating to precisely calibrate the amount of capital a bank required.

The same basic concept of highly precise, high confidence level capital requirements is embedded in the Internal Ratings Based formula and was part of the reason the model results were misinterpreted and misused. Too many people assigned a degree of precision to the models that was not warranted. That does not mean however that risk models are totally useless.

Professors Admati and Hellwig use simple examples (e.g. how does the risk of loss increase if a personal borrower increases leverage on a home loan) to argue that banks need to hold more capital. While the basic principle is correct (all other things equal, leverage does increase risk), the authors’ discussion does not draw much (or possibly any?) attention to the way that requiring a borrower to have equity to support their borrowing reduces a bank’s exposure to movements in the value of the loan collateral.

In the examples presented, any decline in the value of the assets being financed flows through directly to the value of equity, with the inference that this would be true of a bank also. In practice, low risk weights assigned by banks to certain (low default – well secured) pools of lending reflect the existence of borrower’s equity that will absorb the first loss before the value of the loan itself is called into question.

A capital requirement for residential mortgages (typically one of the lowest risk weights and also one of the most significant asset classes) that looks way too low when you note that house prices can easily decline by 10 or 20% starts to make more sense when you recognise that there is (or should be) a substantial pool of borrower equity taking the brunt of the initial decline in the value of collateral. The diversity of borrowers is also an important factor in reducing the credit risk of the exposures (though not necessarily the systemic risk of an overall meltdown in the economy). Where that is not the case (and hence the renewed focus on credit origination standards and macro prudential policy in general), low risk weights are not justified.
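
A simple worked example (illustrative numbers only) of the role borrower equity plays: on an 80% LVR loan, the borrower’s 20% stake absorbs the first tranche of any fall in the value of the collateral before the bank is exposed, and even then a loss requires the borrower to actually default.

```python
# Illustrative only: how borrower equity absorbs the first loss on a housing loan.
house_price = 500_000
loan = 400_000   # 80% LVR, so the borrower has 100,000 of equity at risk first

for price_fall in (0.10, 0.20, 0.30):
    collateral = house_price * (1 - price_fall)
    # The bank is only exposed once the fall exceeds the borrower's equity
    # (ignoring selling costs, arrears interest and the chance the borrower keeps paying).
    bank_shortfall = max(loan - collateral, 0)
    print(f"price fall {price_fall:.0%}: collateral {collateral:,.0f}, bank shortfall {bank_shortfall:,.0f}")
```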

I recognise that this argument (incorporating the value of the borrower’s equity) does not work for traded assets where the mark to market change in the value of the asset flows directly to the bank’s equity. It does however work for the kinds of assets on bank balance sheets that typically have very low risk weights (i.e. the primary concern of the leverage ratio advocates). It also does not preclude erring on the side of caution when calculating risk weights so long as the model respects the relative riskiness of the various assets impacting the value of equity.

How much also depends on the quality of risk management (and supervision)

The discussion of how much capital a bank requires should also recognise the distinction between how much a well managed bank needs and how much a poorly managed bank needs. In a sense, the authors are proposing that all banks, good and bad, should be made to hold the capital required by bad banks. Their focus on highlighting the risks of banking obscures the fact that prudent banking mitigates the downside and that well managed banks are not necessarily consigned to the extremes of risk the authors present as the norm of banking.

While not expressed in exactly that way, the distinction I am drawing is implicit in Basel III’s Total Loss Absorbing Capital (TLAC) requirements now being put in place. TLAC adds a substantial layer of additional loss absorption on top of already substantially strengthened common equity requirements. The base layer of capital can be thought of as what is required for a well managed, well supervised bank with a sound balance sheet and business model. APRA’s “Unquestionably Strong” benchmark for CET1 is a practical example of what this requirement looks like. The problem of course is that all banks argue they are good banks but the risk remains that they are in fact bad banks and we usually don’t find out the difference until it is too late. The higher TLAC requirement provides for this contingency.

What should count as capital?

I looked at this question in a recent post on the RBNZ’s proposal that virtually all of their TLAC requirement should be comprised of common equity. Admati and Hellwig side with the RBNZ but I believe that a mix of common equity and bail-in capital (along the lines proposed by APRA) is the better solution.

Read my earlier post for the long version, but the essence of my argument is that bail-in capital introduces a better discipline over bank management risk appetite than does holding more common equity. Calibrating common equity requirements to very high standards should always be the foundation of a bank capital structure. Capital buffers in particular should be calibrated to withstand very severe external shocks and to be resilient against some slippage in risk management.

The argument that shareholders’ need to have more “skin in the game” is very valid where the company is undercapitalised. Bail-in capital is not a substitute for getting the basics right. A bank that holds too little common equity, calibrated to an idealised view of both its own capabilities and of the capacity of the external environment to surprise the modellers, will likely find itself suppressing information that does not fit the model. Loss aversion then kicks in and management start taking more risk to win back that which was lost, just as Admati and Hellwig argue.

However, once you have achieved a position that is unquestionably strong, holding more common equity does not necessarily enhance risk management discipline. My experience in banking is that it may in fact be more likely to breed an undesirable sense of complacency or even to create pressure to improve returns. I know that the latter is not a winning strategy in the long run but in the short run the market frequently does not care.

What is the minimum return an equity investor requires?

One of the problems I find with a simplistic application of Modigliani and Miller’s (M&M) capital irrelevancy argument is that it does not seem to consider whether there is a minimum threshold return below which an equity investment is no longer sufficiently attractive to investors who are being asked to take a first loss position in a company; i.e. where is the line between debt and equity at which a return is simply not high enough to be attractive to equity investors?

Reframing the question in this way suggests that the debate between the authors and the bankers may be more about whether risk based capital adequacy models (including stress testing) can be trusted than it is about the limitations of M&M in the real world.

Summary

The authors’ solution to the prudential supervision of banks is a shock and awe approach to capital that seeks to make the risk of insolvency de minimis for good banks and bad. I have done my best to be open to their arguments and indeed do agree with a number of them. My primary concern with the path they advocate is that I do not believe the extra “skin in the game” generates the risk management benefits they claim.

I see more potential in pursuing a capital structure based on

  • a level of common equity that is robustly calibrated to the needs of a well managed (and well supervised) bank
  • incorporating a well designed counter cyclical capital buffer,
  • supplemented with another robust layer of bail-in capital that imposes real costs (and accountability) on the shareholders and management of banks for whom this level of common equity proves insufficient.

The authors argue that the authorities would never use these bail-in powers for fear of further destabilising funding markets. This is a valid area of debate but I believe they conflate the risks of imposing losses on bank depositors with the kinds of risks that professional bond investors have traditionally absorbed over many centuries of banking. The golden era in which the TBTF factor shielded bank bondholders from this risk is coming to an end, but this broader class of bond investors has dealt with defaults by all kinds of borrowers. I am not sure why banks would be special in this regard if countries can default. The key issue is that the investors enter into the contract with the knowledge that they are at risk and are being paid a risk premium commensurate with the downside (which may not be that large if investors judge the banks to be well managed).

This is a complex topic so please let me know if I have missed something fundamental or have otherwise mis-represented Admati and Hellwig’s thesis. In the interim, I remain mostly unconvinced …

Tony

  1. It is worth noting that NZ has adopted a different path with respect to deposit protection, rejecting both deposit preference and deposit insurance. They also have a unique policy tool (Open Bank Resolution) that allows the RBNZ to impose losses on deposits as part of the resolution process. They are reviewing the case for deposit insurance and I believe should also reconsider deposit preference.

“The Origin of Financial Crises” by George Cooper

There are a lot of books on the topic of financial crises but this one, written in 2008, stands the test of time. At the very least, it offers a useful introduction to Minsky’s Financial Instability Hypothesis. There is also an interesting discussion of the alternative approaches adopted by central banks to the problem of financial stability.

George Cooper argues that our financial system is inherently unstable and that this tendency is accentuated by a combination of factors

  • The belief that market forces will tend to produce optimal allocations of capital, and
  • Monetary policy that seeks to smooth (and ideally eliminate) business cycle fluctuations in economic activity

Cooper draws heavily on Hyman Minsky’s Financial Instability Hypothesis (FIH), which he argues offers much better insight into the operation of the financial system than the Efficient Market Hypothesis (EMH) that tended to be the more influential driver of economic policy in the years preceding the Global Financial Crisis.

Cooper uses these competing theories to explore what makes prices within financial markets move. The EMH maintains that the forces of supply and demand will cause markets to move towards equilibrium and hence that we must look to external forces to understand unexpected shocks and crises. Minsky’s FIH, in contrast, argues that financial markets can be driven by internal forces into cycles of credit expansion and asset inflation followed by credit contraction and asset deflation.

Cooper identifies the following ways in which financial systems can become unstable

  • Markets characterised by supply constraints tend to experience price inflation which for a period of time can drive further increases in demand
  • Monetary policy which is oriented towards mitigating (and in some cases pre-empting) economic downturns can also amplify market instability (i.e. the Greenspan put makes the market less resilient in the long run)
  • Credit creation by private sector banks contributes to money supply growth; this in turn can facilitate growth in demand but there is no mechanism that automatically makes this growth consistent with the economy’s sustainable growth path

The point about some asset markets being prone to instability is particularly pertinent for banks that focus on residential property lending. Classical economic theory holds that increased prices should lead to increased supply and reduced demand but this simple equilibrium model does not necessarily work for property markets. Property buyers more often reason that they need to meet the market because it will only get more expensive if they wait. Many of them will have already seen this happen and regret not meeting the market price previously as they contemplate paying more to get a property that is not as nice as ones they underbid on. The capacity of home builders to respond to the price signal is frequently constrained by a myriad of factors and there is a long lead time when they do respond.

The argument Cooper makes rings very true for Australia and is very similar to the one that Adair Turner made in his book “Between Debt and the Devil”. Cooper’s (and Minsky’s) argument that the pursuit of stability is not a desirable objective and that the system benefits from a modest amount of stress is similar to the argument made by Nassim Taleb in “Antifragile”.

Cooper also discusses the different philosophies that central banks bring to the challenge of managing financial stability. The dominant view treats the management of inflation risk as the primary concern while placing greater trust in the capacity of the market to self-correct any instability. The European Central Bank, in contrast, seems to have placed less faith in the market and has perhaps been closer to Minsky.

Some quotes from the book will give a sense of the ideas being discussed:

“Through its role in asset price cycles and profit generation, credit formation (borrowing money for either consumption or investment) lies at the heart of the financial market’s fundamental instability”.

“Hyman Minsky said that “stability creates instability” referring to our tendency to build up an unsustainable stock of debt in times of plenty only for that debt to then destroy the times of plenty”

“For a system as inherently unstable as the financial markets, we should not seek to achieve perfect stability; arguably it is this objective that has led to today’s problems. A more sustainable strategy would involve permitting, and at times encouraging, greater short-term cyclicality, using smaller, more-frequent downturns to purge the system of excesses”

“Credit creation is the foundation of the wealth-generation process; it is also the cause of financial instability. We should not let the merits of the former blind us to the risks of the latter.”

I have made some more detailed notes on the book here.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group.. [and] can be good or bad [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule (a rough sketch in code follows this list):
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
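
The weighting rule above can be expressed as a simple blend of case-specific evidence and the base rate. The sketch below is my own formulation of the idea rather than a formula from the book; the skill_weight parameter stands in for the generic judgement you form in the first step.

```python
def blended_estimate(specific_evidence, base_rate, skill_weight):
    """Blend case-specific evidence with the base rate.

    skill_weight is a judgement between 0 and 1: near 0 when luck dominates
    (lean on the base rate), near 1 when skill dominates (lean on the case).
    """
    return skill_weight * specific_evidence + (1 - skill_weight) * base_rate

# A fund returned 15% last year against a long-run base rate of 7% (illustrative numbers).
print(blended_estimate(0.15, 0.07, skill_weight=0.2))  # luck-heavy activity -> about 8.6%
print(blended_estimate(0.15, 0.07, skill_weight=0.8))  # skill-heavy activity -> about 13.4%
```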

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method which involves

  1. First ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both activities but luck is likely to be a more significant factor in the latter group.
  2. Determining the rate of reversion to the mean (see the toy simulation after this list): Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Is there evidence that expert prediction is useful? When experts have wide disagreement and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is figuring out how large a sample you need before you can conclude there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available; sometimes the data has simply not been collected, and sometimes a reliable base rate does not exist at all.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying probability-based statistical methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
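
Mauboussin’s exposition does not include code, but a simple simulation of cumulative advantage (my own sketch, loosely in the spirit of a preferential attachment process) shows how social influence turns small early differences into the highly skewed, power-law-like distribution of outcomes described above.

```python
import random
from collections import Counter

def simulate_cumulative_advantage(n_songs=100, n_plays=10_000, novelty=0.05, seed=1):
    """Each listener usually copies an earlier choice (cumulative advantage);
    with probability `novelty` they pick a song independently at random."""
    random.seed(seed)
    history = []       # every past choice; copying from it favours already popular songs
    counts = Counter()
    for _ in range(n_plays):
        if not history or random.random() < novelty:
            song = random.randrange(n_songs)   # independent choice
        else:
            song = random.choice(history)      # copy someone else's choice
        history.append(song)
        counts[song] += 1
    return counts

counts = simulate_cumulative_advantage()
top_five = counts.most_common(5)
print("Top 5 songs account for", sum(n for _, n in top_five), "of 10,000 plays")
# Typically a handful of songs dominate while most are barely played at all:
# lots of small values and a few very large ones.
```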

Mauboussin’s discussion of power laws does not offer this specific example, but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small losses when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss, which is a fundamental component of the advanced Internal Ratings Based approach to bank capital adequacy measurement.
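
A tiny worked example (hypothetical loss rates of my own, not data from the book or from any bank) illustrates why a short run of benign years gives a misleading estimate of expected loss.

```python
# Hypothetical loan loss rates (% of the portfolio) over a 10-year cycle:
# eight benign years followed by two downturn years.
loss_rates = [0.2, 0.2, 0.3, 0.2, 0.3, 0.2, 0.3, 0.2, 2.5, 4.0]

avg_benign = sum(loss_rates[:8]) / 8
avg_full_cycle = sum(loss_rates) / 10

print(f"Average loss, benign years only: {avg_benign:.2f}%")      # ~0.24%
print(f"Average loss, full cycle:        {avg_full_cycle:.2f}%")  # ~0.84%
# An expected loss estimated from the benign years alone understates the
# full-cycle average several times over, and even the full-cycle average says
# little about the size of the downturn losses a bank must be able to absorb.
```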

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there evidence that proves my base belief is wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions; where your opponent is more skilful or just stronger, try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation (a short illustration follows this list)
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but must also recognise how much we don’t know, that the realm may change such that old rules no longer hold, and that there are places where statistics simply don’t apply
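
On the link between reversion to the mean and the correlation coefficient (item 8 above), here is a short sketch of my own, using hypothetical numbers, of the standard shrinkage estimate: the lower the correlation between successive outcomes, the further the next outcome should be expected to revert towards the mean.

```python
def expected_next_outcome(observed, mean, correlation):
    """Shrink the observed result towards the mean.

    The correlation between successive outcomes governs the rate of reversion:
    correlation = 1 implies no reversion (pure skill);
    correlation = 0 implies full reversion to the mean (pure luck).
    """
    return mean + correlation * (observed - mean)

# Hypothetical example: a salesperson closes 30 deals against a team mean of 20.
print(expected_next_outcome(observed=30, mean=20, correlation=0.3))  # 23.0
print(expected_next_outcome(observed=30, mean=20, correlation=0.8))  # 28.0
```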

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Recently read – “The Moral Economy: Why Good Incentives Are No Substitute For Good Citizens” by Samuel Bowles

The potential for incentives to create bad behaviour has been much discussed in the wake of the GFC while the Financial Services Royal Commission in Australia has provided a fresh set of examples of bankers behaving badly. It is tempting of course to conclude that bankers are just morally corrupt but, for anyone who wants to dig deeper, this book offers an interesting perspective on the role of incentives in the economy.

What I found especially interesting is Bowles’ account of the history of how the idea that good institutions and a free market based economy could “harness self interest to the public good” has come to dominate so much of current economic and public policy. Building on this foundation, the book examines the ways in which incentives designed around the premise that people are solely motivated by self interest can often be counter-productive; either by crowding out desirable behaviour or by prompting people to behave in ways that are the direct opposite of what was intended.

Many parts of this story are familiar but it was interesting to see how Bowles charted the development of the idea over many centuries and through its individual contributors. People will no doubt be familiar with Adam Smith’s “Invisible Hand” but Bowles also introduces other thinkers who contributed to this conceptual framework, Machiavelli and David Hume in particular. The idea is neatly captured in this quote from Hume’s Essays: Moral, Political and Literary (1742), in which he recommended the following maxim:

“In contriving any system of government … every man ought to be supposed to be a knave and to have no other end … than private interest. By this interest we must govern him, and, by means of it, make him notwithstanding his insatiable avarice and ambition, cooperate to public good.”

Bowles makes clear that this did not mean that people are in fact solely motivated by self-interest (i.e. “knaves”), simply that civic virtue (i.e. creating good people) by itself was not a robust platform for achieving good outcomes. The pursuit of self interest, in contrast, came to be seen as a benign activity that could be harnessed for a higher purpose.

The idea of embracing self-interest is of course anathema to many people but its intellectual appeal is, I think, obvious. Australian readers at this point might be reminded of Jack Lang’s maxim “In the race of life, always back self-interest; at least you know it’s trying”. Gordon Gekko’s embrace of the principle that “Greed is good” is the modern expression of this intellectual tradition.

Harnessing self-interest for the common good

Political philosophers had for centuries focused on the question of how to promote civic virtue, but attention now turned to finding laws and other public policies that would allow people to pursue their personal objectives while also inducing them to take account of the effects of their actions on others. The conceptual foundations laid down by David Hume and Adam Smith were progressively built on, with competition and well defined property rights coming to be seen as important parts of the solution.

“Good institutions displaced good citizens as the sine qua non of good government. In the economy, prices would do the work of morals”

“Markets thus achieved a kind of moral extraterritoriality … and so avarice, repackaged as self-interest, was tamed, transformed from a moral failing to just another kind of motive”

Free market determined prices were at the heart of the system that allowed the Invisible Hand to work its magic but economists recognised that competition alone was not sufficient for market prices to capture everything that mattered. For the market to arrive at the right (or most complete) price, it was also necessary that economic interactions be governed by “complete contracts” (i.e. contracts that specify the rights and duties of the buyer and seller in all future states of the world).

This is obviously an unrealistic assumption. Apart from the difficulty of imagining all future states of the world, not everything of value can be priced. But all was not lost. Bowles introduces Alfred Marshall and Arthur Pigou who identified, in principle, how a system of taxes and subsidies could be devised that compensated economic actors for benefits their actions conferred on others and made them liable for costs they imposed on others.
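
As a stylised illustration of the Pigouvian logic (made-up numbers of my own, not an example from the book): a unit of production that is privately profitable but socially wasteful stops being worthwhile once a tax equal to the external cost is imposed.

```python
# Hypothetical numbers: producing one unit costs the producer 10, imposes a
# further 3 of cost on others (e.g. pollution), and sells for 12.
private_cost, external_cost, price = 10.0, 3.0, 12.0
pigouvian_tax = external_cost   # tax set equal to the cost imposed on others

privately_profitable = price > private_cost                  # True  (12 > 10)
socially_worthwhile = price > private_cost + external_cost   # False (12 < 13)
profitable_after_tax = price > private_cost + pigouvian_tax  # False (12 < 13)

print(privately_profitable, socially_worthwhile, profitable_after_tax)
# The tax makes the producer face the full social cost of the activity, so
# private and social incentives line up.
```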

These taxes and subsidies are of course not always successful and Bowles offers a taxonomy of reasons why this is so. Incentives can work but not, according to Bowles, if they simplistically assume that the target of the incentive cares only about his or her material gain. To be effective, incentives must account for the fact that people are much more complex, social and moral than is strictly rational from an economic perspective. Bowles devotes a lot of the book to the problem with incentives (both positive and negative, including taxes, fines, subsidies, bonuses etc) which he categorises under three headings:

  1. “Bad News”; incentives send a signal and the tendency is for people to read things into incentives which may not have been intended but prompt them to respond negatively (e.g. does this incentive signal that the other party believes I am untrustworthy or lazy?)
  2. “Moral Disengagement”; the incentive may create a context in which the subject can distance themselves from the moral consequences of how they respond
  3. “Control Aversion”; an incentive that compromises a subject’s sense of autonomy or pride in the task may reduce their intrinsic motivation to perform the task well

Having noted the ways that incentives can have adverse impacts on behaviour, Bowles notes that civic minded values continue to be an important feature of market based economies and examines why this might be.

“If incentives sometimes crowd out ethical reasoning, the desire to help others, and intrinsic motivations, and if leading thinkers celebrate markets as a morality-free zone, it seems just a short step to Karl Marx’s broadside condemnation of capitalist culture”

One answer is that trading in markets encourages people to trust strangers and that the benefits of trading over time teach people that trust is a valuable commodity (the so called “doux commerce” theory).

While admitting his answer is speculative, Bowles rejects “doux commerce” as the whole answer. He argues that the institutions (property rights, rule of law, etc) developed by liberal societies to protect citizens from worst-case outcomes such as personal injury, loss of property, and other calamities make the consequences of mistakenly trusting a defector much less dire. As a result, the rule of law lowers the bar for how much you would have to know about your partner before trusting him or her, thereby promoting the spread of trusting expectations and hence of trusting behavior in a population.

The “institutional structure” theory is interesting but there is still much in the book worth considering even if you don’t buy his explanation. I have some more detailed notes on the book here.

Lessons for banking in Pixar’s approach to dealing with uncertainty and the risk of failure.

The report on the Prudential Inquiry into the CBA (“CBA Report”) is obviously required reading in banking circles this week. Plenty has been written on the topic already so I will try to restrain myself unless I can find something new to add to the commentary. However, while reading the report, I found myself drawing links to books that I think bankers would find well worth reading. These include “Foolproof” (by Greg Ip) and “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing” (by Michael Mauboussin).

I have put up some notes on Foolproof here and intend to do the same for The Success Equation sometime soon. The focus for today’s post however is a book titled “Creativity, Inc” by Ed Catmull, who co-founded and led Pixar. The overall theme of the book is about developing and sustaining a creative culture but dealing with risk and uncertainty emerges as a big part of this.

What does making movies have to do with banking?

One of the lessons Catmull emphasised was that, notwithstanding Pixar’s success, it was important not to lose sight of the role that random factors play in both success and failure. A quote from Ch 8 illustrates this point:

“… a lot of our success came because we had pure intentions and great talent, and we did a lot of things right, but I also believe that attributing our success solely to our own intelligence without acknowledging the role of accidental events, diminishes us.”

He goes on to describe how success can be a trap for the following reasons:

  • it creates the impression that what you are doing must be right,
  • it tempts you to overlook hidden problems and
  • you may be confusing luck with skill.

There is a discussion in Ch 9 of the kinds of things that can lead you to misunderstand the real nature of both your success and your failure. These include various cognitive biases (such as confirmation bias, where you give more weight to information that supports what you believe than to the counter-evidence) and the mental models we use to simplify the world in which we operate. These are hard-wired into us, so the best we can do is be aware of how they can take us off track; that at least puts us ahead of those who blindly follow their mental models and biases.

His answer to building the capacity to adapt to change and respond to setbacks is to trust in people, but trust does not mean assuming people won’t make mistakes. Catmull accepts setbacks and screw-ups as an inevitable part of being creative and innovative; trust is demonstrated when you support your people when they do screw up and trust them to find the solution.

This is interesting because the CBA Report indicates that CBA did in fact place a great deal of trust in their executive team and senior leaders, which implies trust alone is not enough. The missing ingredients in CBA’s case were accountability and consequence when the team failed to identify, escalate and resolve problems.

The other interesting line of speculation is whether CBA’s risk culture might have benefited from a deeper reflection on the difference between skill and luck. Mauboussin’s book (The Success Equation) is particularly good in the way in which he lays out his framework for making this distinction.

I plan to come back to this topic once I have completed a review of Mauboussin’s book but in the interim I can recommend all of the books mentioned in this post.

“Between Debt and the Devil: Money, Credit and Fixing Global Finance” by Adair Turner (2015)

This book is worth reading, if only because it challenges a number of preconceptions that bankers may have about the value of what they do. The book also benefits from the fact that the author was head of the UK Financial Services Authority during the GFC and thus had a unique inside perspective from which to observe what was wrong with the system. Since leaving the FSA, Turner has reflected deeply on the relationship between money, credit and the real economy and argues that, notwithstanding the scale of change flowing from Basel III, more fundamental change is required to avoid a repeat of the cycle of financial crises.

Overview of the book’s main arguments and conclusions

Turner’s core argument is that increasing financial intensity, represented by credit growing faster than nominal GDP, is a recipe for recurring bouts of financial instability.
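
A couple of lines of arithmetic (hypothetical growth rates of my own choosing) show what rising financial intensity means in practice.

```python
# If credit grows at 8% a year while nominal GDP grows at 5%, the ratio of
# credit to GDP ratchets steadily upwards, which is Turner's rising financial intensity.
credit, gdp = 100.0, 100.0
for _ in range(10):
    credit *= 1.08
    gdp *= 1.05
print(f"Credit-to-GDP after 10 years: {credit / gdp:.2f}x (starting from 1.00x)")  # ~1.33x
```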

Turner builds his argument by first considering the conventional wisdom guiding much of bank prudential regulation prior to the GFC, which he summarises as follows:

  • Increasing financial activity, innovation and “financial deepening” were beneficial forces to be encouraged
  • More complete and liquid markets were believed to ensure more efficient allocation of capital thereby fostering higher productivity
  • Financial innovations made it easier to provide credit to households and companies thereby enabling more rapid economic growth
  • More sophisticated risk measurement and control meanwhile ensured that the increased complexity of the financial system was not achieved at the expense of stability
  • New systems of originating and distributing credit, rather than holding it on bank balance sheets, were believed to disperse risks into the hands of those best placed to price and manage it

Some elements of Turner’s account of why this conventional wisdom was wrong do not add much to previous analysis of the GFC. He notes, for example, the conflation of the concepts of risk and uncertainty that weakened the risk measurement models the system relied on, and concludes that risk based capital requirements should be forgone in favour of a very high leverage ratio requirement. However, in contrast to other commentators who attribute much of the blame to the moral failings of bankers, Turner argues that this is a distraction. While problems with the way that bankers are paid need to be addressed, the fundamental problem in his view is that:

  • modern financial systems left to themselves inevitably create debt in excessive quantities,
  • in particular, the system tends to create debt that does not fund new capital investment but rather the purchase of already existing assets, above all real estate.

Turner argues that the expansion of debt funding the purchase or trading of existing assets drives financial booms and busts, while the debt overhang left behind by the boom explains why recovery from a financial crisis is typically anaemic and protracted. Much of this analysis seems similar to ideas developed by Hyman Minsky, while the slow pace of recovery in the aftermath of the GFC echoes a theme that Reinhart and Rogoff observe in their book “This Time is Different”, which analyses financial crises over many centuries.

The answer, Turner argues, is to build a less credit intensive growth model. In pursuing this goal, Turner argues that we also need to understand and respond to the implications of three underlying drivers of increasing credit intensity;

  1. the increasing importance of real estate in modern economies,
  2. increasing inequality, and
  3. global current account imbalances.

Turner covers a lot of ground, and I do not necessarily agree with everything in his book, but I do believe his analysis of what is wrong with the system is worth reading.

Let me start with an argument I do not find compelling; i.e. that risk based capital requirements are unreliable because they are based on a fundamental misunderstanding of the difference between risk (which can be measured) and uncertainty (which cannot):

  • Distinguishing between risk and uncertainty is clearly a fundamental part of understanding risk and Turner is not alone in emphasising its importance
  • I believe that means we should treat risk based capital requirements with a healthy degree of scepticism and a clear sense of their limitations, but that does not render them entirely unreliable, especially when we are using them to understand relative differences in risk and to calibrate capital buffers
  • The obvious problem with non-risk based capital requirements is that they create incentives for banks to take higher risk that may eventually offset the supposed increase in soundness attached to the higher capital (a stylised example follows below)
  • It may be that Turner discounts this concern because he envisages a lower credit growth/intensity economy delivering less overall systemic risk, or because he envisages a more active role for the public sector in determining what kinds of assets banks lend against; i.e. his support for higher capital may stem mostly from the fact that this reduces the capacity of private banks to generate credit growth
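
The incentive problem flagged in the third point above can be seen in a stylised example (my own made-up numbers): under a pure leverage ratio the required capital is the same regardless of how risky the assets are, whereas risk weights at least attempt to scale capital to risk.

```python
# Two hypothetical banks, each with $100 of assets.
assets = 100.0
leverage_requirement = 0.05    # 5% of total (unweighted) assets
risk_based_requirement = 0.08  # 8% of risk-weighted assets

banks = {"Bank A (low-risk book)": 0.20,   # average risk weight 20%
         "Bank B (high-risk book)": 1.00}  # average risk weight 100%

for name, risk_weight in banks.items():
    capital_leverage = leverage_requirement * assets
    capital_risk_based = risk_based_requirement * risk_weight * assets
    print(f"{name}: leverage-ratio capital = {capital_leverage:.1f}, "
          f"risk-based capital = {capital_risk_based:.1f}")

# Both banks must hold 5.0 of capital under the leverage ratio, so shifting the
# book towards riskier, higher-yielding assets lifts returns without lifting
# required capital; the risk-based requirement (1.6 vs 8.0) pushes back on that.
```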

While advocating much higher capital, Turner does seem to part company with Modigliani-Miller (M&M) purists by expressing doubt that equity investors will be willing to accept deleveraged returns. His reasoning is that equity investors require a certain threshold return for an investment to be “equity like”, while even a massively deleveraged bank’s equity still carries downside risks that debt investors would not accept.
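
Some simple arithmetic (again with hypothetical numbers of my own) shows what is at stake in the deleveraging argument.

```python
def roe(return_on_assets, cost_of_debt, leverage):
    """Return on equity for a given leverage (assets / equity):
    ROE = ROA * leverage - cost_of_debt * (leverage - 1)."""
    return return_on_assets * leverage - cost_of_debt * (leverage - 1)

# A bank earning 1.5% on assets and paying 1% on its debt funding:
print(f"ROE at 20x leverage: {roe(0.015, 0.01, 20):.1%}")  # 11.0%
print(f"ROE at  5x leverage: {roe(0.015, 0.01, 5):.1%}")   # 3.5%
# Modigliani-Miller purists would argue the required return on equity falls in
# step with the lower risk, so investors should accept the lower figure; Turner's
# doubt is that it no longer looks "equity like" while the downside remains equity risk.
```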

Let me turn now to the arguments which I think raise valid concerns and deserve serious attention.

Notwithstanding my scepticism regarding a leverage ratio as the solution, the arguments he makes about the dangers of excessive credit growth resonate very strongly with what I learned during my banking career. Turner is particularly focussed on the downsides of applying excessive debt to the financing of existing assets, real estate in particular. The argument seems to be similar to (if not based on) the work of Hyman Minsky.

Turner’s description of the amount of money that banks can create as being “infinitely elastic” seems an overstatement to me, especially in the Australian context where the Net Stable Funding Ratio (NSFR) weighs on the capacity to grow the balance sheet. However, the general point he is making rings true: credit-fuelled demand for a relatively inelastic supply of desirable residential property tends to result in inflated property values with no real social value.

What banks can do about this remains an open question, given that resolving the inelastic supply of property is outside their direct control, but it is obviously important that they understand the dynamics of the market underpinning their largest asset class, and that understanding may help them engage more constructively with public policy debates that seek to address the problem.

Turner’s analysis of the downsides of easy monetary policy (the standard response to economic instability) also rings true. He notes that lower interest rates tend to inflate asset values (residential property in particular, given its perceived value as a safe asset), which does not address the fundamental problem of over-indebtedness and may serve to increase economic inequality. His discussion of the impact of monetary policy and easy credit on economic inequality is also interesting. The banks providing credit in an easy money environment may not necessarily be taking undue risk, and prudential supervisors have tools to ensure sound lending standards are maintained if they believe there is a problem with asset quality. What may happen, however, is that the wealthier segments of society benefit the most because they have the surplus cash flow to buy property at inflated values while first home buyers are squeezed out of the market. The banks’ capacity to address this problem may again be limited, but Turner’s analysis prompted me to reflect on what increasing economic inequality might mean for bank business models.

In addition to much higher bank capital requirements, Turner’s specific recommendations for moving towards a less credit intensive economy include:

  • Government policies related to urban development and the taxation of real estate
  • Changing tax regimes to reduce the current bias in favour of debt over equity financing (note that Australia is one of the few countries with a dividend imputation system that does reduce the bias to debt over equity)
  • Broader macro prudential powers for central banks, including the power to impose much larger countercyclical capital requirements
  • Tough constraints on the ability of the shadow banking system to create credit and money equivalents
  • Using public policy to produce different allocations of capital than would result from purely market based decisions; in particular, deliberately leaning against the market signal based bias towards real estate and instead favouring other “potentially more socially valuable forms of credit allocation”
  • Recognising that the traditional easy monetary policy response to an economic downturn (or ultra-easy in the case of a financial crisis such as the GFC) is better than doing nothing but comes at the cost of reigniting the growth in private credit that generated the initial problem, creating incentives for risky financial engineering, and exacerbating economic inequality by inflating asset prices.

For those who want to dig deeper, I have gone into a bit more detail here on what Turner has to say about the following topics:

  • The way in which inefficient and irrational markets leave the financial system prone to booms and busts
  • The dangers of debt contracts sets out how certain features of these contracts increase the risk of instability and hamper the recovery
  • Too much of the wrong sort of debt describes features of the real estate market that make it different from other asset classes
  • Liberalisation, innovation and the credit cycle on steroids recaps the philosophy that drove the deregulation of financial markets and what Turner believes to be the fundamental flaws with that approach, in particular his conclusion that the amount of credit created and its allocation is “… too important to be left to bankers…”
  • Private credit and money creation offers an outline of how bank deposits evolved to play an increasing role (the key point being that it was a process of evolution rather than overt public policy design choices)
  • Credit financed speculation discusses the ways in which credit in modern economies tends to be used to finance the purchase of existing assets, in particular real estate, and the issues that flow from this.
  • Inequality, credit and more inequality sets out some ways in which the extension of credit can contribute to increasing economic inequality
  • Capital requirements sets out why Turner believes capital requirements should be significantly increased and why capital requirements (i.e. risk weights) for some asset classes (e.g. real estate) should be calibrated to reflect the social risk of the activity and not just the private risks captured by bank risk models
  • Turner’s defence against the argument that his proposals are anti-market and anti-growth.