Interesting review of Nassim Taleb’s “Antifragile” by Scott Alexander on his “Astral Codex Ten” blog
— Read on astralcodexten.substack.com/p/book-review-antifragile
The distinction between risk and uncertainty is a long-running area of interest for me so I have enjoyed reading John Kay and Mervyn King's book "Radical Uncertainty: Decision-Making for an Unknowable Future". My initial post on the book offered an overview of the content and a subsequent post explored Kay and King's analysis of why the world is prone to radical uncertainty.
This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.
No surprises that the answer from their perspective is not the pursuit of maximum expected value based on a priori assumptions of what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that there are some problems that can be solved this way. Games of chance where you get repeated opportunities to play the odds is one, but Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.
It is not just that these models do not offer any useful insight into these bigger world choices. They argue, convincingly I think, that these types of precise quantitative models can also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced measures of bank capital requirements introduced under Basel II tended to encourage banks to take (and bank supervisors to approve) more leverage.
Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see, for example, these posts on Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane, which discuss the contributions those authors have made to the debate.
However, there were two things I found especially interesting in their analysis.
Kay and King argue that some of the behaviours that behavioural finance deems to be irrational or biased might be better interpreted as sensible rules of thumb that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify "nudging" people towards what behavioural finance deems to be rational.
Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…
…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.
I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times like behavioural finance as a whole was a wrong turn but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.
The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.
If we go back to first principles it could be argued that the test of rationality is that the decisions we make are based on reasonable beliefs about the world and internal consistency. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.
The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.
Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.
Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.
If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.
Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.
Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.
Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.
Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:
…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.
Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience but they caution against trying too hard to assign probabilities to scenarios. This I think is a point well made and something that I have covered in other posts (see here and here).
Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..
Scenario planning is a way of ordering thoughts about the future, not of predicting it.
The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.
Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future
We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.
Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.
‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’
Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation), quoted in the introduction to Ch 16: Challenging Narratives
These extracts from their book nicely capture the essence of their argument:
Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.
the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.
Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.
The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.
The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.
robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.
Overall a thought provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.
I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.
Tony – From The Outside
A recent post offered an overview of a book by John Kay and Mervyn King titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. It is a rich topic and this post covers the underlying drivers that tend to result in radically uncertain outcomes.
Kay and King nominate “reflexivity” as a key driver of radical uncertainty
The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers … although their perspective on the problem and its solution would be very different.
Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.
Kay and King, Chapter 3: Radical Uncertainty is Everywhere
Radical uncertainty also features in Richard Bookstaber's book "The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction". Bookstaber identifies four broad phenomena he argues are endemic to financial crises:

Emergent phenomena. "When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence".

Non-ergodicity. "An ergodic process … is one that does not vary with time or experience. Our world is not ergodic—yet economists treat it as though it is."

Radical uncertainty. "Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions."

Computational irreducibility. "There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived."

Bookstaber, Chapter 2: Being Human
If you want to delve into the detail of why the world can be radically uncertain then Bookstaber arguably offers the more detailed account; albeit one couched in technical language like emergent phenomena, ergodicity and computational irreducibility. In Chapter 10 he lays out the ways in which an agent-based modelling approach to the problem of radical uncertainty would need to specify the complexity of the system in a structured way that takes account of the amount of information required to describe the system and the connectedness of its components. Bookstaber also offers examples of emergent phenomena in seemingly simple systems (e.g. John Conway's "Game of Life") which give rise to surprisingly complex outcomes.
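To make the idea of emergence a little more concrete, the sketch below (my own illustration, not Bookstaber's) implements Conway's Game of Life in a few lines of Python. Nothing in the rules for an individual cell hints at the gliders and other structures that appear once the cells interact.

```python
# Minimal Conway's Game of Life: very simple local rules, surprisingly rich aggregate behaviour.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Apply one generation of Conway's rules to a 2D array of 0s and 1s."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbours; a dead cell becomes live with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Seed a 'glider' -- five live cells whose interaction produces a pattern that
# travels across the grid, something the single-cell rules never mention.
grid = np.zeros((20, 20), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(8):
    grid = step(grid)
print(grid.sum(), "live cells after 8 generations")  # the glider persists and keeps moving
```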
I am not sure if either book makes this point explicitly but I think there is also an underlying theme in which the models that provide the illusion of control over an uncertain future create an incentive to "manage" risk in ways that increase the odds of bad outcomes based on insufficient resilience. That seems to be the clear implication of Kay and King's discussion of the limits of finance theory (Chapter 17: The World of Finance). They acknowledge the value of the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama but highlight the ways in which it has failed to live up to its promise.
We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.
Kay and King, Chapter 17 The World of Finance
I plan to do one more post exploring the ways in which we navigate a world of radical uncertainty.
Tony (From the Outside)
I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.
Kay and King cover a lot of ground but, simply put, their book is about
“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.”
One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.
As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.
The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.
Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.
Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.
They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty
But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.
There are many factors that explain the current state of affairs but a key inflexion point in Kay and King's account can be found in what they label "A Forgotten Dispute" (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, regarding the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that "subjective probability" is equally valid and used the mathematics developed for the analysis of frequency-based probabilities to apply these subjective probabilities.
“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not….. over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”
Keynes and Knight lost the debate
Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions.
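To be fair to Ramsey and de Finetti, subjective probabilities work perfectly well for genuine puzzles. The snippet below is my own toy example (the numbers are made up) of the kind of well-defined problem where Bayesian updating does exactly what it promises.

```python
# Bayesian updating on a small, well-defined 'puzzle' (illustrative numbers only).
# Prior: 1% of widgets are defective. A test flags 95% of defective widgets,
# but also flags 5% of good ones. What should we believe after a positive test?

prior_defective = 0.01          # P(defective) -- assumed base rate
p_flag_given_defective = 0.95   # test sensitivity (assumed)
p_flag_given_good = 0.05        # false positive rate (assumed)

# P(flag) via the law of total probability
p_flag = (p_flag_given_defective * prior_defective
          + p_flag_given_good * (1 - prior_defective))

# Bayes' rule: P(defective | flag)
posterior = p_flag_given_defective * prior_defective / p_flag
print(f"P(defective | positive test) = {posterior:.1%}")  # roughly 16%, not 95%
```

The trouble, as Kay and King see it, starts when the same machinery is applied to questions where neither the possible outcomes nor their likelihoods can be enumerated.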
Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.
There is a lot more to the book than interesting historical anecdotes on the history of economic ideas. The subject matter is rich and it crosses over topics covered previously in this blog including:
There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.
One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their concern to push back on the idea that Bayes solves the problem of using probability to understand uncertainty. At times however it reads like Bayes has no value at all. Read “The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.
Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.
I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.
Tony (From the Outside)
It always pays to make sure you expose yourself to the opposite view. This post looks at some of the arguments for simpler and higher bank capital requirements put forward by Professors Admati and Hellwig. They have published a number of papers and a book on the topic but this post refers chiefly to their book "The Bankers' New Clothes" and to a paper, "The Parade of the Bankers' New Clothes Continues: 31 Flawed Claims Debunked". As I understand it, the key elements of their argument are that:
They identify a variety of ways of dealing with the problem of excessive leverage (controls on bank lending, liquidity requirements and capital requirements) but argue that substantially more common equity is the best solution because:
They concede that there will be some cost associated with unwinding the Too Big To Fail (TBTF) benefit that large banks currently enjoy on both the amount banks can borrow and on the cost of that funding but argue there is still no net cost to society in unwinding this undeserved subsidy. The book, in particular, gets glowing reviews for offering a compelling case for requiring banks to operate with much lower levels of leverage and for pointing out the folly of risk based capital requirements.
There are a number of areas where I find myself in agreement with the points they argue but I can't make the leap to accept their conclusion that a much higher capital requirement based on a simple leverage ratio calculation is the best solution. I have written this post to help me think through the challenges they offer my beliefs about how banks should be capitalised.
It is useful, I think, to first set out the areas where we (well me at least) might agree in principle with what they say; i.e.
It is less clear however that:
Drawing on some previous posts dealing with these issues (see here, here and here), I propose to focus on the following questions:
Increasing the proportion of equity funding, the authors argue, reduces the risk that shareholders are exposed to because each dollar of equity they have invested
“ will be affected less intensely by the uncertainty associated with the investments”
“when shareholders bear less risk per dollar invested, the rate of return they require is lower”
“Therefore, taking the costs of equity as fixed and independent of the mix of equity and debt involves a fundamental fallacy”.
Bankers' New Clothes (p101)
The basic facts they set out are not really contentious; the mix of debt and equity does impact required returns. The authors focus on what happens to common equity but changing leverage impacts both debt and equity. This is very clear in the way that rating agencies consider all of the points nominated by the authors when assigning a debt rating. Reduced equity funding will likely lead to a decline in the senior and subordinated debt ratings and higher costs (plus reduced access to funding in absolute dollar terms) while higher equity will be a positive rating factor.
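To put some purely illustrative numbers on the mechanics (this is the textbook Modigliani-Miller result, not a claim about any particular bank), the sketch below holds the cost of debt constant for simplicity; in practice, as noted above, the cost of debt also responds to leverage.

```python
# Modigliani-Miller proposition II with illustrative numbers (no taxes or other frictions):
# the required return on equity falls as leverage falls, while the blended cost of
# funding the same pool of assets is unchanged.
r_assets = 0.05   # required return on the asset portfolio (assumed)
r_debt = 0.03     # cost of debt, held constant here for simplicity (assumed)

for equity_share in (0.05, 0.10, 0.20):          # equity as a share of total funding
    debt_share = 1 - equity_share
    # r_E = r_A + (D/E) * (r_A - r_D)
    r_equity = r_assets + (debt_share / equity_share) * (r_assets - r_debt)
    blended = equity_share * r_equity + debt_share * r_debt
    print(f"equity {equity_share:.0%}: required ROE {r_equity:.1%}, blended funding cost {blended:.1%}")
```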
Banks are not immune to these fundamental laws but it is still useful to understand how the outcomes are shaped by the special features of a bank balance sheet. My views here incorporate two of the claims they “debunk” in their paper; specifically
Flawed Claim #4: The key insights from corporate finance about the economics of funding, including those of Modigliani and Miller, are not relevant for banks because banks are different from other companies
Flawed Claim #5: Banks are special because they create money
One of the features that defines a bank is the ability to take deposits. The cost of deposits however tends to be insulated from the effects of leverage. This is a design feature. Bank deposits are a major component of the money supply but need to be insensitive to adverse information about the issuing bank to function as money.
Wanting bank deposits to be information insensitive does not make them so. That is a function of their super senior position in the liability loss hierarchy, supplemented in many, if not most, banking systems by some form of limited deposit insurance (1). I credit a paper by Gary Gorton and George Pennacchi titled "Financial Intermediaries and Liquidity Creation" for crystallising this insight (an earlier post offers a short summary of that paper). Another paper titled "Why Bail-In? And How?" by Joseph Sommer proposes a different rationale for deposits having a super senior position insulated from the risk of insolvency but the implications for the impact of leverage on bank financing costs are much the same.
A large bank also relies on senior unsecured financing. This class of funding is more risky than deposits but still typically investment grade. This again is a design feature. Large banks target an investment grade rating in order to deliver, not only competitive financing costs, but equally (and perhaps more importantly) access to a larger pool of potential funding over a wider range of tenors. The investment grade rating depends of course on there being sufficient loss absorbing capital underwriting that outcome. There is no escaping this law of corporate finance.
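A stylised example may help make the loss hierarchy concrete. The layers and dollar amounts below are invented for illustration; the point is simply that losses consume the junior layers before the senior ones are ever touched.

```python
# Allocating a loss through a stylised bank liability stack (illustrative layers and amounts).
# Losses hit the most junior layer first; a senior layer is only touched once
# everything ranking below it has been exhausted.
stack = [                                   # (layer, amount) from most junior to most senior
    ("common equity", 5.0),
    ("AT1 / Tier 2 / bail-in debt", 5.0),
    ("senior unsecured debt", 20.0),
    ("deposits", 70.0),
]

def allocate_loss(loss: float) -> None:
    remaining = loss
    for layer, amount in stack:
        hit = min(remaining, amount)
        remaining -= hit
        print(f"{layer:<28} absorbs {hit:4.1f} of {amount:4.1f}")

allocate_loss(8.0)   # a loss that wipes out equity and bites into the bail-in layer
```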
The debt rating of large banks is of course also tied up with the issue of banks being treated as Too Big To Fail (TBTF). That is a distortion in the market that needs to be addressed and the answer broadly is more capital though the rating agencies are reasonably agnostic on the form this capital should take in so far as the senior debt rating is concerned. Subject to having enough common equity anchoring the capital structure, more Tier 2 subordinated debt (or Tier 3 bail-in) will work just as well as more common equity for the purposes of reducing the value of implied government support currently embedded in the long term senior debt rating.
At this stage, all of this risk has to go somewhere. On that point I completely agree with Admati and Hellwig. There is no free lunch, the rating/risk of the senior tranches of financing depend on having enough of the right kinds of loss absorbing capital standing before them in the loss hierarchy. Where I part company is on the questions of how much capital is enough and what form it should take.
Admati and Hellwig’s argument for more bank capital has two legs. Firstly, they note that banks are typically much more leveraged than industrial companies and question how can this be given the fundamental law of capital irrelevancy defined by Modigliani and Miller. Secondly, they argue that risk based capital requirements are fundamentally flawed and systematically under estimate how much capital is required.
Admati and Hellwig note that banks have less capital than industrial companies and conclude that this must be a result of the market relying on the assumption that banks will be bailed out. The existence of a government support uplift in the senior debt ratings of large banks is I think beyond debate. There is also broad support (even amongst many bankers) that this is not sound public policy and should ideally be unwound.
It is not obvious however that this wholly explains the difference in observed leverage. Rating agency models are relatively transparent in this regard (S&P in particular) and the additional capital required to achieve a rating uplift equivalent to the existing government support factor would still see banks more leveraged than the typical industrial company. Bank balance sheets do seem to be different from those of industrial companies.
The other leg to their argument is that risk based capital fundamentally under estimates capital requirements. I am broadly sympathetic to the sceptical view on how to use the outputs of risk models and have been for some time. An article I wrote in 2008, for example, challenged the convention of using a probability of default associated with the target debt rating to precisely calibrate the amount of capital a bank required.
The same basic concept of highly precise, high confidence level capital requirements is embedded in the Internal Ratings Based formula and was part of the reason the model results were misinterpreted and misused. Too many people assigned a degree of precision to the models that was not warranted. That does not mean however that risk models are totally useless.
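For readers who have not met it, the sketch below is a simplified version of the Basel IRB risk weight function for a retail exposure (no maturity adjustment), with invented PD and LGD inputs. It is included only to illustrate the 99.9% confidence level calibration being discussed, not as a statement of the full regulatory formula.

```python
# Simplified Basel IRB capital formula for a retail exposure (no maturity adjustment),
# shown to illustrate the 99.9% confidence-level calibration discussed above.
import math
from scipy.stats import norm

def irb_capital(pd: float, lgd: float, rho: float, confidence: float = 0.999) -> float:
    """Capital per dollar of exposure under the asymptotic single risk factor model."""
    conditional_pd = norm.cdf(
        (norm.ppf(pd) + math.sqrt(rho) * norm.ppf(confidence)) / math.sqrt(1 - rho)
    )
    return lgd * conditional_pd - lgd * pd   # unexpected loss in excess of expected loss

# Illustrative residential mortgage inputs (Basel prescribes a correlation of 0.15 for mortgages)
print(f"capital per dollar of exposure: {irb_capital(pd=0.01, lgd=0.20, rho=0.15):.2%}")
```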
Professors Admati and Hellwig use simple examples (e.g. how does the risk of loss increase if a personal borrower increases leverage on a home loan) to argue that banks need to hold more capital. While the basic principle is correct (all other things equal, leverage does increase risk), the authors’ discussion does not draw much (or possibly any?) attention to the way that requiring a borrower to have equity to support their borrowing reduces a bank’s exposure to movements in the value of the loan collateral.
In the examples presented, any decline in the value of the assets being financed flows through directly to the value of equity, with the inference that this would be true of a bank also. In practice, low risk weights assigned by banks to certain (low default – well secured) pools of lending reflect the existence of borrower’s equity that will absorb the first loss before the value of the loan itself is called into question.
A capital requirement for residential mortgages (typically one of the lowest risk weights and also most significant asset classes) that looks way too low when you note that house prices can easily decline by 10 or 20%, starts to make more sense when you recognise that there is (or should be) a substantial pool of borrower equity taking the brunt of the initial decline in the value of collateral. The diversity of borrowers is also an important factor in reducing the credit risk of the exposures (though not necessarily the systemic risk of an overall meltdown in the economy). Where that is not the case (and hence the renewed focus on credit origination standards and macro prudential policy in general), then low risk weights are not justified.
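A rough worked example (my numbers, not the authors') shows why the borrower's equity matters so much to the size of the bank's loss:

```python
# How borrower equity cushions the bank against a fall in collateral value
# (illustrative numbers; ignores selling costs, arrears interest and so on).
def bank_loss_per_dollar(lvr: float, price_decline: float) -> float:
    """Loss per dollar lent if the borrower defaults and the property is sold."""
    property_value = 1.0 / lvr                     # property value at origination per $1 of loan
    sale_proceeds = property_value * (1 - price_decline)
    return max(1.0 - sale_proceeds, 0.0)           # shortfall against the $1 loan

for lvr in (0.95, 0.80, 0.60):
    loss = bank_loss_per_dollar(lvr, price_decline=0.15)
    print(f"LVR {lvr:.0%}: loss per dollar lent after a 15% price fall = {loss:.2f}")
```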
I recognise that this argument (incorporating the value of the borrower’s equity) does not work for traded assets where the mark to market change in the value of the asset flows directly to the bank’s equity. It does however work for the kinds of assets on bank balance sheets that typically have very low risk weights (i.e. the primary concern of the leverage ratio advocates). It also does not preclude erring on the side of caution when calculating risk weights so long as the model respects the relative riskiness of the various assets impacting the value of equity.
The discussion of how much capital a bank requires should also recognise the distinction between how much a well managed bank needs and how much a poorly managed bank needs. In a sense, the authors are proposing that all banks, good and bad, should be made to hold the capital required by bad banks. Their focus on highlighting the risks of banking obscures the fact that prudent banking mitigates the downside and that well managed banks are not necessarily consigned to the extremes of risk the authors present as the norm of banking.
While not expressed in exactly that way, the distinction I am drawing is implicit in Basel III’s Total Loss Absorbing Capital (TLAC) requirements now being put in place. TLAC adds a substantial layer of additional loss absorption on top of already substantially strengthened common equity requirements. The base layer of capital can be thought of as what is required for a well managed, well supervised bank with a sound balance sheet and business model. APRA’s “Unquestionably Strong” benchmark for CET1 is a practical example of what this requirement looks like. The problem of course is that all banks argue they are good banks but the risk remains that they are in fact bad banks and we usually don’t find out the difference until it is too late. The higher TLAC requirement provides for this contingency.
I looked at this question in a recent post on the RBNZ’s proposal that virtually all of their TLAC requirement should be comprised of common equity. Admati and Hellwig side with the RBNZ but I believe that a mix of common equity and bail-in capital (along the lines proposed by APRA) is the better solution.
Read my earlier post for the long version, but the essence of my argument is that bail-in capital introduces a better discipline over bank management risk appetite than does holding more common equity. Calibrating common equity requirements to very high standards should always be the foundation of a bank capital structure. Capital buffers in particular should be calibrated to withstand very severe external shocks and to be resilient against some slippage in risk management.
The argument that shareholders need to have more "skin in the game" is very valid where the company is undercapitalised. Bail-in capital is not a substitute for getting the basics right. A bank that holds too little common equity, calibrated to an idealised view of both its own capabilities and of the capacity of the external environment to surprise the modellers, will likely find itself suppressing information that does not fit the model. Loss aversion then kicks in and management start taking more risk to win back that which was lost, just as Admati and Hellwig argue.
However, once you have achieved a position that is unquestionably strong, holding more common equity does not necessarily enhance risk management discipline. My experience in banking is that it may in fact be more likely to breed an undesirable sense of complacency or even to create pressure to improve returns. I know that the latter is not a winning strategy in the long run but in the short run the market frequently does not care.
One of the problems I find with a simplistic application of Modigliani & Miller’s (M&M) capital irrelevancy argument is that it does not seem to consider if there is a minimum threshold return for an equity investment below which the investment is no longer sufficiently attractive to investors who are being asked to take first loss positions in a company; i.e. where is the line between debt and equity where a return is simply not high enough to be attractive to equity investors?
Reframing the question in this way suggests that the debate between the authors and the bankers may be more about whether risk based capital adequacy models (including stress testing) can be trusted than it is about the limitations of M&M in the real world.
The author’s solution to prudential supervision of banks is a shock and awe approach to capital that seeks to make the risk of insolvency de minimus for good banks and bad. I have done my best to be open to their arguments and indeed do agree with a number of them. My primary concern with the path they advocate is that I do not believe the extra “skin in the game” generates the risk management benefits they claim.
I see more potential in pursuing a capital structure based on
The authors argue that the authorities would never use these bail-in powers for fear of further destabilising funding markets. This is a valid area of debate but I believe they conflate the risks of imposing losses on bank depositors with the kinds of risks that professional bond investors have traditionally absorbed over many centuries of banking. The golden era in which the TBTF factor shielded bank bondholders from this risk is coming to an end but this broader class of bond investors has dealt with defaults by all kinds of borrowers. I am not sure why banks would be special in this regard if countries can default. The key issue is that the investors enter into the contract with the knowledge that they are at risk and are being paid a risk premium commensurate with the downside (which may not be that large if investors judge the banks to be well managed).
This is a complex topic so please let me know if I have missed something fundamental or have otherwise mis-represented Admati and Hellwig’s thesis. In the interim, I remain mostly unconvinced …
Tony
There are a lot of books on the topic of financial crises but this one, George Cooper's "The Origin of Financial Crises" (written in 2008), stands the test of time. At the very least, it offers a useful introduction to Minsky's Financial Instability Hypothesis. There is also an interesting discussion of the alternative approaches adopted by central banks to the problem of financial stability.
George Cooper argues that our financial system is inherently unstable and that this tendency is accentuated by a combination of factors
Cooper draws heavily on Hyman Minsky's Financial Instability Hypothesis (FIH) which he argues offers much better insight into the operation of the financial system than the Efficient Market Hypothesis (EMH) which tended to be the more influential driver of economic policy in the years preceding the Global Financial Crisis.
Cooper uses these competing theories to explore what makes prices within financial markets move. The EMH maintains that the forces of supply and demand will cause markets to move towards equilibrium and hence that we must look to external forces to understand unexpected shocks and crises. Minsky’s FIH, in contrast, argues that financial markets can be driven by internal forces into cycles of credit expansion and asset inflation followed by credit contraction and asset deflation.
Cooper identifies the following ways in which financial systems can become unstable
The point about some asset markets being prone to instability is particularly pertinent for banks that focus on residential property lending. Classical economic theory holds that increased prices should lead to increased supply and reduced demand but this simple equilibrium model does not necessarily work for property markets. Property buyers more often reason that they need to meet the market because it will only get more expensive if they wait. Many of them will have already seen this happen and regret not meeting the market price previously as they contemplate paying more to get a property that is not as nice as ones they underbid on. The capacity of home builders to respond to the price signal is frequently constrained by a myriad of factors and there is a long lead time when they do respond.
The argument Cooper makes rings very true for Australia and is very similar to the one that Adair Turner made in his book titled "Between Debt and the Devil". Cooper's (and Minsky's) argument that the pursuit of stability is not a desirable objective and that the system benefits from a modest amount of stress is similar to the argument made by Nassim Taleb in "Antifragile".
Cooper also discusses the different philosophies that central banks bring to the challenge of managing financial stability. The dominant view is one that sees the management of inflation risk as the primary concern while placing greater trust in the capacity of the market to self-correct any instability. The European Central Bank, in contrast, seems to have placed less faith in the market and perhaps been closer to Minsky.
Some quotes from the book will give a sense of the ideas being discussed:
“Through its role in asset price cycles and profit generation, credit formation (borrowing money for either consumption or investment) lies at the heart of the financial market’s fundamental instability”.
“Hyman Minsky said that “stability creates instability” referring to our tendency to build up an unsustainable stock of debt in times of plenty only for that debt to then destroy the times of plenty”
“For a system as inherently unstable as the financial markets, we should not seek to achieve perfect stability; arguably it is this objective that has led to today’s problems. A more sustainable strategy would involve permitting, and at times encouraging, greater short-term cyclicality, using smaller, more-frequent downturns to purge the system of excesses”
“Credit creation is the foundation of the wealth-generation process; it is also the cause of financial instability. We should not let the merits of the former blind us to the risks of the latter.”
I have made some more detailed notes on the book here.
Tony
“… we vastly underestimate the role of luck in what we see happening around us”
This post is inspired by a recent read of Michael Mauboussin's book "The Success Equation: Untangling Skill and Luck in Business, Sports and Investing". Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the parts that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.
“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”
Structure wise, Mauboussin:
It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:
“Luck is a chance occurrence that affects a person or a group.. [and] can be good or bad [it] is out of one’s control and unpredictable”
Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”
Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:
Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I understood forty years ago.
“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”
The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.
Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.
We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.
One process proposed by Mauboussin for distinguishing skill from luck is to:
Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method which involves
One of the challenges with this process is to figure out how large a sample size you need to determine if there is a reliable relationship between actions and outcome that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected but also because a reliable base rate simply may not even exist.
The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.
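On the sample size point, a quick simulation (my own, with made-up win rates) gives a feel for how long a modest skill edge can stay hidden behind luck:

```python
# How often does pure luck match or beat a modest skill edge over a given number of games?
# Win rates of 55% (skilled) and 50% (unskilled) are assumed purely for illustration.
import random

random.seed(0)

def p_luck_matches_skill(n_games: int, trials: int = 2000) -> float:
    """Fraction of trials in which a 50% player scores at least as well as a 55% player."""
    count = 0
    for _ in range(trials):
        skilled = sum(random.random() < 0.55 for _ in range(n_games))
        lucky = sum(random.random() < 0.50 for _ in range(n_games))
        count += lucky >= skilled
    return count / trials

for n in (10, 50, 200, 1000):
    print(f"{n:>5} games: luck matches or beats skill about {p_luck_matches_skill(n):.0%} of the time")
```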
The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.
“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”
“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. … skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”
“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”
“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
Mauboussin's discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Ratings Based approach to bank capital adequacy measurement.
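The cumulative advantage mechanism in the passages quoted above is easy to simulate. The sketch below is my own toy version (the entry rate and number of sales are arbitrary assumptions), but it reproduces the basic result: a handful of very large values, a long tail of small ones, and an average that describes almost nothing.

```python
# Cumulative advantage: each new sale goes to a song chosen with probability proportional
# to its existing sales (implemented by crediting a uniformly chosen past sale), while
# new songs occasionally enter the market. Small early luck compounds into a heavy tail.
import random
from collections import Counter

random.seed(1)
sales = [0]                              # past sales recorded as song ids; song 0 seeds the market
next_song_id = 1
for _ in range(200_000):
    if random.random() < 0.01:           # occasionally a brand-new song appears with its first sale
        sales.append(next_song_id)
        next_song_id += 1
    else:                                # otherwise crediting a random past sale favours the
        sales.append(random.choice(sales))   # songs that already have the most sales
counts = sorted(Counter(sales).values(), reverse=True)
print("songs:", len(counts), "| top 5:", counts[:5], "| median:", counts[len(counts) // 2],
      "| mean:", round(len(sales) / len(counts)))
```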
All in all, I found Mauboussin's book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.
Tony
The potential for incentives to create bad behaviour has been much discussed in the wake of the GFC while the Financial Services Royal Commission in Australia has provided a fresh set of examples of bankers behaving badly. It is tempting of course to conclude that bankers are just morally corrupt but, for anyone who wants to dig deeper, this book offers an interesting perspective on the role of incentives in the economy.
What I found especially interesting is Bowles' account of the history of how the idea that good institutions and a free market based economy could "harness self interest to the public good" has come to dominate so much of current economic and public policy. Building on this foundation, the book examines the ways in which incentives designed around the premise that people are solely motivated by self interest can often be counter-productive; either by crowding out desirable behaviour or by prompting people to behave in ways that are the direct opposite of what was intended.
Many parts of this story are familiar but it was interesting to see how Bowles charted the development of the idea over many centuries and individual contributors. People will no doubt be familiar with Adam Smith’s “Invisible Hand” but Bowles also introduces other thinkers who contributed to this conceptual framework, Machiavelli and David Hume in particular. The idea is neatly captured in this quote from Hume’s Essays: Moral, Political and Literary (1742) in which he recommended the following maxim
“In contriving any system of government … every man ought to be supposed to be a knave and to have no other end … than private interest. By this interest we must govern him, and, by means of it, make him notwithstanding his insatiable avarice and ambition, cooperate to public good” .
Bowles makes clear that this did not mean that people are in fact solely motivated by self-interest (i.e “knaves”), simply that civic virtue (i.e. creating good people) by itself was not a robust platform for achieving good outcomes. The pursuit of self interest, in contrast, came to be seen as a benign activity that could be harnessed for a higher purpose.
The idea of embracing self-interest is of course anathema to many people but its intellectual appeal is I think obvious. Australian readers at this point might be reminded of Jack Lang’s maxim “In the race of life, always back self-interest; at least you know it’s trying“. Gordon Gekko’s embrace of the principle that “Greed is good” is the modern expression of this intellectual tradition.
Political philosophers had for centuries focused on the question of how to promote civic virtue but their attention turned to finding laws and other public policies that would allow people to pursue their personal objectives, while also inducing them to take account of the effects of their actions on others. The conceptual foundations laid down by David Hume and Adam Smith were progressively built on with competition and well defined property rights coming to be seen as important parts of the solution.
“Good institutions displaced good citizens as the sine qua non of good government. In the economy, prices would do the work of morals”
“Markets thus achieved a kind of moral extraterritoriality … and so avarice, repackaged as self-interest, was tamed, transformed from a moral failing to just another kind of motive”
Free market determined prices were at the heart of the system that allowed the Invisible Hand to work its magic but economists recognised that competition alone was not sufficient for market prices to capture everything that mattered. For the market to arrive at the right (or most complete) price, it was also necessary that economic interactions be governed by “complete contracts” (i.e. contracts that specify the rights and duties of the buyer and seller in all future states of the world).
This is obviously an unrealistic assumption. Apart from the difficulty of imagining all future states of the world, not everything of value can be priced. But all was not lost. Bowles introduces Alfred Marshall and Arthur Pigou who identified, in principle, how a system of taxes and subsidies could be devised that compensated economic actors for benefits their actions conferred on others and made them liable for costs they imposed on others.
These taxes and subsidies are of course not always successful and Bowles offers a taxonomy of reasons why this is so. Incentives can work but not, according to Bowles, if they simplistically assume that the target of the incentive cares only about his or her material gain. To be effective, incentives must account for the fact that people are much more complex, social and moral than is strictly rational from an economic perspective. Bowles devotes a lot of the book to the problem with incentives (both positive and negative, including taxes, fines, subsidies, bonuses etc) which he categorises under three headings:
Having noted the ways that incentives can have adverse impacts on behaviour, Bowles observes that civic-minded values continue to be an important feature of market-based economies and examines why this might be.
“If incentives sometimes crowd out ethical reasoning, the desire to help others, and intrinsic motivations, and if leading thinkers celebrate markets as a morality-free zone, it seems just a short step to Karl Marx’s broadside condemnation of capitalist culture”
One answer is that trading in markets encourages people to trust strangers and that the benefits of trading over time teach people that trust is a valuable commodity (the so-called “doux commerce” theory).
While admitting his answer is speculative, Bowles rejects “doux commerce” as the whole answer. He argues that the institutions (property rights, rule of law, etc.) developed by liberal societies to protect citizens from worst-case outcomes such as personal injury, loss of property and other calamities make the consequences of mistakenly trusting a defector much less dire. As a result, the rule of law lowers the bar for how much you would have to know about your partner before trusting him or her, thereby promoting the spread of trusting expectations and hence of trusting behaviour in a population.
The “institutional structure” theory is interesting but there is still much in the book worth considering even if you don’t buy his explanation. I have some more detailed notes on the book here.
The report on the Prudential Inquiry into the CBA (“CBA Report”) is obviously required reading in banking circles this week. Plenty has been written on the topic already so I will try to restrain myself unless I can find something new to add to the commentary. However, while reading the report, I found myself drawing links to books that I think bankers would find well worth reading. These include “Foolproof” (by Greg Ip) and “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing” (by Michael Mauboussin).
I have put up some notes on Foolproof here and intend to do the same for The Success Equation sometime soon. The focus for today’s post however is “Creativity, Inc.” by Ed Catmull, who co-founded and led Pixar. The overall theme of the book is developing and sustaining a creative culture, but dealing with risk and uncertainty emerges as a big part of this.
One of the lessons Catmull emphasised was that, notwithstanding Pixar’s success, it was important not to lose sight of the role that random factors play in both success and failure. A quote from Ch 8 illustrates this point:
“… a lot of our success came because we had pure intentions and great talent, and we did a lot of things right, but I also believe that attributing our success solely to our own intelligence without acknowledging the role of accidental events, diminishes us.”
He goes on to describe how success can be a trap for the following reasons:
There is a discussion in Ch 9 of the kinds of things that can lead you to misunderstand the real nature of both your success and your failure. These include various cognitive biases (such as confirmation bias, where you give more weight to information that supports what you already believe than to the counter-evidence) and the mental models we use to simplify the world in which we operate. These are hard-wired into us, so the best we can do is be aware of how they can take us off track; that at least puts us ahead of those who blindly follow their mental models and biases.
His answer to building the capacity to adapt to change and respond to setbacks is to trust in people, but trust does not mean trusting that people won’t make mistakes. Catmull accepts setbacks and screw-ups as an inevitable part of being creative and innovative; trust is demonstrated when you support your people when they do screw up and trust them to find the solution.
This is interesting because the CBA Report indicates that CBA did in fact place a great deal of trust in its executive team and senior leaders, which implies trust alone is not enough. The missing ingredients in CBA’s case were accountability and consequence when the team failed to identify, escalate and resolve problems.
The other interesting line of speculation is whether CBA’s risk culture might have benefited from a deeper reflection on the difference between skill and luck. Mauboussin’s book (The Success Equation) is particularly good in the way it lays out a framework for making this distinction.
I plan to come back to this topic once I have completed a review of Mauboussin’s book but in the interim I can recommend all of the books mentioned in this post.
Adair Turner’s “Between Debt and the Devil” is worth reading, if only because it challenges a number of preconceptions that bankers may have about the value of what they do. The book also benefits from the fact that the author was the head of the UK Financial Services Authority during the GFC and thus had a unique inside perspective from which to observe what was wrong with the system. Since leaving the FSA, Turner has reflected deeply on the relationship between money, credit and the real economy and argues that, notwithstanding the scale of change flowing from Basel III, more fundamental change is required to avoid a repeat of the cycle of financial crises.
Turner’s core argument is that increasing financial intensity, represented by credit growing faster than nominal GDP, is a recipe for recurring bouts of financial instability.
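To see what rising financial intensity implies, here is a simple compounding sketch (hypothetical numbers of my own, not Turner’s) showing how quickly the credit-to-GDP ratio climbs when credit growth outpaces nominal GDP growth by a few percentage points a year:

```python
# Hypothetical illustration of rising credit intensity (my own figures, not Turner's).
# Credit grows at 9% a year, nominal GDP at 5%, starting with credit at 100% of GDP.

credit, gdp = 100.0, 100.0
for year in range(1, 21):
    credit *= 1.09
    gdp *= 1.05
    if year in (5, 10, 20):
        print(f"year {year:>2}: credit/GDP = {credit / gdp:.2f}")

# year  5: credit/GDP = 1.21
# year 10: credit/GDP = 1.45
# year 20: credit/GDP = 2.11
```

A seemingly modest gap in growth rates roughly doubles the economy’s indebtedness relative to income within a generation, which is the mechanism behind Turner’s concern.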
Turner builds his argument by first considering the conventional wisdom that guided much of bank prudential regulation prior to the GFC, which he summarises as follows:
Some elements of Turner’s account of why this conventional wisdom was wrong do not add much to previous analyses of the GFC. He notes, for example, the conflation of the concepts of risk and uncertainty that weakened the risk measurement models the system relied on, and concludes that risk-based capital requirements should be forgone in favour of a very high leverage ratio requirement. However, in contrast to other commentators who attribute much of the blame to the moral failings of bankers, Turner argues that this is a distraction. While problems with the way that bankers are paid need to be addressed, Turner argues that the fundamental problem is that:
Turner argues that the expansion of debt funding the purchase or trading of existing assets drives financial booms and busts, while the debt overhang left behind by the boom explains why recovery from a financial crisis is typically anaemic and protracted. Much of this analysis seems to be similar to ideas developed by Hyman Minsky, while the slow pace of recovery in the aftermath of the GFC reflects a theme that Reinhart and Rogoff observed in “This Time Is Different”, their study of financial crises over many centuries.
The answer, Turner argues, is to build a less credit-intensive growth model. In pursuing this goal, we also need to understand and respond to the implications of three underlying drivers of increasing credit intensity:
Turner covers a lot of ground, and I do not necessarily agree with everything in his book, but I do believe his analysis of what is wrong with the system is worth reading.
Let me start with an argument I do not find compelling, namely that risk-based capital requirements are unreliable because they are based on a fundamental misunderstanding of the difference between risk (which can be measured) and uncertainty (which cannot):
While advocating much higher capital, Turner does seem to part company with Modigliani-Miller (M&M) purists by expressing doubt that equity investors will be willing to accept deleveraged returns. His reasoning is that equity investors require a certain threshold return for an investment to be “equity like”, while even a massively deleveraged ROE still carries downside risks that are unacceptable to debt investors.
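A rough balance-sheet sketch (my own hypothetical numbers, not Turner’s) shows why this matters: because ROE is mechanically the return on assets multiplied by leverage, a sharply deleveraged bank earns an ROE well below the double-digit returns investors typically treat as “equity like”, even though its equity still sits first in line for losses:

```python
# Hypothetical illustration of how leverage mechanically drives ROE
# (my own numbers, not Turner's). For simplicity, assume the bank earns a
# fixed 1% on assets after funding costs, regardless of how it is funded.

return_on_assets = 0.01

def roe(leverage):
    # leverage = assets / equity, so ROE = ROA * leverage
    return return_on_assets * leverage

for leverage in (25, 10, 5):   # i.e. equity of 4%, 10% and 20% of assets
    print(f"assets/equity = {leverage:>2}: ROE = {roe(leverage):.1%}")

# assets/equity = 25: ROE = 25.0%
# assets/equity = 10: ROE = 10.0%
# assets/equity =  5: ROE = 5.0%
```

M&M purists would say the lower ROE is fine because the equity is correspondingly less risky; Turner’s doubt, as I read him, is that investors in practice anchor on the higher hurdle.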
Notwithstanding my scepticism regarding a leverage ratio as the solution, the arguments he makes about the dangers of excessive credit growth resonate very strongly with what I learned during my banking career. Turner is particularly focussed on the downsides of applying excessive debt to the financing of existing assets, real estate in particular. The argument seems to be similar to (if not based on) the work of Hyman Minsky.
Turner’s description of the amount of money that banks can create as “infinitely elastic” seems an overstatement to me, especially in the Australian context where the Net Stable Funding Ratio (NSFR) weighs on the capacity to grow the balance sheet. But the general point he is making rings true: credit-fuelled demand for a relatively inelastic supply of desirable residential property tends to result in inflated property values with no real social value.
What banks can do about this remains an open question, given that resolving the inelastic supply of property is outside their direct control. It is nonetheless important for them to understand the dynamics of the market underpinning their largest asset class, and it may help them engage more constructively with the public policy debates that seek to address the problem.
Turner’s analysis of the downsides of easy monetary policy (the standard response to economic instability) also rings true. He points out that lower interest rates tend to inflate asset values (residential property in particular, given its perceived value as a safe asset), which does not address the fundamental problem of over-indebtedness and may serve to increase economic inequality. His discussion of the impact of monetary policy and easy credit on inequality is worth dwelling on. The banks providing the credit in an easy money environment may not necessarily be taking undue risk, and prudential supervisors have tools to ensure sound lending standards are maintained if they do believe there is a problem with asset quality. What may happen, however, is that the wealthier segments of society benefit the most under easy money because they have the surplus cash flow to buy property at inflated values, while first home buyers are squeezed out of the market. Again, banks’ capacity to address the problem may be limited, but Turner’s analysis prompted me to reflect on what increasing economic inequality might mean for bank business models.
In addition to much higher bank capital requirements, Turner’s specific recommendations for moving towards a less credit intensive economy include:
For those who want to dig deeper, I have gone into a bit more detail here on what Turner has to say about the following topics: