Every bank needs a cyclical capital buffer

This post sets out a case for a bank choosing to incorporate a discretionary Cyclical Buffer (CyB) into its Internal Capital Adequacy Assessment Process (ICAAP). The size of the buffer is a risk appetite choice each individual bank must make. The example I have used to illustrate the idea is calibrated to absorb the expected impact of an economic downturn that is severe but not necessarily a financial crisis style event. My objective is to illustrate the ways in which incorporating a Cyclical Buffer in the target capital structure offers:

  • an intuitive connection between a bank’s aggregate risk appetite and its target capital structure;
  • a means of more clearly defining the point where losses transition from expected to unexpected; and
  • a mechanism that reduces both the pro cyclicality of a risk sensitive capital regime and the tendency for the transition to unexpected losses to trigger a loss of confidence in the bank.

The value of improved clarity, coherence and consistency in the risk appetite settings is, I think, reasonably self-evident. The need for greater clarity in the distinction between expected and unexpected loss is perhaps less so. The value of this Cyclical Buffer proposal ultimately depends on its capacity to enhance the resilience of the capital adequacy regime in the face of economic downturns without compromising its risk sensitivity.

There are no absolutes when we deal with what happens under stress but I believe a Cyclical Buffer such as the one outlined in this post also has the potential to help mitigate the risk of a loss of confidence in the bank when losses are no longer part of what stakeholders expect but have moved into the domain of uncertainty. I am not suggesting that this would solve the problem of financial crises. I am suggesting that it is a relatively simple enhancement to a bank’s ICAAP that has the potential to make banks more resilient (and transparent) with no obvious downsides.

Capital 101

In Capital 101, we learn that capital is meant to cover “unexpected loss” and that there is a neat division between expected and unexpected loss. The extract below from an early BCBS publication sets out the standard explanation …

Figure 1 – Expected and Unexpected Loss

The BCBS publication from which this image is sourced explained that

“While it is never possible to know in advance the losses a bank will suffer in a particular year, a bank can forecast the average level of credit losses it can reasonably expect to experience. These losses are referred to as Expected Losses (EL) ….”

“One of the functions of bank capital is to provide a buffer to protect a bank’s debt holders against peak losses that exceed expected levels… Losses above expected levels are usually referred to as Unexpected Losses (UL) – institutions know they will occur now and then, but they cannot know in advance their timing or severity….”

“An Explanatory Note on the Basel II IRB Risk Weight Functions” BCBS July 2005

There was a time when the Internal Ratings Based approach, combining some elegant theory and relatively simple math, seemed to have all the answers:

  • A simple intuitive division between expected and unexpected loss
  • The ability to quantify expected loss and cover it directly through risk margins in pricing, while the required return on the capital held against unexpected loss could be assigned to the cost of equity
  • A precise relationship between expected and unexpected loss, defined by the statistical parameters of the assumed loss distribution
  • The capacity to “control” the risk of unexpected loss by applying seemingly unquestionably strong confidence levels (i.e. typically 1:1000 years plus) to the measurement of target capital requirements
  • It even seemed to offer a means of neatly calibrating the capital requirement to the probability of default of your target debt rating (e.g. a AA senior debt rating with a 5bp probability of default = a 99.95% confidence level; QED)
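For those who want to see the math behind these claims, the snippet below is a simplified sketch of the Basel II IRB risk weight function described in the BCBS explanatory note quoted above (corporate asset correlation, maturity adjustment ignored for brevity). The PD, LGD and confidence level inputs are illustrative assumptions only.

```python
# Simplified sketch of the Basel II IRB capital formula for corporate exposures,
# following the BCBS "Explanatory Note on the Basel II IRB Risk Weight Functions".
# The maturity adjustment is omitted for brevity and all inputs are illustrative.
import math
from scipy.stats import norm

def irb_capital(pd_, lgd, confidence=0.999):
    """Capital for unexpected loss, as a share of EAD, at the given confidence level."""
    # Supervisory asset correlation for corporate exposures
    w = (1 - math.exp(-50 * pd_)) / (1 - math.exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Conditional (stressed) PD at the chosen confidence level
    stressed_pd = norm.cdf((norm.ppf(pd_) + math.sqrt(r) * norm.ppf(confidence))
                           / math.sqrt(1 - r))
    # Capital covers the conditional loss in excess of expected loss (PD x LGD)
    return lgd * (stressed_pd - pd_)

# Illustrative inputs: PD = 1%, LGD = 45%
print(f"{irb_capital(0.01, 0.45):.2%}")           # roughly 6% of EAD at 99.9%
print(f"{irb_capital(0.01, 0.45, 0.9995):.2%}")   # a little higher at a "AA style" 99.95%
```

The neatness is seductive: pick a confidence level, read off a capital requirement. The catch, as discussed below, is that the inputs are less well behaved than the formula implies.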

If only it were that simple … but expected loss is still a good place to start

In practice, the inherently cyclical nature of banking means that the line between expected and unexpected loss is not always as simple or clear as represented above. It would be tempting to believe that the transition to expected loan loss accounting will bring greater transparency to this question but I doubt that is the case. Regulatory Expected Loss (REL) is another possible candidate but again I believe it falls short of what would be desirable for drawing the line that signals where we are increasingly likely to have crossed from the domain of the expected to the unexpected.

The problem (from a capital adequacy perspective) with both IFRS9 and REL is that the “expected” value still depends on the state of the credit cycle at the time we are taking its measure. REL incorporates a Downturn measure of Loss Given Default (DLGD) but the other inputs (Probability of Default and Exposure at Default) are average values taken across a cycle, not the values we expect to experience at the peak of the cycle downturn.
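To make the distinction concrete, here is a toy calculation of expected loss under the standard EL = PD x LGD x EAD decomposition. The numbers are invented for illustration; the point is simply that the answer depends heavily on whether the PD input is a cycle average or the value expected at the peak of the downturn.

```python
# Illustrative only: the "expected" loss figure depends on which PD you feed in.
# EL = PD x LGD x EAD; all values below are invented assumptions, not estimates.
ead = 100.0            # exposure at default
ttc_pd = 0.010         # through-the-cycle average probability of default
downturn_pd = 0.030    # probability of default expected at the peak of the downturn
dlgd = 0.40            # downturn loss given default (as used in Regulatory EL)

regulatory_el = ttc_pd * dlgd * ead       # cycle-average PD, downturn LGD
downturn_el = downturn_pd * dlgd * ead    # PD conditioned on the downturn itself

print(f"Regulatory EL: {regulatory_el:.2f} per 100 of exposure")   # 0.40
print(f"Downturn EL:   {downturn_el:.2f} per 100 of exposure")     # 1.20
```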

We typically don’t know exactly when the credit cycle will turn down, by how much or for how long, but we can reasonably expect that it will turn down at some time in the future. Notwithstanding the “Great Moderation” thesis that gained currency prior to the GFC, the long run of history suggests that it is dangerous to bet against the probability of a severe downturn occurring once every 15 to 25 years. Incorporating a measure into the Internal Capital Adequacy Assessment Process (ICAAP) that captures this aspect of expected loss provides a useful reference point and a potential trigger for reviewing why a capital decline has exceeded expectations.

Uncertainty is by definition not measurable

One of the problems with advanced model based approaches like IRB is that banks experience large losses much more frequently than the models suggest they should. As a consequence, the seemingly high margins of safety implied by the 1:1000 year plus confidence levels used in the modelling do not appear to live up to their promise.

A better way of dealing with uncertainty

One of the core principles underpinning this proposal is that the boundary between risk (which can be measured with reasonable accuracy) and uncertainty (which cannot be measured with any degree of precision) probably lies around the 1:25 year confidence level (what we usually label a “severe recession”). I recognise that reasonable people might adopt a more conservative stance, arguing that the zone of validity of credit risk models caps out at 1:15 or 1:20 confidence levels, but I am reasonably confident that 1:25 defines the upper boundary of where credit risk models tend to find their limits. Each bank can make its own call on this aspect of risk calibration.

Inside this zone of validity, credit risk models coupled with stress testing and sensitivity analysis can be applied to generate a reasonably useful estimate of expected losses and capital impacts. There is of course no guarantee that the impacts will not exceed the estimate; that is why we have capital. The estimate does however define the rough limits of what we can claim to “know” about our risk profile.
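The arithmetic behind these return periods is worth making explicit. The sketch below simply translates the 1:25 year calibration discussed above into an annual confidence level and into the probability of seeing at least one such downturn over a (hypothetical) ten year planning horizon.

```python
# Translating a "1-in-T-year" event into a confidence level and a horizon probability.
# The 25-year return period comes from the discussion above; the 10-year horizon is
# just an illustrative planning window.
return_period = 25                      # a "severe recession" style downturn
annual_confidence = 1 - 1 / return_period
print(f"Annual confidence level: {annual_confidence:.0%}")   # 96%

horizon = 10                            # years
p_at_least_one = 1 - (1 - 1 / return_period) ** horizon
print(f"P(at least one severe downturn in {horizon} years): {p_at_least_one:.0%}")  # ~34%
```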

The “expected versus unexpected” distinction is all a bit abstract – why does it matter?

Downturn loss is part of the risk reward equation of banking and is manageable, especially if the cost of expected downturn losses has already been built into credit risk spreads. Managing the risk is easier, however, if a bank’s risk appetite statement has a clear sense of:

  • exactly what kind of expected downturn loss is consistent with the specific types of credit risk exposure the risk appetite otherwise allows (i.e. not just the current exposure but also any higher level of exposure that is consistent with credit risk appetite) and
  • the impact this would be expected to have on capital adequacy.

This type of analysis is done under the general heading of stress testing for both credit risk and capital adequacy but I have not often seen evidence that banks are translating the analysis and insight into a specific buffer assigned the task of absorbing expected downturn losses and the associated negative impact on capital adequacy. The Cyclical Buffer I have outlined in this post offers a means of more closely integrating the credit risk management framework and the Internal Capital Adequacy Assessment Process (ICAAP).

What gets you into trouble …

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so”

Commonly, possibly mistakenly, attributed to Mark Twain

This saying captures an important truth about the financial system. Some degree of volatility is part and parcel of the system but one of the key ingredients in a financial crisis or panic is when participants in the system are suddenly forced to change their view of what is safe and what is not.

This is one of the reasons why I believe that a more transparent framework for tracking the transition from expected to truly unexpected outcomes can add to the resilience of the financial system. Capital declines that have been pre-positioned in the eyes of key stakeholders as part and parcel of the bank risk reward equation are less likely to be a cause for concern or trigger for panic.

The equity and debt markets will still revise their valuations in response but the debt markets will have less reason to question the fundamental soundness of the bank if the capital decline lies within the pre-positioned operating parameters defined by the target cyclical buffer. This will be especially so to the extent that the Capital Conservation Buffer provides substantial layers of additional buffer to absorb the uncertainty and buy time to respond to it.

Calibrating the size of the Cyclical Buffer

Incorporating a Cyclical Buffer does not necessarily mean that a bank needs to hold more capital. It is likely to be sufficient to simply partition a set amount of capital that bank management believes will absorb the expected impact of a cyclical downturn. The remaining buffer capital over minimum requirements exists to absorb the uncertainty and ensure that confidence sensitive liabilities are well insulated from the impacts of that uncertainty.
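Mechanically, the partitioning might look something like the sketch below, where the Cyclical Buffer is simply the CET1 drawdown expected under the bank’s downturn scenario. All of the inputs are hypothetical placeholders for the outputs of a bank’s own stress testing, and the harder question of which downturn to calibrate to is taken up next.

```python
# Stylised calibration of a cyclical buffer from stress-testing outputs.
# All inputs are hypothetical placeholders; a real calibration would come from the
# bank's own downturn scenario analysis.
rwa = 400.0                      # risk weighted assets ($bn)
cet1_ratio = 0.105               # starting CET1 ratio (10.5%)

downturn_credit_losses = 10.0    # cumulative downturn losses above the benign-year run rate
pre_provision_earnings = 4.0     # earnings available to absorb losses over the downturn
rwa_inflation = 0.05             # RWA growth from ratings migration under stress

cet1_capital = cet1_ratio * rwa
stressed_capital = cet1_capital - (downturn_credit_losses - pre_provision_earnings)
stressed_ratio = stressed_capital / (rwa * (1 + rwa_inflation))

cyclical_buffer = cet1_ratio - stressed_ratio
print(f"Expected CET1 drawdown / cyclical buffer: {cyclical_buffer:.2%}")
```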

But first we have to define what we mean by “THE CYCLE”. This is a term frequently employed in the discussion of bank capital requirements but open to a wide range of interpretation.

A useful start to calibrating the size of this cyclical buffer is to distinguish:

  • An economic or business cycle, which seems to be associated with moderate severity, short duration downturns occurring once every 7 to 10 years, and
  • The “financial cycle” (to use a term suggested by Claudio Borio) where we expect to observe downturns of greater severity and duration but lower frequency (say once every 25 years or more).

Every bank makes its own decision on risk appetite but, given these two choices, mine would be calibrated to, and hence resilient against, the less frequent but more severe and longer duration downturns associated with the financial cycle.

There is of course another layer of severity associated with a financial crisis. This poses an interesting challenge because it raises the question of whether a financial crisis is the result of some extreme external shock or due to failures of risk management that allowed an endogenous build up of risk in the banking system. This kind of loss is, I believe, the domain of the Capital Conservation Buffer (CCB).

There is no question that banks must be resilient in the face of a financial crisis but my view is that this is not something that should be considered an expected cost of banking.

Incorporating a cyclical buffer into the capital structure for an Australian D-SIB

Figure 2 below sets out an example of how this might work for an Australian D-SIB that has adopted APRA’s 10.5% CET1 “Unquestionably Strong” benchmark as the basis of its target capital structure. These banks have a substantial layer of CET1 capital that is nominally surplus to the formal prudential requirements but in practice is not if the bank is to be considered “unquestionably strong” as defined by APRA. The capacity to weather a cyclical downturn might be implicit in the “Unquestionably Strong” benchmark but it is not transparent. In particular, it is not obvious how much CET1 can decline under a cyclical downturn while a bank is still deemed to be “Unquestionably Strong”.

Figure 2 – Incorporating a cyclical buffer into the target capital structure

The proposed Cyclical Buffer sits on top of the Capital Conservation Buffer and would be calibrated to absorb the increase in losses, and the associated drawdowns on capital, expected to be experienced in the event of a severe economic downturn. Exactly how severe is to some extent a question of risk appetite, unless of course regulators mandate a capital target that delivers a higher level of soundness than the bank would have chosen of its own volition.

In the example laid out in Figure 2, I have drawn the limit of risk appetite at the threshold of the Capital Conservation Buffer. This would be an 8% CET1 ratio for an Australian D-SIB but there is no fundamental reason for drawing the line on risk appetite at this threshold. Each bank has the choice of tolerating some level of incursion into the CCB (hence the dotted line extension of risk appetite). What matters is to have a clear line beyond which higher losses and lower capital ratios indicate that something truly unexpected is driving the outcomes being observed.
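A stylised decomposition of the capital stack in Figure 2 might look like the sketch below. The 4.5% minimum plus a 3.5% Capital Conservation Buffer (which for an Australian D-SIB includes the 1% D-SIB surcharge) give the 8% threshold referred to above, and the residual up to the 10.5% benchmark is the amount available to be badged as the Cyclical Buffer. The split is indicative only; each bank would map its own targets.

```python
# Stylised CET1 stack for an Australian D-SIB targeting the 10.5% benchmark.
# The decomposition is indicative only; banks may set different internal targets.
minimum_cet1 = 0.045       # prudential minimum
ccb = 0.035                # Capital Conservation Buffer incl. 1% D-SIB surcharge
unquestionably_strong = 0.105

ccb_threshold = minimum_cet1 + ccb                        # 8.0% - top of the CCB
cyclical_buffer = unquestionably_strong - ccb_threshold   # 2.5% available as CyB

print(f"Top of CCB:      {ccb_threshold:.1%}")
print(f"Cyclical Buffer: {cyclical_buffer:.1%}")
```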

What about the prudential Counter-Cyclical Capital Buffer?

I have deliberately avoided using the term “counter-cyclical” in this proposal to distinguish this bank controlled Cyclical Buffer (CyB) from its prudential counterpart, the Counter-Cyclical Capital Buffer (CCyB), introduced under Basel III. My proposal is similar in concept to the variations on the CCyB being developed by the Bank of England and Canada’s OSFI. The RBNZ is also considering something similar in its review of “What counts as capital?”, where it has proposed that the CCyB should have a positive value (indicatively set at 1.5%) at all times except following a financial crisis (see paras 105-112 of the Review Paper for more detail).

My proposal is also differentiated from its prudential counterpart by the way in which the calibration of the size of the bank Cyclical Buffer offers a way for credit risk appetite to be more formally integrated with the Internal Capital Adequacy Assessment Process (ICAAP) that sets the overall target capital structure.

Summing up

  • Incorporating a Cyclical Buffer into the target capital structure offers a means of more closely integrating the risk exposure and capital adequacy elements of a bank’s risk appetite
  • A breach of the Cyclical Buffer creates a natural trigger point for reviewing whether the unexpected outcome was due to an unexpectedly large external shock, the result of credit exposures being riskier than expected, or some combination of the two
  • The role of the Capital Conservation Buffer in absorbing the uncertainty associated with risk appetite settings is much clearer if management of cyclical expected loss is assigned to the Cyclical Buffer

What am I missing …

Tony

Will Expected Loss loan provisioning reduce pro cyclicality?

I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading: Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interest die-hard bank capital and credit tragics but I believe it has the potential to create some problems in the real world some way down the track.

Borio’s position is that:

  • Relative to the “incurred loss” approach to credit risk that precedes it, the new standard is likely to mitigate pro cyclicality to some extent;
  • But it will not be sufficient on its own to eliminate the risk of adverse pro cyclical impacts on the real economy;
  • So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of pro cyclicality) that will work in conjunction with, and complement, the operation of loan loss provisions in managing credit risk.

There are two ways to respond to Claudio Borio’s observations on this topic:

  1. One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate pro cyclicality;
  2. The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan provisions

Will ECL reduce the risk of pro cyclicality?

It is true that, relative to the incurred loss model, the ECL approach will allow loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it does give them more freedom to increase provisions without the constraint of this being seen as a cynical device to “smooth” profits.

The problem I see in this assessment is that the real problems with the adequacy of loan provisioning occur when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.

This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of loans originated by the banks (or other risk management failures) because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from expected loss to the domain of uncertainty.

The solution to pro cyclicality lies in capital not expected loss

So the real issue is what to do about that. Borio argues that ECL helps, but that you really need to address the problem via what he refers to as “capital filters” (what we might label counter cyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:

  • let accounting standards do what they do, don’t try to make them solve prudential problems;
  • construct a capital adequacy solution that complements the accounting based measurement of capital and profits.

Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.

In the interim, the main takeaway for me is that ECL is not enough on its own to address the problem of pro cyclicality and, more importantly, it is dangerous to think it can.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group.. [and] can be good or bad [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. First ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both activities but luck is likely to be a more significant factor in the latter group.
  2. Determining the rate of reversion to the mean: Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conform to a power law (i.e. there are lots of small values and relatively few large values).
  3. Is there evidence that expert prediction is useful? When experts have wide disagreement and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is figuring out how large a sample size you need to determine whether there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected, or because a reliable base rate simply does not exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
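A toy simulation makes the point. The loss rates and downturn frequency below are invented purely for illustration, but they show how an “average” annual loss rate estimated from a short window depends almost entirely on whether that window happens to contain a downturn year.

```python
# Toy illustration: the sample "expected loss" over short windows is dominated by
# whether a downturn year falls inside the window. All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
years = 100
downturn = rng.random(years) < 1 / 20                    # ~1-in-20-year downturns
loss_rate = np.where(downturn,
                     rng.uniform(0.015, 0.030, years),   # downturn-year loss rates
                     rng.uniform(0.001, 0.004, years))   # benign-year loss rates

window = 10
window_means = [loss_rate[i:i + window].mean() for i in range(years - window)]
print(f"Long-run average loss rate: {loss_rate.mean():.3%}")
print(f"10-year window estimates range from {min(window_means):.3%} "
      f"to {max(window_means):.3%}")
```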

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that proves that my base belief is wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation (a simple sketch of this relationship follows the list)
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but also must recognise how much we don’t know. We must recognise that the realm may change such that old rules don’t apply and there are places where statistics don’t apply
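On point 8, the link between reversion to the mean and the coefficient of correlation can be written down quite simply. The sketch below is the standard regression-to-the-mean estimate (assuming, for simplicity, that the spread of outcomes is stable from period to period); the correlation values and numbers are assumptions chosen purely for illustration.

```python
# Standard regression-to-the-mean estimate: the lower the period-to-period
# correlation (i.e. the bigger the role of luck), the harder the shrink back
# towards the mean. The inputs below are illustrative assumptions.
def expected_next(observed, long_run_mean, correlation):
    """Best estimate of the next outcome given the last one."""
    return long_run_mean + correlation * (observed - long_run_mean)

# A result well above a long-run mean of 10%...
print(expected_next(observed=0.25, long_run_mean=0.10, correlation=0.8))  # skill-heavy: 0.22
print(expected_next(observed=0.25, long_run_mean=0.10, correlation=0.2))  # luck-heavy:  0.13
```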

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Minsky’s Financial Instability Hypothesis – Applications in Stress Testing?

One of the issues that we keep coming back to in stress testing is whether 1) the financial system is inherently prone to instability and crisis, or 2) the system naturally tends towards equilibrium and instability is due to external shocks. Any stress scenario that we design, or that we are asked to model, will fall somewhere along this spectrum, though I suspect most scenarios tend to be based on exogenous shocks. This touches on a long standing area of economic debate and hence is not something that we can expect to resolve any time soon. I think it is useful, however, to consider the question when designing stress tests and evaluating the outcomes.

From roughly the early 1980s until the GFC in 2008, the dominant economic paradigm was arguably that market forces, coupled with monetary and fiscal policy built on a sound understanding of how the economy works, meant that the business cycle was dead and that the primary challenge of policy was to engineer efficient capital allocations that maximised growth. The GFC obviously highlighted shortcomings with the conventional economic approach and drew attention to an alternative approach developed by Hyman Minsky which he labelled the Financial Instability Hypothesis.

Minsky’s Financial Instability Hypothesis (FIH)

Minsky focused on borrowing and lending with varying margins of safety as a fundamental property of all capitalist economies and identified three forms (a minimal sketch of the classification follows the list):

  • “Hedge” financing under which cash flow covers the repayment of principal and interest
  • “Speculative” financing under which cash flow covers interest but the principal repayments must be continually refinanced
  • “Ponzi” financing under which cash flow is insufficient to cover either interest or principal and the borrower is betting that appreciation in the value of the asset being financed will be sufficient to repay loan principal plus capitalised interest and generate a profit
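A minimal sketch of the classification, using the cash flow definitions above (the example figures are invented):

```python
# Classify a borrower into Minsky's three financing postures, based on whether
# period cash flow covers interest and principal. The thresholds follow the
# definitions above; the example figures are invented.
def minsky_posture(cash_flow: float, interest_due: float, principal_due: float) -> str:
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # cash flow services both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # interest covered, principal must be refinanced
    return "ponzi"            # relies on asset appreciation / capitalised interest

print(minsky_posture(cash_flow=120, interest_due=50, principal_due=60))  # hedge
print(minsky_posture(cash_flow=70,  interest_due=50, principal_due=60))  # speculative
print(minsky_posture(cash_flow=30,  interest_due=50, principal_due=60))  # ponzi
```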

The terms that Minsky uses do not strictly conform to modern usage but his basic idea is clear: increasingly speculative lending tends to be associated with increasing fragility of borrowers and the financial system as a whole. Ponzi financing is particularly problematic because the system is vulnerable to external shocks that restrict access to finance or that trigger an asset devaluation cycle as borrowers sell their assets in order to reduce their leverage. The downward pressure on asset prices associated with the deleveraging process then puts further pressure on the capacity to repay the loans, and so on.

The term “Minsky moment” has been used to describe the inflexion point where debt levels become unsustainable and asset prices fall as investors seek to deleverage. Investor psychology is obviously one of the primary drivers of this three-stage cycle; investor optimism translates into a willingness to borrow and to pay more for assets, and the higher asset valuations in turn allow lenders to lend more against set loan to valuation caps. Lenders can also be caught up in the mood of optimism and take on more risk (e.g. via higher loan to valuation ratio limits or more relaxed debt service coverage requirements). Minsky stated that “the fundamental assertion of the financial instability hypothesis is that the financial structure evolves from being robust to being fragile over a period in which the economy does well” (Financial Crises: Systemic or Idiosyncratic by Hyman Minsky, April 1991, p16).

It should also be noted that a Minsky moment does not require an external shock; a simple change in investor outlook or risk tolerance could be sufficient to trigger the reversal. Minsky observed that the tendency of the endogenous process he described to lead to systemic fragility and instability is constrained by institutions and interventions that he described as “thwarting systems” (“Market Processes and Thwarting Systems” by P. Ferri and H. Minsky, November 1991, p2). However Minsky’s FIH also assumes that there is a longer term cycle in which these constraints are gradually wound back, allowing more and more risk to accumulate in the system over successive business cycles.

What Minsky describes is similar to the idea of a long term “financial cycle” (25 years plus) being distinct from the shorter duration “business cycle” (typically 7-10 years) – see this post on “The financial cycle and macroeconomics: What have we learnt?” for more detail. An important feature of this longer term financial cycle is a process that gradually transforms the business institutions, decision-making conventions, and structures of market governance, including regulation, that contribute to the stability of capitalist economies.

The transformation process can be broken down into two components

  1. winding back of regulation and
  2. increased risk taking

which in combination increase both the supply of and demand for risk. The process of regulatory relaxation can take a number of forms:

  • One dimension is regulatory capture; whereby the institutions designed to regulate and reduce excessive risk-taking are captured and weakened
  • A second dimension is regulatory relapse; reduced regulation may be justified on the rationale that things have changed and regulation is no longer needed, but there is often an ideological foundation typically based on economic theory (e.g. the “Great Moderation” or market discipline underpinning self-regulation).
  • A third dimension is regulatory escape; whereby the supply of risk is increased through financial innovation that escapes the regulatory net because the new financial products and practices were not conceived of when existing regulation was written.

Borrowers also take on more risk for a variety of reasons:

  • First, financial innovation provides new products that allow borrowers to take on more debt or which embed higher leverage inside the same nominal value of debt.
  • Second, market participants are also subject to gradual memory loss that increases their willingness to take on risk

The changing taste for risk is also evident in cultural developments which can help explain the propensity for investors to buy shares or property. A greater proportion of the population currently invest in shares than was the case for their parents or grandparents. These individual investors are actively engaged in share investing in a way that would be unimaginable for the generations that preceded them. Owning your own home and ideally an investment property as well is an important objective for many Australians but less important in say Germany.

These changes in risk appetite can also weaken market discipline based constraints against excessive risk-taking. A book titled “The Origin of Financial Crises” by George Cooper (April 2008) is worth reading if you are interested in the ideas outlined above. A collection of Minsky’s papers can also be found here  if you are interested in exploring his thinking more deeply.

I have been doing a bit of research lately, both on the question of what exactly Expected Loss “expects” and on the ways in which cycle downturns are defined. I may be missing something, but I find this distinction between endogenous and exogenous factors largely missing from the discussion papers that I have found so far and from stress testing itself. I would greatly appreciate some suggestions if anyone has come across any good material on the issue.

Tony

Worth Reading “The Money Formula” by Paul Wilmott and David Orrell.

The full title of this book, co-written by Paul Wilmott and David Orrell, is “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took over the Markets“. There are plenty of critiques of modelling and quantitative finance by outsiders throwing rocks but Wilmott is a quant and brings an insider’s technical knowledge to the question of what these tools can do, can’t do and perhaps most importantly should not be used to do. Consequently, the book offers a more nuanced perspective on the strengths and limitations of quantitative finance as opposed to the let’s scrap the whole thing school of thought. I have made some more detailed notes which follow the structure of the book but this post focuses on a couple of ideas I found especially interesting or useful.

I am not a quant so my comments should be read with that in mind, but the core idea I took away is that, much as quants would want it otherwise, markets are not determined by fundamental laws, deterministic or probabilistic, that allow risk to be measured with precision. These ideas work reasonably well within their “zone of validity” but a more complete answer (or model) has to recognise where the zones stop and uncertainty rules. Wilmott and Orrell argue market outcomes are better thought of as the “emergent result of complex transactions”. The role of money in these emergent results is especially important, as is the capacity of models themselves to materially reshape the risk of the markets they are attempting to measure.

The Role of Money

Some quotes I have drawn from Chapter 8 will let the authors speak for themselves on the role of money …

Consider …. the nature of money. Standard economic definitions of money concentrate on its roles as a “medium of exchange,” a “store of value,” and a “unit of account.” Economists such as Paul Samuelson have focused in particular on the first, defining money as “anything that serves as a commonly accepted medium of exchange.” … ” Money is therefore not something important in itself; it is only a kind of token. The overall picture is of the economy as a giant barter system, with money acting as an inert facilitator.” (emphasis added)

“However … money is far more interesting than that, and actually harbors its own kind of lively, dualistic properties. In particular, it merges two things, number and value, which have very different properties: number lives in the abstract, virtual world of mathematics, while valued objects live in the real world. But money seems to be an active part of the system. So ignoring it misses important relationships. The tension between these contradictory aspects is what gives money its powerful and paradoxical qualities.” (Emphasis added)

The real and the virtual become blurred, in physics or in finance. And just as Newtonian theories break down in physics, so our Newtonian approach to money breaks down in economics. In particular, one consequence is that we have tended to take debt less seriously than we should. (emphasis added)

Instead of facing up to the intrinsically uncertain nature of money and the economy, relaxing some of those tidy assumptions, accepting that markets have emergent properties that resist reduction to simple laws, and building a new and more realistic theory of economics, quants instead glommed on to the idea that, when a system is unpredictable, you can just switch to making probabilistic predictions.” (emphasis added)

“The efficient market hypothesis, for example, was based on the mechanical analogy that markets are stable and perturbed randomly by the actions of atomistic individuals. This led to probabilistic risk-analysis tools such as VaR. However, in reality, the “atoms” are not independent, but are closely linked … The result is the non-equilibrium behaviour … observed in real markets. Markets are unpredictable not because they are efficient, but because of a financial version of the uncertainty principle.” (emphasis added)

 The Role of Models

Wilmott & Orrell devote a lot of attention to the ways in which models no longer just describe, but start to influence, the markets being modelled mostly by encouraging people to take on more risk based in part on a false sense of security …

“Because of the bankers’ insistence on treating complex finance as a university end-of-term exam in probability theory, many of the risks in the system are hidden. And when risks are hidden, one is led into a false sense of security. More risk is taken so that when the inevitable happens, it is worse than it could have been. Eventually the probabilities break down, disastrous events become correlated, the cascade of dominoes is triggered, and we have systemic risk …. None of this would matter if the numbers were small … but the numbers are huge” (Chapter 10 – emphasis added)

They see High Frequency Trading as the area likely to give rise to a future systemic crisis, but they also make a broader point about the tension between efficiency and resilience:

“With complex systems, there is usually a trade-off between efficiency and robustness …. Introducing friction into the system – for example by putting regulatory brakes on HFT – will slow the markets, but also make them more transparent and reliable. If we want a more robust and resilient system then we probably need to agree to forego some efficiency” (Chapter 10 – emphasis added)

The Laws of Finance

Wilmott and Orrell note the extent to which finance has attempted to identify laws which are analogous to the laws of physics and the ways in which these “laws” have proved to be more of a rough guide.

“… the “law of supply and demand” … states that the market for a particular product has a certain supply, which tends to increase as the price goes up (more suppliers enter the market). There is also a certain demand for the product, which increases as the price goes down.”

“… while the supply and demand picture might capture a general fuzzy principle, it is far from being a law. For one thing, there is no such thing as a stable “demand” that we can measure independently –there are only transactions.”

“Also, the desire for a product is not independent of supply, or other factors, so it isn’t possible to think of supply and demand as two separate lines. Part of the attraction of luxury goods –or for that matter more basic things, such as housing –is exactly that their supply is limited. And when their price goes up, they are often perceived as more desirable, not less.” (emphasis added)

This example is relevant for banking systems (such as Australia’s) where residential mortgage lending dominates the balance sheets of the banks. Even more so given that public debate about the risk associated with housing often seems to be predicated on the economics 101 version of the laws of supply and demand.

The Power (and Danger) of Ideas

A recurring theme throughout the book is the ways in which economists and quants have borrowed ideas from physics without recognising the limitations of the analogies and assumptions they have relied on to do so. Wilmott and Orrell credit Sir Isaac Newton as one of the inspirations behind Adam Smith’s idea of the “Invisible Hand” co-ordinating the self-interested actions of individuals for the good of society. When the quantum revolution saw physics embrace a probabilistic approach, economists followed.

I don’t think Wilmott and Orrell make this point directly but a recurring thought reading the book was the power of ideas to not just interpret the underlying reality but also to shape the way the economy and society develops not always for the better.

  • Economic laws that drive markets towards equilibrium as their natural state
  • The “invisible hand” operating in markets to reconcile individual self interest with optimal outcomes for society as a whole
  • The Efficient Market Hypothesis as an explanation for why markets are unpredictable

These ideas have widely influenced quantitative finance in a variety of domains and they all contribute useful insights; the key is to not lose sight of their zone of validity.

“… Finance … took exactly the wrong lesson from the quantum revolution. It held on to its Newtonian, mechanistic, symmetric picture of an intrinsically stable economy guided to equilibrium by Adam Smith’s invisible hand. But it adopted the probabilistic mathematics of stochastic calculus.” (Chapter 8 – emphasis added)

Where to from here?

It should be obvious by now that the authors are arguing that risk and reward cannot be reduced to hard numbers in the way that physics has used similar principles and tools to generate practical insights into how the world works. Applying a bit of simple math in finance seems to open the door to getting some control over an unpredictable world and, even better, to pursuing optimisation strategies that allow the cognoscenti to fine-tune the balance between risk and reward. There is room for more complex math as well for those so inclined, but the book sides with the increasingly widely held view that simple math is enough to get you into trouble and further complexity is best avoided if possible.

Wilmott and Orrell highlight mathematical biology in general, and a book by Jim Murray on the topic, as a source of better ways to approach many of the more difficult modelling challenges in finance and economics. They start by listing a series of phenomena in biological models that seem to be useful analogues for what happens in financial markets. They concede that the models used in mathematical biology are almost all “toy” models. None of these models offer precise or determined outcomes but all can be used to explain what is happening in nature and offer insights into solutions for problems like disease control, epidemics and conservation.

The approach they advocate seems to have a lot in common with the Agent Based Modelling approach that Andrew Haldane references (see his paper on “Tails of the Unexpected“) and that is the focus of Bookstaber’s book (“The End of Theory”).

In their words …

“Embrace the fact that the models are toy, and learn to work within any limitations.”

“Focus more attention on measuring and managing resulting model risk, and less time on complicated new products.”

“… only by remaining both skeptical and agile can we learn. Keep your models simple, but remember they are just things you made up, and be ready to update them as new information comes in.”

I fear I have not done the book justice but I got a lot out of it and can recommend it highly.


The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.

“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”

Haldane and Nelson highlight a semantic shift in the 1870s where the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe.

“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”

“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”

Notwithstanding its widespread use today, in Haldane and Nelson’s account economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised and Haldane cites the rise of probability based quantum physics over old school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”

“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”

“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.

“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken”

This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does however raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.

We can of course estimate the mean loss under the benign part of the credit cycle but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn are hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can for example talk about moderate downturns every 7-10 years, more severe recessions every 25-30 years and a 75 year cycle for financial crises. However the data is obviously sparse so it does not allow the kind of precision that is part and parcel of normally distributed events.
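The point about ill-defined means is easy to demonstrate. The short simulation below (with distribution parameters chosen purely for illustration) contrasts the running mean of normally distributed samples, which settles down quickly, with the running mean of samples from a heavy-tailed Pareto distribution, which keeps being dragged around by rare, extreme draws.

```python
# Contrast how the running mean behaves for a thin-tailed versus a heavy-tailed
# distribution. The parameters are chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

normal_draws = rng.normal(loc=1.0, scale=1.0, size=n)
pareto_draws = 1.0 + rng.pareto(a=1.1, size=n)   # heavy tail: shape parameter just above 1

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

rm_normal, rm_pareto = running_mean(normal_draws), running_mean(pareto_draws)
for k in (1_000, 10_000, 100_000):
    print(f"n={k:>7}:  normal mean = {rm_normal[k - 1]:.2f},  "
          f"pareto mean = {rm_pareto[k - 1]:.2f}")
```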

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new, but I found it useful for reflecting on the practical implications. One of the drivers they call out is competition, which economists typically assume to be a wholly benign force. That is generally true, but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains, but it can also contribute to progressively greater risk taking (e.g. more lax lending standards, lower margins for tail risk), thereby setting the system up for a self-organised critical state. Risk-based capital requirements can also contribute to self-organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and they note the distinguished company that has also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge”.”
Assuming the uncertainty paradigm were embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts
Haldane and Nelson note that these types of systems are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with agent-based modelling. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models, and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent-based modelling but a few things impede its wider use. One is that the models don’t offer the kinds of precision that make the DSGE and VaR models so attractive. The other is that they require a large investment of time to build, and most practitioners are fully committed just keeping the existing models going. Finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
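For readers who have not seen one, the toy model below gives a flavour of what “many interacting agents and non-linear tipping points” means in practice. It is my own minimal sketch, not a model from any of the papers mentioned above, and every parameter is an arbitrary assumption: heterogeneous banks hold a common asset, a bank that breaches its leverage limit sells, the selling depresses the price, and the price fall can push other banks over the limit.

```python
import numpy as np

rng = np.random.default_rng(1)

n_banks = 50
assets = rng.uniform(80, 120, n_banks)                    # holdings of a common asset
equity = 0.08 * assets * rng.uniform(0.8, 1.2, n_banks)   # thin, uneven equity (assumption)
max_leverage = 25.0                                       # illustrative leverage cap
price_impact = 0.0005                                     # price drop per unit of assets sold

def simulate(initial_shock):
    """Apply a price shock, then let forced deleveraging play out."""
    price = 1.0 - initial_shock
    a = assets.copy()
    e = equity - assets * initial_shock                   # mark-to-market loss from the shock
    forced = np.zeros(n_banks, dtype=bool)
    for _ in range(100):                                  # iterate until no new forced sellers
        leverage = a * price / np.maximum(e, 1e-9)
        leverage[e <= 0] = np.inf                         # insolvent banks must sell
        sellers = (leverage > max_leverage) & ~forced
        if not sellers.any():
            break
        sold = a[sellers].sum()
        forced |= sellers
        a[sellers] = 0.0                                  # sellers dump their holdings
        new_price = max(price - price_impact * sold, 0.0)
        e -= a * (price - new_price)                      # survivors take further losses
        price = new_price
    return forced.sum()

for shock in (0.01, 0.02, 0.03, 0.04):
    print(f"initial shock {shock:.0%} -> {simulate(shock)} of {n_banks} banks forced to deleverage")
```

Even a sketch this crude tends to display the behaviour the agent-based literature emphasises: small shocks are absorbed, while a marginally larger one can tip the system into a cascade.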

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
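The contrast between the two measures is easy to see with a toy balance sheet. The sketch below is my own illustration; the risk weights are indicative only, loosely in the spirit of a standardised approach rather than the actual weights for any particular portfolio, but it shows what “equally weighting the assets” means in practice.

```python
# Illustrative comparison of a risk-weighted capital ratio with a leverage
# ratio for a toy bank. The exposures and risk weights are assumptions made
# for the purpose of the example.

exposures = {                      # asset class: (exposure, assumed risk weight)
    "sovereign bonds":       (300.0, 0.00),
    "residential mortgages": (500.0, 0.35),
    "corporate loans":       (150.0, 1.00),
    "unsecured retail":      (50.0, 0.75),
}
capital = 40.0

total_assets = sum(amount for amount, _ in exposures.values())
rwa = sum(amount * weight for amount, weight in exposures.values())

print(f"Risk-weighted capital ratio: {capital / rwa:.1%}")          # sensitive to the weights
print(f"Leverage ratio:              {capital / total_assets:.1%}")  # weights every asset equally
```

The risk-weighted ratio moves with every change to the weights, while the leverage ratio ignores them entirely; that is both its strength (robustness to mis-specified weights) and its weakness (insensitivity to genuine differences in risk).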
Structural separation is another simple approach to the problem of making the system more resilient
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over-engineering
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system, and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio comes under stress.
In theory the calculation should be simple and intuitive (the headline schedule is sketched below), but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
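For reference, here is a minimal sketch of the headline Basel III conservation schedule: a 4.5% CET1 minimum plus a 2.5% conservation buffer divided into quartiles, each with its own minimum earnings retention requirement. The figures are the standard Basel III calibration; the sketch deliberately ignores the countercyclical buffer, D-SIB add-ons and national variations, which is where much of the real-world modelling complexity comes from.

```python
def minimum_conservation_ratio(cet1_ratio):
    """Minimum share of earnings that must be retained under the headline
    Basel III schedule: a 4.5% CET1 minimum plus a 2.5% conservation buffer
    divided into quartiles of 0.625%. Other buffers and national variations
    are deliberately ignored."""
    quartile = 0.025 / 4
    if cet1_ratio <= 0.045 + 1 * quartile:    # CET1 <= 5.125%
        return 1.00
    elif cet1_ratio <= 0.045 + 2 * quartile:  # CET1 <= 5.750%
        return 0.80
    elif cet1_ratio <= 0.045 + 3 * quartile:  # CET1 <= 6.375%
        return 0.60
    elif cet1_ratio <= 0.045 + 4 * quartile:  # CET1 <= 7.000%
        return 0.40
    return 0.00

for ratio in (0.050, 0.060, 0.068, 0.075):
    retained = minimum_conservation_ratio(ratio)
    print(f"CET1 {ratio:.2%} -> retain at least {retained:.0%} of earnings")
```

In practice this schedule has to be combined with projected earnings, planned distributions and the other buffers that sit on top of the 7 per cent threshold, which is where the complexity starts.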

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile”, as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

Lessons for banking in Pixar’s approach to dealing with uncertainty and the risk of failure.

The report on the Prudential Inquiry into the CBA (“CBA Report”) is obviously required reading in banking circles this week. Plenty has been written on the topic already so I will try to restrain myself unless I can find something new to add to the commentary. However, while reading the report, I found myself drawing links to books that I think bankers would find well worth reading. These include “Foolproof” (by Greg Ip) and “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing” (by Michael Mauboussin).

I have put up some notes on Foolproof here and intend to do the same for The Success Equation sometime soon. The focus for today’s post however is a book titled “Creativity, Inc” by Ed Catmull, who co-founded and led Pixar. The overall theme of the book is developing and sustaining a creative culture, but dealing with risk and uncertainty emerges as a big part of this.

What does making movies have to do with banking?

One of the lessons Catmull emphasised was that, notwithstanding Pixar’s success, it was important not to lose sight of the role that random factors play in both success and failure. A quote from Ch 8 illustrates this point:

“… a lot of our success came because we had pure intentions and great talent, and we did a lot of things right, but I also believe that attributing our success solely to our own intelligence without acknowledging the role of accidental events, diminishes us.”

He goes on to describe how success can be a trap for the following reasons:

  • it creates the impression that what you are doing must be right,
  • it tempts you to overlook hidden problems and
  • you may be confusing luck with skill.

There is a discussion in Ch 9 of the kinds of things that can lead you to misunderstand the real nature of both your success and your failure. These include various cognitive biases (such as confirmation bias, where you give more weight to information that supports what you already believe than to the counter-evidence) and the mental models we use to simplify the world in which we operate. These are hard-wired into us, so the best we can do is be aware of how they can take us off track; that at least puts us ahead of those who blindly follow their mental models and biases.

His answer to building the capacity to adapt to change and respond to setbacks is to trust people, but trust does not mean assuming people won’t make mistakes. Catmull accepts setbacks and screw-ups as an inevitable part of being creative and innovative; trust is demonstrated when you support your people after they screw up and trust them to find the solution.

This is interesting because the CBA Report indicates that CBA did in fact place a great deal of trust in its executive team and senior leaders, which implies trust alone is not enough. The missing ingredients in CBA’s case were accountability and consequence when the team failed to identify, escalate and resolve problems.

The other interesting line of speculation is whether CBA’s risk culture might have benefited from a deeper reflection on the difference between skill and luck. Mauboussin’s book (The Success Equation) is particularly good in the way he lays out a framework for making this distinction.

I plan to come back to this topic once I have completed a review of Mauboussin’s book but in the interim I can recommend all of the books mentioned in this post.

“The End of Alchemy” by Mervyn King

Anyone interested in the conceptual foundations of money and banking will I think find this book interesting. King argues that the significant enhancements to capital and liquidity requirements implemented since the GFC are not sufficient because of what he deems to be fundamental design flaws in the modern system of money and banking.

King is concerned with the process by which bank lending creates money in the form of bank deposits and with the process of maturity transformation in banking under which long term, illiquid assets are funded to varying degrees by short term liabilities including deposits. King applies the term “alchemy” to these processes to convey the sense that the value created is not real on a risk adjusted basis.

He concedes that there will be a price to pay in foregoing the “efficiency benefits of financial intermediation” but argues that these benefits come at the cost of a system that:

  • is inherently prone to banking crises because, even post Basel III, it is supported by too little equity and too little liquidity, and
  • can only be sustained in the long run by the willingness of the official sector to provide Lender of Last Resort liquidity support.

King’s radical solution is that all deposits must be 100% backed by liquid reserves, which would be limited to safe assets such as government securities or reserves held with the central bank. King argues that this removes the risk of, and incentive for, bank runs. For those with an interest in economic history, he acknowledges that the idea originated with “many of the most distinguished economists of the first half of the twentieth century”, who proposed an end to fractional reserve banking under what became known as the “Chicago Plan”. Since deposits are backed by safe assets, it follows that all other assets (i.e. loans to the private sector) must be financed by equity or long-term debt.

The intended result is to separate

  • safe, liquid “narrow” banks issuing deposits and carrying out payment services
  • from risky, illiquid “wide” banks performing all other activities.

At this point, King notes that the government could in theory simply stand back and allow the risk of unexpected events to impact the value of the equity and liabilities of the banks but he does not advocate this. This is partly because volatility of this nature can undermine consumer confidence but also because banks may be forced to reduce their lending in ways that have a negative impact on economic activity. So some form of central bank liquidity support remains necessary.

King’s proposed approach to central bank liquidity support is what he colloquially refers to as a “pawnbroker for all seasons”, under which the central bank agrees up front how much it will lend to each bank against the collateral the bank can offer.

King argues that

“almost all existing prudential capital and liquidity regulation, other than a limit on leverage, could be replaced by this one simple rule”.

which “… would act as a form of mandatory insurance so that in the event of a crisis a central bank would be free to lend on terms already agreed and without the necessity of a penalty rate on its loans. The penalty, or price of the insurance, would be encapsulated by the haircuts required by the central bank on different forms of collateral”

leaving banks “… free to decide on the composition of their assets and liabilities… all subject to the constraint that alchemy in the private sector is eliminated”
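As I read it, the test King has in mind can be reduced to a single comparison: the cash a bank could raise from the central bank on pre-agreed terms (reserves plus the haircut-adjusted lending value of pre-positioned collateral) must cover the liabilities that could run. The sketch below is my own rendering of that idea; the haircut levels, asset classes and the definition of runnable liabilities are illustrative assumptions, not King’s calibration.

```python
# A minimal sketch of how a "pawnbroker for all seasons" style test might be
# expressed. Haircuts, asset classes and the runnable-liability figure are
# illustrative assumptions only.

haircuts = {              # asset class: assumed central bank haircut
    "central bank reserves": 0.00,
    "government bonds":      0.05,
    "mortgages":             0.30,
    "corporate loans":       0.45,
}

holdings = {              # the bank's asset holdings
    "central bank reserves": 50.0,
    "government bonds":      150.0,
    "mortgages":             500.0,
    "corporate loans":       300.0,
}

runnable_liabilities = 600.0   # deposits and short-term unsecured funding (assumption)

effective_liquid_assets = sum(
    amount * (1 - haircuts[name]) for name, amount in holdings.items()
)

print(f"Effective liquid assets: {effective_liquid_assets:.0f}")
print(f"Runnable liabilities:    {runnable_liabilities:.0f}")
print("Test:", "pass" if effective_liquid_assets >= runnable_liabilities else "fail")
```

The haircuts end up doing much the same work that risk weights do in the current regime, a point I come back to below.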

Underpinning King’s thesis are four concepts that appear repeatedly:

  • Disequilibrium; King explores ways in which economic disequilibrium repeatedly builds up followed by disruptive change as the economy rebalances
  • Radical uncertainty; this is the term he applies to Knight’s concept of uncertainty as distinct from risk. He uses this to argue that any risk based approach to capital adequacy is not built on sound foundations because it will not capture the uncertain dimension of unexpected loss that we should be really concerned with
  • The “prisoner’s dilemma”; he uses this to illustrate the difficulty of achieving the best outcome when there are obstacles to cooperation
  • Trust; he sees trust as the key ingredient that makes a market economy work but also highlights how fragile that trust can be.

My thoughts on King’s observations and arguments

Given that King headed the Bank of England during the GFC, and was directly involved in the revised capital and liquidity rules (Basel III) that were created in response, his opinions should be taken seriously. It is particularly interesting that, notwithstanding his role in the creation of Basel III, he argues that a much more radical solution is required.

I think King is right in pointing out that the banking system ultimately relies on trust and that this reliance in part explains why the system is fragile. Trust can and does disappear, sometimes for valid reasons but sometimes because fear simply takes over even when there is no real foundation for doubting the solvency of the banking system. I think he is also correct in pointing out that a banking system based on maturity transformation is inherently illiquid and that the only way to achieve 100% certainty of liquidity is to have one class of safe, liquid “narrow” banks issuing deposits and another class of risky, illiquid institutions, which he labels “wide” banks, providing funding on a maturity-matched basis. This second class of funding institution would arguably not be a bank if we reserve that term for institutions that have the right to issue “bank deposits”.

King’s explanation of the way bank lending under the fractional reserve system creates money covers a very important aspect of how the modern banking and finance system operates. This is a bit technical but I think it is worth understanding because of the way it underpins and shapes so much of the operation of the economy. In particular, it challenges the conventional thinking that banks simply mobilise deposits. King explains how banks do more than mobilise a fixed pool of deposits; the process of lending in fact creates new deposits which add to the money supply. For those interested in understanding this in more depth, the Bank of England published a short article in its Quarterly Bulletin (Q1 2014) that you can find at the following link
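The mechanics can be shown with a toy balance sheet. The sketch below is simply my illustration of the point King and the Bank of England article make (the figures are arbitrary): when a bank lends, it credits the borrower’s deposit account, so a new asset (the loan) and a new liability (the deposit) appear simultaneously and broad money rises without any pre-existing deposit being transferred.

```python
# Toy balance sheet illustrating "loans create deposits". Figures are arbitrary.

bank = {
    "assets":      {"reserves": 100.0, "loans": 900.0},
    "liabilities": {"deposits": 800.0, "equity": 200.0},
}

def make_loan(bank, amount):
    """Book a new loan by crediting the borrower's deposit account."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount   # new money, not a transfer of old deposits

def broad_money(bank):
    return bank["liabilities"]["deposits"]

print("Deposits before:", broad_money(bank))
make_loan(bank, 50.0)
print("Deposits after: ", broad_money(bank))
print("Balance sheet still balances:",
      sum(bank["assets"].values()) == sum(bank["liabilities"].values()))
```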

He is also correct, I think, in highlighting the limits of what risk based capital can achieve in the face of “radical uncertainty” but I don’t buy his proposal that the leverage ratio is the solution. He claims that his “pawnbroker for all seasons” approach is different from the standardised approach to capital adequacy but I must confess I can’t see that the approaches are that different. So even if you accept his argument that internal models are not a sound basis for regulatory capital, I would still argue that a revised and well calibrated standardised approach will always be better than a leverage ratio.

King’s treatment of the “Prisoner’s Dilemma” in money and banking is particularly interesting because it sets out a conceptual rationale for why markets will not always produce optimal outcomes when there are obstacles to cooperation. This brings to mind Chuck Prince’s infamous statement about being forced to “keep dancing while the music is playing”, and it offers a rationale for regulation that helps institutions avoid situations in which competition impedes their ability to resist taking excessive risk. It challenges the view that market discipline alone would be sufficient to keep risk taking in check. It also offers a different perspective on the role of competition in banking, which economists sometimes treat as a panacea for all ills.

I have also attached a link to a review of King’s book by Paul Krugman.