Capital adequacy – an option to add transparency and flexibility into the “Unquestionably Strong” mix

Two of my earlier posts (here and here) discussed the potential to improve the capital adequacy framework by revisiting the calibration and operation of regulatory capital buffers. Some of the issues discussed in those posts are touched on in a discussion paper (DP) APRA has released titled “Improving the transparency, comparability and flexibility of the ADI capital framework”.

APRA is open to alternatives but sets out two options for discussion

In APRA’s words, the DP outlines

“… options to modify the ADI capital framework to improve transparency and comparability of reported capital ratios. The main conceptual approaches APRA is considering and seeking feedback on are:

  • developing more consistent disclosures without modifying the underlying capital framework; and

  • modifying the capital framework by adjusting the methodology for calculating capital ratios.”

The First Approach – “Consistent disclosure” – seems to be a beefed-up version of the status quo in which APRA gets more directly involved in the comparability process by adding its imprimatur to the internationally harmonised ratios that some Australian banks currently choose to disclose as an additional, informal measure of capital strength.

“Under this approach, ADIs would continue to determine regulatory capital ratios using APRA’s definitions of capital and RWA. However, APRA would also specify a methodology for ADIs to determine certain adjustments to capital and RWA that could be used for disclosure (Pillar 3) purposes. As noted above, the methodology would focus on aspects of relative conservatism that are material in size and able to be calculated simply and objectively.”

APRA argues that “The supplementary disclosure would allow all stakeholders to better assess the capital strength of an ADI on a more comparable basis. However, it would result in two APRA-endorsed capital ratios: an APRA regulatory capital ratio to be compared against minimum requirements, and an additional disclosure-only capital ratio for, in particular, international comparison.”

A Second Approach – “Capital ratio adjustments” – would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA.

The DP explains that this “… alternative approach would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA. This would involve removing certain aspects of relative conservatism from ADIs’ capital ratio calculations and lifting minimum regulatory capital ratio requirements in tandem. This increase in regulatory capital ratio requirements could be in the form of a transparent adjustment to minimum capital ratio requirements—for the purposes of this paper, such an adjustment is termed the ‘APRA Overlay Adjustment’.”

“To maintain overall capital adequacy, the APRA Overlay Adjustment would need to be calculated such that the total dollar amount of Prudential Capital Requirement (PCR) and Capital Conservation Buffer (CCB) would be the same as that required if these measures were not adopted. In other words, the risk-based capital requirements of ADIs would be unchanged in absolute dollar terms, maintaining financial safety, but adjustments to the numerator and the denominator of the capital ratio to be more internationally comparable would increase reported capital ratios.”
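
To make the mechanics concrete, here is a stylised example. The figures are mine, purely for illustration (they are not drawn from the DP): removing relative conservatism lifts the reported ratio, while the Overlay Adjustment lifts the percentage requirement so that the dollar amount of required capital is unchanged.

```python
# Illustrative only - the balance sheet figures and requirement are invented,
# not APRA's. The overlay here only compensates for the change in RWA; the
# actual calibration would also need to reflect changes in measured capital.
capital_apra = 40.0      # CET1 under APRA definitions ($bn)
rwa_apra = 400.0         # RWA under APRA definitions ($bn)
requirement_pct = 0.08   # assumed PCR + CCB under the current framework

add_back = 2.0           # conservative capital deductions removed ($bn)
rwa_reduction = 50.0     # RWA conservatism removed ($bn)

capital_harmonised = capital_apra + add_back
rwa_harmonised = rwa_apra - rwa_reduction

ratio_apra = capital_apra / rwa_apra                    # 10.0%
ratio_harmonised = capital_harmonised / rwa_harmonised  # 12.0%

# The Overlay Adjustment scales the percentage requirement up so that the
# dollar amount of PCR + CCB is unchanged.
required_dollars = requirement_pct * rwa_apra                # $32bn
requirement_harmonised = required_dollars / rwa_harmonised   # ~9.1%
overlay_adjustment = requirement_harmonised - requirement_pct

print(f"Reported ratio: {ratio_apra:.1%} -> {ratio_harmonised:.1%}")
print(f"Requirement:    {requirement_pct:.1%} -> {requirement_harmonised:.2%} "
      f"(APRA Overlay Adjustment of {overlay_adjustment:.2%})")
print(f"Required capital unchanged at ${required_dollars:.0f}bn")
```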

APRA clarifies that

“These options are not mutually exclusive, and there is potential for both approaches to be adopted and applied in different areas.”

Approach 2 has some clear advantages …

It would make the “unquestionably strong” capital ratios more directly comparable with international peers, thereby reducing the potential for the perception of this strength to be obscured or misunderstood.

“Perception” is the key word here. It matters that the strength of Australian banks is simple, transparent and evident, rather than being something where observers must understand a sequence of relatively obscure and complex adjustments to fully appreciate the strength of a bank’s capital. More importantly, perception matters most when the system is under stress and people do not have the time, or the inclination, to look beyond the reported numbers.

The adjusted capital ratio approach also provides an opportunity to increase the flexibility of the ADI capital framework in times of stress, but only to the extent that the Overlay Adjustment is applied to the capital buffer rather than to increasing the minimum capital requirements. Higher minimum requirements would do nothing to enhance flexibility and may even be a backward step.

I believe a non-zero baseline for the CCyB would also enhance the flexibility of the capital framework, because it improves the odds that the banks (and APRA) have a flexible buffer in place before it is needed. This opportunity for enhanced flexibility is an option under both approaches, so long as the Unquestionably Strong Benchmark maintains a material surplus over the Capital Conservation Buffer, as it currently does.

But also some challenges …

APRA notes that the Adjusted Capital Ratio approach:

  • May significantly increase operational complexity for ADIs, because the application of the APRA Overlay Adjustment would result in variable capital ratio requirements,

  • Potentially results in variable minimum capital requirements, which introduces complexity in analysing capital buffers and may undermine the desired transparency, and

  • Reduces the dollar value of the 5.125 per cent (of RWA) loss absorption trigger point.

Do the advantages of the Adjusted Capital Ratio approach outweigh the challenges?

The short answer, I think, is yes … albeit with some qualifications.

So far as I can see, the added complexity only enters the discussion to the extent that some of the APRA Overlay Adjustment is applied to increase the minimum capital requirement. Most, if not all, of the operational complexity is avoided if the “Overlay Adjustment” is confined to increasing the size of the capital buffer.

Conversely, the benefits of increased responsiveness (or risk sensitivity) and flexibility lie in an increased capital buffer.

It follows then that the best way to pursue this approach is for any harmonised adjustments to the reported capital ratio to be confined to a higher CCB. This raises the question of whether all of the Overlay Adjustment should be applied to the capital buffer. I address that question in my responses below to some of the questions APRA has posed to solicit industry feedback.

One issue not covered in the Discussion Paper in any detail is that the capital ratios under Approach 2 will be more sensitive to any changes in the numerator. This is a simple mathematical consequence of RWA being lower if more harmonised measures are adopted. I do not see this as a problem, but the heightened risk sensitivity of the framework needs to be clearly understood beforehand to minimise the potential for larger changes in capital ratios to be misunderstood. A more risk sensitive capital ratio may even be an advantage. This may not be obvious, but there is a body of research which suggests a more responsive, more volatile, measure of capital adequacy can be beneficial to the extent that it prompts greater risk awareness on the part of bank management and external stakeholders. Greg Ip’s book “Foolproof” offers an introduction to some of this research, but a simple example illustrating the point is the way that the benefits of improved braking in modern cars are offset, to some extent, by people driving faster.
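
A stylised example of the sensitivity point, using the same invented figures as the sketch above: the same dollar loss moves the harmonised ratio by more because the RWA denominator is smaller.

```python
# Illustrative only: a $4bn loss moves the reported ratio by more when RWA
# (the denominator) is smaller under harmonised measures.
loss = 4.0  # $bn

rwa_apra, capital_apra = 400.0, 40.0
rwa_harm, capital_harm = 350.0, 42.0   # same invented figures as above

delta_apra = capital_apra / rwa_apra - (capital_apra - loss) / rwa_apra
delta_harm = capital_harm / rwa_harm - (capital_harm - loss) / rwa_harm

print(f"APRA basis:       ratio falls {delta_apra:.2%}")   # 1.00 ppt
print(f"Harmonised basis: ratio falls {delta_harm:.2%}")   # 1.14 ppts
```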

APRA concludes its discussion paper with some direct questions.

There are 8 questions in all but in the context of this post I will have a go at addressing 3 of them, questions 2, 7 and 8.

Question 2: If APRA were to apply a combination of Approach 1 and Approach 2, which aspects of relative conservatism are best suited to be treated under Approach 2?

If you accept the argument that the minimum capital requirement should continue to be a set value (i.e. not subject to periodic adjustment), then the aspects of relative conservatism best suited to Approach 2 are those which can reasonably be assigned to an increase in, and regular adjustment of, one or more of the capital buffers.

Running through the list of adjustments currently applied to generate the internationally harmonised capital ratios, we can distinguish three broad categories of APRA conservatism:

  1. The extra credit-risk-related losses a bank might expect to experience under a very severe recession or financial-crisis-style scenario, but not necessarily a gone-concern scenario where losses extend into the tail of the loss distribution
  2. Assets whose value depends on the ADI being a going concern and which consequently are less certain to be realisable if the bank is in liquidation or has otherwise reached a point of non-viability
  3. Capital deductions intended to avoid “double counting” capital invested outside the ADI

There are very few areas of black and white in the response to this question, but the first group are the items of APRA conservatism that I think have the clearest claim to be included in the capital buffer. These reflect potential loss scenarios that are conservative but still within the domain of plausibly severe downturns in the business cycle; this would encompass the following capital ratio adjustments:

  • the 20 per cent loss given default (LGD) portfolio constraint required for residential mortgage exposures;
  • the LGD parameter for unsecured non-retail exposures;
  • credit conversion factors (CCFs) for undrawn non-retail commitments;
  • use of supervisory slotting and the scaling factor for specialised lending;
  • risk weights for other retail exposures covered by the standardised approach to credit risk; and
  • the exchange rate used to convert Euro-denominated thresholds in the Basel capital framework into Australian dollars.

The second category comprises assets which have a value if the bank is a going concern but which cannot necessarily be relied upon in non-viability scenarios, i.e.:

  • deferred tax assets arising from timing differences;
  • capitalised expenses and transaction costs; and
  • the capital requirement applied by APRA for IRRBB (I am open to arguments that I am being too harsh on IRRBB).

The third category captures capital that is committed to risks where the bank is taking a first-loss exposure, including:

  • investments in other financial institutions;
  • holdings of subordinated tranches of securitisations; and
  • investments in commercial entities.

Another way to explore this question is to map these three categories to the traditional graphical representation of a bank loss distribution and establish whether they are expected to lie:

  • closer to the middle of the loss distribution (sometimes framed as a 1-in-25-year downturn, or the kinds of losses we expect in a severe downturn); or
  • closer to the “tail” of the loss distribution (typically expressed as a 1-in-1000-year loss in regulatory capital terms).

To be clear, I am not seeking to ascribe any level of precision to these statistical probabilities; I am simply seeking to distinguish between the relative likelihood of the items of conservatism that APRA has embedded in its current measure of capital adequacy. These three categories tend to be treated as equivalent under the current approach, and enhanced disclosure under Approach 1 will do nothing to address this conflation of risks.

Question 7: Would increasing the size of capital buffers (either by increasing the CCB or by setting a non-zero baseline CCyB) relative to PCR appropriately balance capital strength with financial stability through the cycle?

I have advocated the benefits of a non-zero baseline CCyB in previous posts. One of those posts focused on the approach adopted by the Bank of England, where I identified two advantages.

Firstly, it directly addresses the problem of failing to detect or predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system, however, retains its capacity to surprise even its most expert and informed observers, so I believe it is better to have the foundations of a usable countercyclical capital buffer in place as soon as possible after the post-crisis repair phase is concluded, rather than trying to predict when it might be required.

The BOE approach still monitors a range of core indicators for the CCyB, grouped into three categories:

  • The first category includes measures of ‘non-bank balance sheet stretch’, capturing leverage in the broader economy and in the private non-financial (ie household and corporate) sector specifically.

  • The second category includes measures of ‘conditions and terms in markets’, which capture borrowing terms on new lending and investor risk appetite more broadly.

  • The third category includes measures of ‘bank balance sheet stretch’, which capture leverage and maturity/liquidity transformation in the banking system.

However, the BOE implicitly accepts that it can’t predict the future, so it substitutes a simple, pragmatic and error-resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, is simpler to administer and is less prone to error than the standard BCBS approach. The BOE might still miss the turning point but it has a head start on the problem if it does.

The BOE also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the ‘annual cyclical scenario’.

The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

I have discussed the BOE approach at length, but the Canadian supervisor has also introduced some interesting innovations, worth considering as part of this review, in the way that it uses a capital buffer to address the systemic risk of large banks.

The second reason I favour a non-zero baseline is that I believe it is likely to result in a more “usable” buffer once risk crystallises (not just systemic risk) and losses start to escalate.

In theory, the standard capital conservation buffer (CCB) introduced under Basel III can absorb any unexpected increase in losses and allow banks the time to progressively rebuild the buffer when economic conditions improve.

In practice, the upper boundary of the CCB acts as a de facto minimum requirement, such that banks face strong market pressure to immediately rebuild the buffer, potentially at material cost to shareholders.

There are no guarantees for what happens to banking systems under stress, but a flexible buffer that is sensitive to the state of the credit cycle is I think far more fit for purpose.

It is important to note that a non-zero CCyB is an option under both approaches: there is potentially enough surplus capital in the Unquestionably Strong calibration for a non-zero CCyB to be introduced without requiring banks to raise any more capital, under either of the approaches that APRA has outlined.

So a larger buffer would be desirable from the perspective of increased comparability and transparency, but the advantages of a non-zero CCyB could also be pursued under the Unquestionably Strong status quo or Approach 1.

Question 8: What may be some of the potential impacts if APRA increases the prescribed loss absorption trigger point above 5.125 per cent of RWA?

The rationale for increasing the PONV (point of non-viability) trigger is that APRA believes it is important to preserve the value of the trigger in dollar terms.

I can see that it is important to have a PONV trigger well before a bank reaches the point of insolvency (i.e. where liabilities exceed assets).

It is less clear that the reduction in the dollar value of the trigger point is sufficiently material to matter.
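
The arithmetic of the concern is simple enough (the figures below are the same invented ones used earlier, purely for illustration): 5.125 per cent of a smaller harmonised RWA base is a smaller dollar amount of CET1.

```python
# Illustrative only - invented RWA figures, not actual bank data.
trigger_pct = 0.05125
rwa_apra, rwa_harmonised = 400.0, 350.0   # $bn

print(f"Trigger under APRA RWA:       ${trigger_pct * rwa_apra:.1f}bn")        # ~$20.5bn
print(f"Trigger under harmonised RWA: ${trigger_pct * rwa_harmonised:.1f}bn")  # ~$17.9bn
```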

What really matters is the amount of contingent capital available to be converted into common equity if the PONV conversion trigger is pulled.

In the absence of this source of new capital, the fact that a bank has X billion dollars more or less of book equity (according to the financial accounts) at the point of deemed non-viability is arguably irrelevant to whether it remains a going concern.

I am also pretty sure that we do not want the operational complexity associated with a PONV trigger that moves around over time as a result of seeking to compensate for the impact of the Overlay Adjustment on capital deductions and RWA.

Do Australian banks really earn a 40% ROE on mortgage lending?

This claim features prominently in the Productivity Commission’s report on “Competition in the Australian Financial System”, but the only source I can find is a newspaper article quoting a Morgan Stanley report. Mortgage lending is clearly amongst the more profitable areas of Australian banking, but I would be surprised if the real return was as high as the Productivity Commission appears to believe. It is difficult to challenge the number given the absence of detail offered by the Productivity Commission, but here is my checklist of things which I think the calculation is missing (an illustrative sketch follows the list):

  • I suspect the capital part of the ROE calculation is based on a 25% RW, but the actual capital an IRB bank is required to hold is higher than this because IRB banks also face a capital (CET1) deduction for any shortfall between their loan loss provision and the Regulatory Expected Loss (REL) assigned to these loans. This REL deduction probably adds another 3 percentage points to the effective RW an IRB bank applies.
  • The 40% ROE is also I suspect based on the current point in time loan loss allowance (say circa 2-4bp per annum). Banks would dearly love to live in a world where the expected loan loss was always this low but the real return on equity for these loans should take account of what will happen to loan losses when the economic cycle turns down. Exactly how bad this loss rate will be is anyone’s guess but I would not be surprised if the loss rates were at least 5 times higher than the current rates and even more if we encounter another financial crisis.
  • It is also not clear to me what kind of cost to income ratio is being factored into the calculation
  • Charges for liquidity costs associated with funding these assets are also not clear
  • Finally, we have the fact that the overall ROE for banks is nowhere near 40%, even though residential mortgages dominate their balance sheets.
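
The sketch below shows how a headline ROE of this kind might be constructed and how sensitive it is to the items in the checklist. Every input (margin, cost-to-income ratio, loss rate, risk weight, capital level) is my own guess for illustration only, not a figure from the Productivity Commission, Morgan Stanley or any bank.

```python
# All inputs are illustrative guesses, not sourced figures. Liquidity and
# funding charges are ignored except to the extent they are embedded in the
# assumed net interest margin.
def mortgage_roe(net_interest_margin, cost_to_income, loss_rate,
                 risk_weight, cet1_target, tax_rate=0.30):
    """Approximate after-tax ROE on a mortgage portfolio, per $1 of loans."""
    capital = risk_weight * cet1_target                            # equity per $1 of loans
    pre_tax = net_interest_margin * (1 - cost_to_income) - loss_rate
    return pre_tax * (1 - tax_rate) / capital

# A "headline" style calculation: 25% risk weight, benign point-in-time loss rate
headline = mortgage_roe(net_interest_margin=0.015, cost_to_income=0.30,
                        loss_rate=0.0003, risk_weight=0.25, cet1_target=0.105)

# Adjusted: ~3 ppts added to the risk weight for the REL deduction and a
# through-the-cycle loss rate roughly 5x the benign rate
adjusted = mortgage_roe(net_interest_margin=0.015, cost_to_income=0.30,
                        loss_rate=0.0015, risk_weight=0.28, cet1_target=0.105)

print(f"Headline-style ROE: {headline:.0%}")
print(f"Adjusted ROE:       {adjusted:.0%}")
```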

Tell me what I am missing but something does not add up …

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck, but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck make to predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure-wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule (a minimal sketch follows this list):
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Use the data to test whether the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
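
A minimal sketch of the weighting rule, assuming the linear shrinkage form that is consistent with Mauboussin’s later point that the rate of reversion to the mean relates to the coefficient of correlation; the weights and figures are invented for illustration.

```python
# Illustrative sketch: weight specific evidence against the base rate.
# 'skill_weight' stands in for expected predictive accuracy - near 0 for
# luck-dominated activities, near 1 for skill-dominated ones.
def estimate(base_rate, observed, skill_weight):
    """Shrink the observed result toward the base rate."""
    return base_rate + skill_weight * (observed - base_rate)

# A fund manager beats the market by 6% in one year (luck-dominated, weight ~0.2)
print(estimate(base_rate=0.0, observed=0.06, skill_weight=0.2))  # expect ~1.2% to persist

# A chess player gains 100 rating points (skill-dominated, weight ~0.9)
print(estimate(base_rate=0.0, observed=100, skill_weight=0.9))   # expect ~90 to persist
```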

Figuring out where the activity sits on the skill-luck continuum is the critical first step, and Mauboussin offers three methods for undertaking this part of the process: 1) the “Three Question” approach, 2) simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. Ask whether you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict), whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both types of activity but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean. Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Ask whether there is evidence that expert prediction is useful. When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is figuring out how large a sample size you need to determine whether there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected, but also because a reliable base rate simply may not exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”

Mauboussin’s discussion of power laws does not offer this specific example, but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss, which is a fundamental component of the advanced Internal Ratings Based approach to bank capital adequacy measurement.
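
A small simulation illustrates why the “average” is so unstable for power-law-like outcomes; the distribution and parameters below are my own choices for illustration, not Mauboussin’s.

```python
# Illustrative: sample means are unstable when outcomes follow a heavy-tailed
# (power-law-like) distribution, in contrast to a bell-shaped one.
import random

random.seed(42)

def pareto_sample(alpha, n):
    """Draw n values from a Pareto distribution with tail exponent alpha."""
    return [random.paretovariate(alpha) for _ in range(n)]

for trial in range(3):
    bell_mean = sum(random.gauss(1.0, 0.2) for _ in range(10_000)) / 10_000
    heavy_mean = sum(pareto_sample(alpha=1.1, n=10_000)) / 10_000
    print(f"trial {trial}: bell-shaped mean {bell_mean:.2f}, "
          f"heavy-tailed mean {heavy_mean:.2f}")
```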

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that proves that my base belief is wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, what exactly the mean is. Note that extreme events are unlikely to be repeated and most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but also must recognise how much we don’t know. We must recognise that the realm may change such that old rules don’t apply and there are places where statistics don’t apply

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Stress Testing – Do (really) bad things happen to good banks?

This post will focus on stress testing in response to some recent papers the RBNZ released (July 2018) describing both its approach to stress testing and the outcomes from the 2017 stress test of the major banks, and a speech by Wayne Byres (APRA) which offered some detail of the Australian side of the joint stress testing undertaken by APRA and the RBNZ. I intend to make some observations related to this specific stress testing exercise but also some broader points about the ways that stress testing is currently conducted. The overriding point is that the cyclical scenarios employed to calibrate capital buffers seem to focus on “what” happened, with less consideration given to “why” the historical episodes of financial stress the scenarios mimic were so severe.

There will be technical detail in this post but the question, simply put, is to what extent do really bad things happen to good banking systems? Paraphrased in more technical language, are we calibrating for scenarios based on the impact of some random exogenous shock on a sound banking system, or does the scenario implicitly assume some systemic endogenous factors at play that made the financial system less resilient in the lead-up to the shock? Endogenous factors may be embedded in the balance sheets of the banks (e.g. poor credit quality amplified by excessive credit growth) or perhaps they are a feature of the economic system (e.g. a fixed exchange rate regime such as confronted many European economies during the GFC) that may or may not be universally relevant. I am focusing on the RBNZ stress test to explore these points mostly because they offered the most detail, but I believe their approach is very similar to APRA’s and the observations apply generally to macroeconomic stress testing.

No prizes for guessing my argument: the kinds of really severe downturns typically used to calibrate capital buffers are usually associated with conditions where endogenous forces within the banking system are a key element in explaining the extent of the asset price declines and weak recoveries, and the severity of some historical scenarios was arguably exacerbated by unhelpful exchange rate, monetary or fiscal policy settings. This is not to say that we should not be using very severe downturns to calibrate the resilience of capital buffers. My argument is simply that recognising this factor will help make more sense of how to reconcile the supervisory approach with internal stress testing and how best to respond to the consequences of such scenarios.

The RBNZ approach to stress testing

The RBNZ characterises its approach as being at the less intensive end of the spectrum of supervisory practice, so “stress tests are used to provide insights into the adequacy of bank capital buffers and can highlight vulnerabilities at the bank wide level or in its various loan portfolios” but “… the use of individual bank results in setting capital buffers and promoting market discipline is relatively limited“. The RBNZ stress tests fall into three categories: 1) cyclical scenarios, 2) exploratory stress tests and 3) targeted tests.

This post will focus on the cyclical scenario which was the focus of the RBNZ’s 2017 stress test and the place where the question of what happened and why it happened is most at risk of getting lost amongst the desire to make the test tough, coupled with the often daunting task of just running the test and getting some results.

The RBNZ states that the aim of a cyclical scenario is to help “… understand the resilience of participating banks to a macroeconomic downturn” so these scenarios “… mimic some of the worst downturns in advanced economies since World War 2, and typically feature sharp declines in economic activity and property prices, and stressed funding markets”. The repetition of the benchmark cyclical downturn scenario over time also allows the RBNZ “to track the resilience of the financial system over time (although the scenario will 

It is hard to argue with calibrating the resilience of the banking system to a very high standard of safety. That said, the concern I have with cyclical scenarios drawn from worst case historical events is that the approach tends to skip over the question of why the downturn of such severity occurred.

The RBNZ commentary does recognise the “… need to take account of the nature of the specific stress scenario” and for the cyclical scenario to “evolve based on new research and insights, such as the extent of over-valuation evident in property markets” and the possibility that “domestic monetary policy and a falling exchange rate would provide a significant buffer … that was unavailable during many of these stress episodes in countries without floating exchange rates“. “Exploratory” and “Targeted” stress testing may also be focussed on the endogenous risks embedded in the banking system without explicitly using that terminology.

So if the RBNZ, and APRA, are implicitly aware of the endogenous/exogenous risk distinction, then maybe I am just being pedantic but I would argue that greater clarity on this aspect of stress testing helps in a number of areas:

  • It can help to explain why there is often a gap between:
    • the severity of outcomes modelled internally (where the bank will probably assume its portfolios have robust credit quality and none of the systemic weaknesses that were responsible for the past episodes of severe financial stress implicit in the downturn scenario adopted by the supervisors), and
    • the severity the regulator expects (possibly based on a sceptical view of the extent to which bank management has balanced risk and return against the reward of higher growth and market share).
  • The types of recovery actions that can be deployed, and the amounts of capital they contribute to the rebuilding process, are also very much shaped by the nature of the scenario (scenarios shaped by endogenous factors embedded in the banks’ balance sheets or business models require much more substantial responses that are more costly, though the cost can be a secondary issue when the scale of the challenge is so large).
  • Supervisors rightly focus on the need for banks to maintain the supply of credit to the economy but endogenous scenarios may actually require that some customers de-gear themselves and become less reliant on bank credit.

The RBNZ discussion of the 2017 stress test of the major banks focussed on the Phase 2 results and noted the following (a simple arithmetic cross-check follows the list):

  • The four participating banks started the stress test with an aggregate CET1 ratio of 10.3% and an aggregate buffer ratio of 5.4%
  • The impact of the combined macroeconomic downturn and the operational risk event saw the aggregate CET1 ratio decline by 3.4 percentage points to 6.9% in the third year, driven in order of importance by:
    • Credit losses (including the large counter party loss) – 6.6 ppts
    • Growth in RWA – 1.4 ppts
    • Dividends and other capital deductions – 1.4 ppts
    • The operational risk event for misconduct risk – 0.7 ppts
    • Underlying profits which offset the gross decline in the CET1 ratio by 6.7 ppts to arrive at the net decline of 3.4 ppts
  • Mitigating actions improved the aggregate CET1 ratio by 1.1 ppts by year three to 8%; these actions included 1) reductions in lending, 2) additional interest rate repricing and 3) operating expense reductions.
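
Purely as an arithmetic cross-check of the aggregates quoted above, the CET1 waterfall reconciles as follows (all figures are those reported above, in percentage points of RWA).

```python
# Reconstructing the aggregate CET1 waterfall from the figures quoted above
# (percentage points of RWA; negative = reduces the ratio).
start_cet1 = 10.3
components = {
    "credit losses (incl. large counterparty loss)": -6.6,
    "RWA growth": -1.4,
    "dividends and other capital deductions": -1.4,
    "operational risk (misconduct) event": -0.7,
    "underlying profits": +6.7,
}

pre_mitigation = start_cet1 + sum(components.values())
print(f"Pre-mitigation CET1 at year three:  {pre_mitigation:.1f}%")                       # 6.9%

mitigating_actions = +1.1
print(f"Post-mitigation CET1 at year three: {pre_mitigation + mitigating_actions:.1f}%")  # 8.0%
```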

There is not a lot of detail on individual bank outcomes. In the combined scenario, individual bank CET1 ratios declined to between 6.4% and 7.4%, versus the 6.9% aggregate result. The individual buffer ratios fell to between 1.2% and 1.4% at their low points (no aggregate minimum buffer was reported).

Some observations on the outcomes of the RBNZ 2017 stress test

The fact that the major banks can maintain significant buffers above minimum capital requirements during quite severe stress scenarios offers a degree of comfort, especially when you factor in that these outcomes assume no mitigating responses. Minor quibble here, but it is worth noting that the aggregate data the RBNZ uses to discuss the stress testing results does not map neatly to the minimum requirements and capital buffers applied at the individual bank level. A 5.4 ppt buffer over the 4.5% CET1 minimum requirement equates to 9.9%, not 10.3%. Looking at Figure 1 in the “outcomes” paper also shows that there was a narrower range in the CCB at its low point than there was for the CET1 ratio, so part of the CCB decline observed in the stress test may be attributable to shortfalls at either the Tier 1 or Total Capital Ratio levels rather than CET1. Small point, but it does matter when interpreting what the results mean for the target capital structure and how to respond.

The RBNZ is clearly correct to question the reliability of mitigating actions and the potential for some actions, such as tightening of lending standards, to generate negative feedback effects on asset prices and economic activity. However, it is equally open to question whether the market confidence that banks rely on to fund themselves and otherwise conduct business would remain resilient in the face of a three-year continuous decline in capital ratios. So I do not think we can place too much confidence in the pre-mitigation outcomes alone; the mitigating responses matter just as much.

I have always thought of the capital buffer as simply “buying time” for management to recognise the problem and craft a response that addresses the core problems in the business while creating positive momentum in capital formation. The critical question in stress testing is how much time the markets will grant before they start to hold back from dealing with your bank. Markets do not necessarily expect a magic bullet, but they do expect to see positive momentum and a coherent narrative. It would also be useful to distinguish between a core set of actions that could reasonably be relied on and other actions that are less reliable or come at a higher cost to the business.

It is hard to comment on the specific mitigating actions since the paper only reports an aggregate benefit of 1.1 ppts over the 3 years but I can make the following general observations:

  • Reductions in lending: The potential for reduced lending to generate negative feedback effects on asset prices and economic activity is a very valid concern but I do struggle to reconcile a 35% decline in house prices with a scenario in which the loans the banking system has outstanding to this sector do not appear to have declined.
    • I can’t see any specific numbers in the RBNZ paper but that is the inference I draw if the overall loan book has not declined, which seems to be implied by the statement that the 20% RWA growth over the first three years of the scenario was primarily due to higher risk weights.
    • Loan principal is progressively being repaid on performing loans, but this balance sheet shrinkage is amplified in the scenario by elevated defaults, while the rate of new lending, which would otherwise be the driver of growth in loans outstanding, must be slowing if house prices are falling by such a large amount. In addition, I assume the reduced volume of new loans being written is for lower average amounts than was the case prior to the decline in house prices.
    • I am very happy to be set straight on this part of the modelling but the numbers don’t quite add up for me. If I am right then a loan book that is stable or even declining in value may be what is implied by the scenario rather than something that adds further to the stress on capital ratios. At the very least, winding back loan growth assumptions relative to the benign base case seems a reasonable response.
  • Repricing: I can’t tell from the RBNZ paper how significant this factor was in contributing to the 1.1 percentage point improvement in CET1 over the three years, but I am guessing it was reasonably material. Materiality therefore requires that the numbers be subject to a higher level of scrutiny.
    • History does offer a reasonable body of evidence that Australian and NZ banks have had the capacity to reprice loans under stress and in response to higher funding costs. The question is whether the collapse in trust in big banks has undermined the value of the repricing option they have traditionally benefited from.
    • I do believe that some of the critiques of bank repricing are not well founded, but that does not change the realpolitik of the likely public and government push-back should banks attempt to do so.
    • So the answer here is probably yes; the benefits of this particular mitigating action are likely not as reliable as they have been in the past. At the very least, there is likely to be a higher cost to using them.
  • The contribution of RWA growth to the decline in the capital ratio noted in the RBNZ paper is also worth calling out. There is not a lot of detail in the paper but it does appear that the 20% increase in RWA over the first three years of the scenario was driven primarily by an increase in the average credit RW from 45% to 54%.
    • This seems to imply that there was a significant cycle driven increase in capital requirements over the course of the scenario that was not driven by an increase in loans outstanding.
    • I believe that this kind of capital measurement driven impact on capital ratios is fundamentally different from the impact of actual losses and higher new lending but it is treated as equivalent for the purposes of the analysis. This looks to me like a category error; a decline in a capital ratio due to higher risk weights is not the same thing for the purposes of solvency as a loss due to a loan defaulting.
    • The solution probably lies in a better designed approach to counter cyclical buffers (see my post here and here for background) and the regulatory treatment of expected loss, but the stress testing analysis suffers by simply noting the outcome without going behind what that component of the decline in capital ratio actually represents.

Deposit growth under a stress scenario

I also struggled with the statement in Section 5 of the RBNZ paper that “Banks expected strong growth in retail deposits, in line with their experience during the Global Financial Crisis.”

  • This statement seems to reflect the intuitive view that bank deposits increase under adverse conditions as people sell risky assets and put their money in banks. But we also know that selling a risky asset requires someone else to buy it, so the increase in cash in the account of the seller is offset by the decrease in the account of the buyer. There was an increase in bank deposits during the GFC, but the simple “sell risky assets and put your money in the bank” story does not seem to explain why it happened.
  • So what do we know about the GFC? Firstly, big banks continued to grow their loan books, and we know that bank credit creation leads to deposit creation. The GFC was also a scenario where the collapse of securitisation markets saw lending for residential mortgages migrate back to big bank balance sheets; I think this also creates a net increase in deposits. Banks were also paying down foreign borrowings, which I think is also positive for deposit creation via the balance of payments, though this channel is murkier. We also observed money migrating from equities to property lending. The selling of the risky assets is neutral for deposits by itself, but deposit creation comes as the cash in the hands of the seller gets leveraged up to support new credit creation via the increased property loans, which are typically geared much more highly than other types of risk assets (see the stylised sketch after this list). The shift from equity to property also seems to be driven by the typical monetary policy strategy of reducing interest rates.
  • So it is not clear to me that the pool of deposits grows under the conditions of the RBNZ scenario. We do have the likelihood that people are selling risky assets, but we seem to be missing a number of the elements specific to the GFC that saw new deposits get created in the banking system. The only deposit formation positive I can see is maybe via the balance of payments but, as noted above, this channel is very murky and hard to understand.
  • The other interesting question is whether bank deposits continue to be a safe haven for New Zealanders in future crises given that the RBNZ has implemented an Open Banking Resolution regime that exposes bank deposits to the risk of being bailed-in on a pari passu basis with other unsecured bank creditors. This is a unique feature of the NZ financial system which even eschews the limited guarantees of bank deposits that many other systems see as essential to maintaining the confidence of depositors under stress.
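
To illustrate the two mechanisms discussed above, here is a stylised double-entry sketch. The balances are invented and the model is deliberately simplistic (no reserves, no payments system): selling a risky asset just moves deposits between accounts, while new bank lending creates deposits.

```python
# Stylised illustration only: asset sales shuffle deposits, lending creates them.
class SimpleBank:
    def __init__(self):
        self.loans = 0.0
        self.deposits = {}

    def open_account(self, name, balance=0.0):
        self.deposits[name] = balance

    def transfer(self, payer, payee, amount):
        """Buyer pays seller for a risky asset: total deposits unchanged."""
        self.deposits[payer] -= amount
        self.deposits[payee] += amount

    def make_loan(self, borrower, amount):
        """New lending credits the borrower's account: total deposits rise."""
        self.loans += amount
        self.deposits[borrower] += amount

    def total_deposits(self):
        return sum(self.deposits.values())


bank = SimpleBank()
bank.open_account("seller", 0.0)
bank.open_account("buyer", 100.0)
bank.open_account("borrower", 0.0)

bank.transfer("buyer", "seller", 100.0)   # sale of a risky asset
print(bank.total_deposits())              # 100.0 -> unchanged

bank.make_loan("borrower", 80.0)          # new property lending
print(bank.total_deposits())              # 180.0 -> deposits created
```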

I may well be missing something here, so I am very happy to hear the other side to any of the observations I have offered above. I am a big believer in the value of stress testing, which is why I think it is so important to get it right.

Tony

Minsky’s Financial Instability Hypothesis – Applications in Stress Testing?

One of the issues that we keep coming back to in stress testing is whether 1) the financial system is inherently prone to instability and crisis, or 2) the system naturally tends towards equilibrium and instability is due to external shocks. Any stress scenario that we design, or that we are asked to model, will fall somewhere along this spectrum, though I suspect most scenarios tend to be based on exogenous shocks. This touches on a long-standing area of economic debate and hence is not something that we can expect to resolve any time soon. I think it is useful, however, to consider the question when conducting stress testing and evaluating the outcomes.

From roughly the early 1980s until the GFC in 2008, the dominant economic paradigm was arguably that market forces, coupled with monetary and fiscal policy built on a sound understanding of how the economy works, meant that the business cycle was dead and that the primary challenge of policy was to engineer efficient capital allocations that maximised growth. The GFC obviously highlighted shortcomings with the conventional economic approach and drew attention to an alternative approach developed by Hyman Minsky, which he labelled the Financial Instability Hypothesis.

Minsky’s Financial Instability Hypothesis (FIH)

Minsky focused on borrowing and lending with varying margins of safety as a fundamental property of all capitalist economies and identified three forms:

  • “Hedge” financing under which cash flow covers the repayment of principal and interest
  • “Speculative” financing under which cash flow covers interest but the principal repayments must be continually refinanced
  • “Ponzi” financing under which cash flow is insufficient to cover either interest or principal and the borrower is betting that appreciation in the value of the asset being financed will be sufficient to repay loan principal plus capitalised interest and generate a profit

The terms that Minsky uses do not strictly conform to modern usage but his basic idea is clear: increasingly speculative lending tends to be associated with increasing fragility of borrowers and of the financial system as a whole. Ponzi financing is particularly problematic because the system is vulnerable to external shocks that can restrict access to finance or trigger an asset devaluation cycle as borrowers sell their assets in order to reduce their leverage. The downward pressure on asset prices associated with the deleveraging process then puts further pressure on the capacity to repay the loans, and so on.
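
A minimal sketch of the taxonomy, purely to make the definitions concrete; the thresholds and field names are mine, not Minsky’s.

```python
# Illustrative classification of a borrower into Minsky's three financing forms.
def minsky_category(cash_flow, interest_due, principal_due):
    """Classify a borrower by whether cash flow covers its debt service."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # cash flow covers interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # interest covered, principal must be refinanced
    return "ponzi"            # relies on asset appreciation to service the debt

print(minsky_category(cash_flow=120, interest_due=50, principal_due=60))  # hedge
print(minsky_category(cash_flow=70,  interest_due=50, principal_due=60))  # speculative
print(minsky_category(cash_flow=30,  interest_due=50, principal_due=60))  # ponzi
```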

The term “Minsky moment” has been used to describe the inflexion point where debt levels become unsustainable and asset prices fall as investors seek to deleverage. Investor psychology is obviously one of the primary drivers of this three-stage cycle; investor optimism translates to a willingness to borrow and to pay more for assets, and the higher asset valuations in turn allow lenders to lend more against set loan-to-valuation caps. Lenders can also be caught up in the mood of optimism and take on more risk (e.g. via higher Loan to Valuation Ratio limits or higher debt service coverage ratios). Minsky stated that “the fundamental assertion of the financial instability hypothesis is that the financial structure evolves from being robust to being fragile over a period in which the economy does well” (Financial Crises: Systemic or Idiosyncratic, Hyman Minsky, April 1991, p16).

It should also be noted that a Minsky moment does not require an external shock; a simple change in investor outlook or risk tolerance could be sufficient to trigger the reversal. Minsky observed that the tendency of the endogenous process he described to lead to systemic fragility and instability is constrained by institutions and interventions that he described as “thwarting systems” (“Market Processes and Thwarting Systems”, P. Ferri and H. Minsky, November 1991, p2). However, Minsky’s FIH also assumes that there is a longer term cycle in which these constraints are gradually wound back, allowing more and more risk to accumulate in the system over successive business cycles.

What Minsky describes is similar to the idea of a long-term “financial cycle” (25 years plus) being distinct from the shorter-duration “business cycle” (typically 7-10 years) – refer to this post “The financial cycle and macroeconomics: What have we learnt?” for more detail. An important feature of this longer term financial cycle is a process that gradually transforms the business institutions, decision-making conventions, and structures of market governance, including regulation, which contribute to the stability of capitalist economies.

The transformation process can be broken down into two components:

  1. winding back of regulation and
  2. increased risk taking

which in combination increase both the supply of and demand for risk. The process of regulatory relaxation can take a number of forms:

  • One dimension is regulatory capture, whereby the institutions designed to regulate and reduce excessive risk-taking are captured and weakened.
  • A second dimension is regulatory relapse; reduced regulation may be justified on the rationale that things have changed and regulation is no longer needed, but there is often an ideological foundation, typically based on economic theory (e.g. the “Great Moderation” or market discipline underpinning self-regulation).
  • A third dimension is regulatory escape, whereby the supply of risk is increased through financial innovation that escapes the regulatory net because the new financial products and practices were not conceived of when existing regulation was written.

Borrowers also take on more risk for a variety of reasons:

  • First, financial innovation provides new products that allow borrowers to take on more debt or which embed higher leverage inside the same nominal value of debt.
  • Second, market participants are also subject to gradual memory loss that increases their willingness to take on risk

The changing taste for risk is also evident in cultural developments which can help explain the propensity for investors to buy shares or property. A greater proportion of the population currently invest in shares than was the case for their parents or grandparents. These individual investors are actively engaged in share investing in a way that would be unimaginable for the generations that preceded them. Owning your own home and ideally an investment property as well is an important objective for many Australians but less important in say Germany.

These changes in risk appetite can also weaken market-discipline-based constraints against excessive risk-taking. A book titled “The Origin of Financial Crises” by George Cooper (April 2008) is worth reading if you are interested in the ideas outlined above. A collection of Minsky’s papers can also be found here if you are interested in exploring his thinking more deeply.

I have been doing a bit of research lately both on the question of what exactly Expected Loss “expects” and on the ways in which cycle downturns are defined. I may be missing something, but I find this distinction between endogenous and exogenous factors largely missing from the discussion papers I have found so far and from stress testing itself. I would greatly appreciate some suggestions if anyone has come across any good material on the issue.

Tony

Canada innovates in the capital buffer space

The Canadian prudential regulator (OSFI) has made an interesting contribution to the capital buffer space via its introduction of a Domestic Stability Buffer (DSB).

Key features of the Domestic Stability Buffer:

  • Applies only to Domestic Systemically Important Banks (D-SIBs) and is intended to cover a range of systemic vulnerabilities not captured by the Pillar 1 requirement
  • Vulnerabilities currently covered by the buffer include (i) Canadian consumer indebtedness; (ii) asset imbalances in the Canadian market; and (iii) Canadian institutional indebtedness
  • Replaces a previously undisclosed Pillar 2 loading associated with this class of risks (individual banks may still be required to hold a Pillar 2 buffer for idiosyncratic risks)
  • Initially set at 1.5% of Total RWA and will be in the range of 0 to 2.5%
  • Reviewed semi-annually (June and December), with the option to change it more frequently in exceptional circumstances
  • Increases phased in while decreases take effect immediately

Implications for capital planning:

  • DSB supplements the Pillar 1 buffers (Capital Conservation Buffer, D-SIB surcharge and the Countercyclical Buffer)
  • Consequently, the DSB will not result in banks being subject to the automatic constraints on capital distributions that apply when the Pillar 1 buffers are breached (a stylised sketch of this distinction follows below)
  • Banks will be required to disclose that the buffer has been breached and OSFI will require a remediation plan to restore the buffer
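
To make the distinction concrete, here is a minimal sketch in Python of how the DSB sits on top of the Pillar 1 buffer stack for a hypothetical D-SIB. The DSB range and initial setting come from the list above; the minimum requirement, CCB, D-SIB surcharge and CCyB settings are illustrative assumptions of mine rather than OSFI calibrations, and the classification logic is a simplification of the actual rules.

```python
# Stylised CET1 requirement stack for a hypothetical Canadian D-SIB.
# The DSB range (0-2.5%) and initial setting (1.5%) are from the list
# above; the other settings are illustrative assumptions, not OSFI's,
# and the classification logic is a simplification of the actual rules.

PILLAR1_MIN = 4.5        # CET1 minimum (assumed)
CCB = 2.5                # Capital Conservation Buffer (assumed)
DSIB_SURCHARGE = 1.0     # D-SIB surcharge (assumed)
CCYB = 0.0               # Countercyclical Buffer (assumed not activated)
DSB = 1.5                # Domestic Stability Buffer (initial setting)

def assess(cet1_ratio: float) -> str:
    """Classify a CET1 ratio against the stylised buffer stack."""
    pillar1_top = PILLAR1_MIN + CCB + DSIB_SURCHARGE + CCYB
    dsb_top = pillar1_top + DSB
    if cet1_ratio < pillar1_top:
        # Inside the Pillar 1 buffers: automatic constraints on
        # distributions (dividends, buybacks, bonuses) kick in.
        return "Pillar 1 buffer breach - automatic distribution constraints"
    if cet1_ratio < dsb_top:
        # Inside the DSB only: disclosure and a remediation plan,
        # but no automatic distribution constraints.
        return "DSB breach - disclose and submit a remediation plan"
    return "Above all buffers - no constraint"

for ratio in (7.5, 9.0, 10.5):
    print(f"CET1 {ratio:.1f}%: {assess(ratio)}")
```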

What is interesting:

  • OSFI argues that translating the existing Pillar 2 requirement into an explicit buffer offers greater transparency which in turn “… will support banks’ ability to use this capital buffer in times of stress by increasing the market’s understanding of the purpose of the buffer and how it should be used”
  • I buy OSFI’s rationale for why an explicit buffer with a clear narrative is a more usable capital tool than an undisclosed Pillar 2 requirement with the same underlying rationale
  • OSFI retains a separate Countercyclical Buffer, but the Domestic Stability Buffer seems (to me at least) similar, though not identical, in its overriding purpose to the approach that the Bank of England (BoE) has adopted for managing its Countercyclical Buffer.
  • A distinguishing feature of both the BoE and OSFI approaches is linking the buffer to a simple, coherent narrative that makes the buffer more usable by virtue of creating clear expectations of the conditions under which it can be used.

Bottom line is that I see useful features in both the BoE and OSFI approaches to dealing with the inherent cyclicality of banking. I don’t see either proposal doing much to mitigate that cyclicality, but I do see them offering more potential for managing its consequences. Both approaches seem to me to offer material improvements over the Countercyclical Buffer as originally conceived by the BCBS.

It will be interesting to see if APRA chooses to adapt elements of this countercyclical approach to bank capital requirements.

If I am missing something, please let me know …

From the Outside

The answer is more loan loss provisions, what was the question?

I had been intending to write a post on the potential time bomb for bank capital embedded in IFRS 9 but Adrian Docherty has saved me the trouble. He recently released an update on IFRS 9 and CECL titled Much Ado About Nothing or Après Moi, Le Déluge?

This post is fairly technical so feel free to stop here if you are not a bank capital nerd. However, if you happen to read someone saying that IFRS 9 solves one of the big problems encountered by banks during the GFC then be very sceptical. Adrian (and I) believe that is very far from the truth. For those not discouraged by the technical warning, please read on.

The short version of Adrian’s note is:

  • The one-off transition impact of the new standard is immaterial and the market has  largely ignored it
  • Market apathy will persist until stressed provisions are observed
  • The dangers of ECL provisioning (procyclical volatility, complexity and subjectivity) have been confirmed by the authorities …
  • … but criticism of IFRS 9 is politically incorrect since the “correct” narrative is that earlier loan loss provisioning fulfils the G20 mandate to address the problem encountered during the GFC
  • Regulatory adaptation has been limited to transition rules, which are not a solution. We need a fundamentally revised Basel regime – “Basel V” – in which lifetime ECL provisions somehow offset regulatory capital requirements.

Adrian quotes at length from Bank of England (BoE) commentary on IFRS 9. He notes that their policy intention is that the loss absorbing capacity of the banking system is not impacted by the change in accounting standards but he takes issue with the way that they have chosen to implement this policy approach. He also calls out the problem with the BoE instruction that banks should assume “perfect foresight” in their stress test calculations.

Adrian also offers a very useful deconstruction of what the European Systemic Risk Board had to say in a report it published in July 2017. He has created a table that sets out what the report says in one column and what it means in another (see page 8 of Adrian’s note).

This extract from Adrian’s note calls into question whether the solution developed is actually what the G20 asked for …

“In official documents, the authorities still cling to the assertion that ECL provisioning is good for financial stability “if soundly implemented” or “if properly applied”. They claim that the new standard “means that provisions for potential credit losses will be made in a timely way”. But what they want is contrarian, anti-cyclical ECL provisioning. This is simply not possible, in part because of human psychology but, more importantly, because the standard requires justifiable projections based on objective, consensual evidence.

Surely the authorities know they are wrong? Their arguments don’t stack up.

They hide behind repeated statements that the G20 instructed them to deliver ECL provisioning, whereas a re-read of the actual instructions clearly shows that a procyclical, subjective and complex regime was not what was asked for.

It just doesn’t add up.”

There is of course no going back at this point, so Adrian (rightly I think) argues that the solution lies in a change to banking regulation to make Basel compatible with ECL provisioning. I will quote Adrian at length here

 “So the real target is to change banking regulation, to make Basel compatible with ECL provisioning. Doing this properly would constitute a genuine “Basel V”. Yes, the markets would still need to grapple with complex and misleading IFRS 9 numbers to assess performance. But if the solvency calculation could somehow adjust properly for ECL provisions, then solvency would be stronger and less volatile.

And, in an existential way, solvency is what really matters – it’s the sine qua non of a bank. Regulatory solvency drives the ability of a bank to grow the business and distribute capital. Accounting profit matters less than the generation of genuinely surplus solvency capital resources.

Basel V should remove or resolve the double count between lifetime ECL provisions and one-year unexpected loss (UL) capital resources. There are many different ways of doing this, for example:

A. Treat “excess provisions” (the difference between one-year ECL and lifetime ECL for Stage 2 loans) as CET1

B. Incorporate expected future margin as a positive asset, offsetting the impact of expected future credit losses

C. Reduce capital requirements by the amount of “excess provisions” (again, the difference between one-year ECL and lifetime ECL for Stage 2 loans) maybe with a floor at zero

D. Reduce minimum regulatory solvency ratios for banks with ECL provisioning (say, reducing the Basel 8% minimum capital ratio requirement to 4%)

All of these seem unpalatable at first sight! To get the right answer, there is a need to conduct a fundamental rethink. Sadly, there is no evidence that this process has started. The last time that there was good thinking on the nature of capital from Basel was some 17 years ago. It’s worth re-reading old papers to remind oneself of the interaction between expected loss, unexpected loss and income.  The Basel capital construct needs to be rebuilt to take into account the drastically different meaning of the new, post-IFRS 9 accounting equity number.”
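
To see what is at stake in the options Adrian lists, here is a stylised numerical sketch (Python, with invented portfolio numbers) of the “excess provisions” concept, i.e. the difference between one-year ECL and lifetime ECL on Stage 2 loans, and of how it might either be added back to CET1 (option A) or netted off the capital requirement with a floor at zero (option C). This is my reading of the options, not a description of any actual or proposed regulatory treatment.

```python
# Stylised illustration of the "excess provisions" idea in options A
# and C above. All figures are invented for illustration.

one_year_ecl = 40.0        # $m, 12-month ECL on the Stage 2 book
lifetime_ecl = 150.0       # $m, lifetime ECL actually provisioned under IFRS 9
excess_provisions = max(lifetime_ecl - one_year_ecl, 0.0)

# Hypothetical capital position after the IFRS 9 provisions have
# already been deducted from retained earnings.
cet1 = 900.0               # $m
rwa = 10_000.0             # $m
capital_requirement = 0.08 * rwa      # Basel 8% of RWA, unadjusted

print(f"Excess provisions: ${excess_provisions:.0f}m")

# Option A - treat the excess provisions as CET1
cet1_option_a = cet1 + excess_provisions
print(f"Option A CET1 ratio: {cet1_option_a / rwa:.2%} "
      f"(vs {cet1 / rwa:.2%} unadjusted)")

# Option C - reduce the capital requirement by the excess provisions,
# with a floor at zero
req_option_c = max(capital_requirement - excess_provisions, 0.0)
print(f"Option C requirement: ${req_option_c:.0f}m "
      f"(vs ${capital_requirement:.0f}m unadjusted)")
```

The toy numbers simply show that the two routes act on different parts of the calculation, option A on the numerator and option C on the requirement, while the underlying IFRS 9 provisioning is left untouched.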

Hopefully this post will encourage you to read Adrian’s note and to recognise that IFRS 9 is not the cycle mitigating saviour of banking it is represented to be. The core problem is not so much with IFRS 9 itself (though its complexity and subjectivity are issues) but that bank capital requirements are not constructed in a way that compensates for the inherent cyclicality of the banking industry. The ideas Adrian has listed above are potentially part of the solution, as is revisiting the way the Countercyclical Capital Buffer is intended to operate.

From the Outside

 

Worth Reading “The Money Formula” by Paul Wilmott and David Orrell.

The full title of this book, co-written by Paul Wilmott and David Orrell, is “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took over the Markets“. There are plenty of critiques of modelling and quantitative finance by outsiders throwing rocks but Wilmott is a quant and brings an insider’s technical knowledge to the question of what these tools can do, can’t do and perhaps most importantly should not be used to do. Consequently, the book offers a more nuanced perspective on the strengths and limitations of quantitative finance as opposed to the let’s scrap the whole thing school of thought. I have made some more detailed notes which follow the structure of the book but this post focuses on a couple of ideas I found especially interesting or useful.

I am not a quant so my comments should be read with that in mind, but the core idea I took away is that, much as quants would want it otherwise, markets are not governed by fundamental laws, deterministic or probabilistic, that allow risk to be measured with precision. These ideas work reasonably well within their “zone of validity” but a more complete answer (or model) has to recognise where the zones stop and uncertainty rules. Wilmott and Orrell argue market outcomes are better thought of as the “emergent result of complex transactions”. The role of money in these emergent results is especially important, as is the capacity of models themselves to materially reshape the risk of the markets they are attempting to measure.

The Role of Money

Some quotes I have drawn from Chapter 8 let the authors speak for themselves on the role of money …

“Consider …. the nature of money. Standard economic definitions of money concentrate on its roles as a “medium of exchange,” a “store of value,” and a “unit of account.” Economists such as Paul Samuelson have focused in particular on the first, defining money as “anything that serves as a commonly accepted medium of exchange.” … Money is therefore not something important in itself; it is only a kind of token. The overall picture is of the economy as a giant barter system, with money acting as an inert facilitator.” (emphasis added)

“However … money is far more interesting than that, and actually harbors its own kind of lively, dualistic properties. In particular, it merges two things, number and value, which have very different properties: number lives in the abstract, virtual world of mathematics, while valued objects live in the real world. But money seems to be an active part of the system. So ignoring it misses important relationships. The tension between these contradictory aspects is what gives money its powerful and paradoxical qualities.” (emphasis added)

“The real and the virtual become blurred, in physics or in finance. And just as Newtonian theories break down in physics, so our Newtonian approach to money breaks down in economics. In particular, one consequence is that we have tended to take debt less seriously than we should.” (emphasis added)

“Instead of facing up to the intrinsically uncertain nature of money and the economy, relaxing some of those tidy assumptions, accepting that markets have emergent properties that resist reduction to simple laws, and building a new and more realistic theory of economics, quants instead glommed on to the idea that, when a system is unpredictable, you can just switch to making probabilistic predictions.” (emphasis added)

“The efficient market hypothesis, for example, was based on the mechanical analogy that markets are stable and perturbed randomly by the actions of atomistic individuals. This led to probabilistic risk-analysis tools such as VaR. However, in reality, the “atoms” are not independent, but are closely linked … The result is the non-equilibrium behaviour … observed in real markets. Markets are unpredictable not because they are efficient, but because of a financial version of the uncertainty principle.” (emphasis added)

 The Role of Models

Wilmott & Orrell devote a lot of attention to the ways in which models no longer just describe, but start to influence, the markets being modelled, mostly by encouraging people to take on more risk based in part on a false sense of security …

“Because of the bankers’ insistence on treating complex finance as a university end-of-term exam in probability theory, many of the risks in the system are hidden. And when risks are hidden, one is led into a false sense of security. More risk is taken so that when the inevitable happens, it is worse than it could have been. Eventually the probabilities break down, disastrous events become correlated, the cascade of dominoes is triggered, and we have systemic risk …. None of this would matter if the numbers were small … but the numbers are huge” (Chapter 10 – emphasis added)

They see High Frequency Trading as the area most likely to give rise to a future systemic crisis but also make a broader point about the tension between efficiency and resilience …

“With complex systems, there is usually a trade-off between efficiency and robustness …. Introducing friction into the system – for example by putting regulatory brakes on HFT – will slow the markets, but also make them more transparent and reliable. If we want a more robust and resilient system then we probably need to agree to forego some efficiency” (Chapter 10 – emphasis added)

The Laws of Finance

Wilmott and Orrell note the extent to which finance has attempted to identify laws which are analogous to the laws of physics and the ways in which these “laws” have proved to be more of a rough guide.

 “… the “law of supply and demand” …states that the market for a particular product has a certain supply, which tends to increase as the price goes up (more suppliers enter the market). There is also a certain demand for the product, which increases as the price goes down.”

“… while the supply and demand picture might capture a general fuzzy principle, it is far from being a law. For one thing, there is no such thing as a stable “demand” that we can measure independently – there are only transactions.”

“Also, the desire for a product is not independent of supply, or other factors, so it isn’t possible to think of supply and demand as two separate lines. Part of the attraction of luxury goods – or for that matter more basic things, such as housing – is exactly that their supply is limited. And when their price goes up, they are often perceived as more desirable, not less.” (emphasis added)

This example is relevant for banking systems (such as Australia’s) where residential mortgage lending dominates bank balance sheets, even more so given that public debate about the risk associated with housing often seems to be predicated on the economics 101 version of the laws of supply and demand.

The Power (and Danger) of Ideas

A recurring theme throughout the book is the ways in which economists and quants have borrowed ideas from physics without recognising the limitations of the analogies and assumptions they have relied on to do so. Wilmott and Orrell credit Sir Isaac Newton as one of the inspirations behind Adam Smith’s idea of the “Invisible Hand” co-ordinating the self-interested actions of individuals for the good of society. When the quantum revolution saw physics embrace a probabilistic approach, economists followed.

I don’t think Wilmott and Orrell make this point directly but a recurring thought while reading the book was the power of ideas not just to interpret the underlying reality but also to shape the way the economy and society develop, not always for the better.

  • Economic laws that drive markets towards equilibrium as their natural state
  • The “invisible hand” operating in markets to reconcile individual self interest with optimal outcomes for society as a whole
  • The Efficient Market Hypothesis as an explanation for why markets are unpredictable

These ideas have widely influenced quantitative finance in a variety of domains and they all contribute useful insights; the key is to not lose sight of their zone of validity.

“… Finance … took exactly the wrong lesson from the quantum revolution. It held on to its Newtonian, mechanistic, symmetric picture of an intrinsically stable economy guided to equilibrium by Adam Smith’s invisible hand. But it adopted the probabilistic mathematics of stochastic calculus.” (emphasis added, Chapter 8)

Where to from here?

It should be obvious by now that the authors are arguing that risk and reward cannot be reduced to hard numbers in the way that physics has used similar principles and tools to generate practical insights into how the world works. Applying a bit of simple math in finance seems to open the door to getting some control over an unpredictable world and, even better, to pursuing optimisation strategies that allow the cognoscenti to fine-tune the balance between risk and reward. There is room for more complex math as well for those so inclined, but the book sides with the increasingly widely held view that simple math is enough to get you into trouble and further complexity is best avoided if possible.

Wilmott and Orrell highlight mathematical biology in general, and a book by Jim Murray on the topic, as a source of better ways to approach many of the more difficult modelling challenges in finance and economics. They start by listing a series of phenomena in biological models that seem to be useful analogues for what happens in financial markets. They concede that the models used in mathematical biology are almost all “toy” models. None of these models offers precise or determined outcomes but all can be used to explain what is happening in nature and to offer insights into solutions for problems like disease control, epidemics and conservation.

The approach they advocate seems to have a lot in common with the Agent Based Modelling approach that Andrew Haldane references (see his paper “Tails of the Unexpected“) and that is the focus of Bookstaber’s book (“The End of Theory”).

In their words …

“Embrace the fact that the models are toy, and learn to work within any limitations.”

“Focus more attention on measuring and managing resulting model risk, and less time on complicated new products.”

“… only by remaining both skeptical and agile can we learn. Keep your models simple, but remember they are just things you made up, and be ready to update them as new information comes in.”

I fear I have not done the book justice but I got a lot out of it and can recommend it highly.

 

 

The financial cycle and macroeconomics: What have we learnt? BIS Working Paper

Claudio Borio at the BIS wrote an interesting paper exploring the “financial cycle”. This post seeks to summarise the key points of the paper and draw out some implications for bank stress testing (the original paper can be found here).  The paper was published in December 2012, so its discussion of the implications for macroeconomic modelling may be dated but I believe it continues to have some useful insights for the challenges banks face in dealing with adverse economic conditions and the boundary between risk and uncertainty.

Key observations Borio makes regarding the Financial Cycle

The concept of a “business cycle”, in the sense of there being a regular occurrence of peaks and troughs in business activity, is widely known but the concept of a “financial cycle” is a distinct variation on this theme that is possibly less well understood. Borio states that there is no consensus definition but he uses the term to

“denote self-reinforcing interactions between perceptions of value and risk, attitudes towards risk and financing constraints, which translate into booms followed by busts. These interactions can amplify economic fluctuations and possibly lead to serious financial distress and economic disruption”.

This definition is closely related to the concept of “procyclicality” in the financial system and should not be confused with a generic description of cycles in economic activity and asset prices. Borio does not use these words but I have seen the term “balance sheet recession” employed to describe much the same phenomenon as Borio’s financial cycle.

Borio identifies five features that describe the Financial Cycle

  1. It is best captured by the joint behaviour of credit and property prices – these variables tend to closely co-vary, especially at low frequencies, reflecting the importance of credit in the financing of construction and the purchase of property.
  2. It is much longer, and has a much larger amplitude, than the traditional business cycle – the business cycle involves frequencies from 1 to 8 years whereas the average length of the financial cycle is longer; Borio cites a cycle length of 16 years in a study of seven industrialised economies and I have seen other studies indicating a longer cycle (with more severe impacts).
  3. It is closely associated with systemic banking crises which tend to occur close to its peak.
  4. It permits the identification of the risks of future financial crises in real time and with a good lead – Borio states that the most promising leading indicators of financial crises are based on simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices, especially property prices, from historical norms.
  5. And it is highly dependent on the financial, monetary and real-economy policy regimes in place (e.g. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy).

Macroeconomic modelling

Borio also argues that the conventional models used to analyse the economy are deficient because they do not capture the dynamics of the financial cycle. These extracts capture the main points of his critique:

“The notion… of financial booms followed by busts, actually predates the much more common and influential one of the business cycle …. But for most of the postwar period it fell out of favour. It featured, more or less prominently, only in the accounts of economists outside the mainstream (eg, Minsky (1982) and Kindleberger (2000)). Indeed, financial factors in general progressively disappeared from macroeconomists’ radar screen. Finance came to be seen effectively as a veil – a factor that, as a first approximation, could be ignored when seeking to understand business fluctuations … And when included at all, it would at most enhance the persistence of the impact of economic shocks that buffet the economy, delaying slightly its natural return to the steady state …”

“Economists are now trying hard to incorporate financial factors into standard macroeconomic models. However, the prevailing, in fact almost exclusive, strategy is a conservative one. It is to graft additional so-called financial “frictions” on otherwise fully well behaved equilibrium macroeconomic models, built on real-business-cycle foundations and augmented with nominal rigidities. The approach is firmly anchored in the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) paradigm.”

“The purpose of this essay is to summarise what we think we have learnt about the financial cycle over the last ten years or so in order to identify the most promising way forward…. The main thesis is that …it is simply not possible to understand business fluctuations and their policy challenges without understanding the financial cycle”

There is an interesting discussion of the public policy implications (prudential, fiscal and monetary) of recognising the role of the financial cycle but I will focus on what it may mean for bank management in general and stress testing in particular.

Insights and questions we can derive from the paper

The observation that financial crises are based on simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices, especially property prices, from historical norms covers much the same ground as the Basel Committee’s Countercyclical Capital Buffer (CCyB) and is something banks would already monitor as part of the ICAAP. The interesting question the paper poses for me is the extent to which stress testing (and ICAAP) should focus on a “financial cycle” style disruption as opposed to a business cycle event. Even more interesting is the question of whether the higher severity of the financial cycle is simply an exogenous random variable or an endogenous factor that can be attributed to excessive credit growth. 
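
As an aside, the buffer guide the BCBS published for the CCyB is built on exactly this kind of deviation measure: the credit-to-GDP ratio relative to its long-run trend, with the gap mapped to a buffer add-on. The sketch below (Python) shows the calculation as I understand the published guidance, using a one-sided trend (approximated here by re-running a standard HP filter on expanding windows with the BCBS smoothing parameter of 400,000) and a linear mapping from a 2 percentage point gap (0% buffer) to a 10 percentage point gap (2.5% buffer). The data are random placeholders rather than actual credit aggregates.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def credit_to_gdp_gap(ratio, lamb=400_000, min_obs=20):
    """One-sided credit-to-GDP gap: each point uses only data up to that date,
    approximated by re-running a standard (two-sided) HP filter on expanding
    windows and keeping the final trend value each time."""
    gap = np.full(len(ratio), np.nan)
    for t in range(min_obs, len(ratio)):
        _, trend = hpfilter(ratio[: t + 1], lamb=lamb)
        gap[t] = ratio[t] - np.asarray(trend)[-1]
    return gap

def ccyb_guide(gap_pp, lower=2.0, upper=10.0, max_buffer=2.5):
    """Map the gap (percentage points) to a buffer add-on: 0% at or below a
    2pp gap, 2.5% at or above 10pp, linear in between."""
    if np.isnan(gap_pp) or gap_pp <= lower:
        return 0.0
    if gap_pp >= upper:
        return max_buffer
    return max_buffer * (gap_pp - lower) / (upper - lower)

# Placeholder data: a quarterly credit-to-GDP ratio (%) with a late-cycle boom.
rng = np.random.default_rng(0)
ratio = 120 + np.cumsum(rng.normal(0.3, 0.5, 120))   # gentle upward drift
ratio[-12:] += np.linspace(0, 8, 12)                 # credit boom at the end

gap = credit_to_gdp_gap(ratio)
print(f"Latest gap: {gap[-1]:.1f}pp -> CCyB guide: {ccyb_guide(gap[-1]):.2f}%")
```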

I think the question of endogeneity matters because it has implications for how banks calibrate their overall risk appetite. The severity of the downturns employed in stress testing has in my experience gradually increased over successive iterations. My recollection is that this has partly been a response to prudential stress tests which were more severe in some respects than might have been determined internally. In the absence of any objective absolute measure of what was severe, it probably made sense to turn up the dial on severity in places to align the internal benchmark scenarios as far as possible with prudential benchmarks such as the “Common Scenario” APRA employs.

At the risk of a gross oversimplification, I think that banks started the stress testing process looking at both moderate downturns (e.g. 7-10 year frequency and relatively short duration) and severe recessions (say a 25 year cycle, though still a relatively short duration downturn). Bank supervisors, in contrast, have tended to focus more on severe recession and financial cycle style scenarios with more extended durations. Banks have progressively shifted their attention to scenarios that are more closely aligned to the severe recession assumed by supervisors, in part because moderate recessions tend to be fairly manageable from a capital management perspective.

Why does the distinction between the business cycle and the financial cycle matter?

Business cycle fluctuations (in stress testing terms a “moderate recession”) are arguably an inherent feature of the economy that occur largely independently of the business strategy and risk appetite choices that banks make. However, Borio’s analysis suggests that the decisions that banks make (in particular the rate of growth in credit relative to growth in GDP and the extent to which the extension of bank credit contributes to inflated asset values) do contribute to the risk (i.e. probability, severity and duration) of a severe financial cycle style recession. 

Borio’s analysis also offers a way of thinking about the nature of the recovery from a recession. A moderate business cycle style recession is typically assumed to be short with a relatively quick recovery whereas financial cycle style recessions typically persist for some time. The more drawn out recovery from a financial cycle style recession can be explained by the need for borrowers to deleverage and repair their balance sheets as part of the process of addressing the structural imbalances that caused the downturn.

If the observations above are true, then they suggest a few things to consider:

  • should banks explore a more dynamic approach to risk appetite limits that incorporates the metrics identified by Borio (which are also used in the calibration of the CCyB) so that the level of risk they are willing to take adjusts for where they believe they are in the cycle (and which kind of cycle they are in)?
  • how should banks think about these more severe financial cycle losses? Their measure of Expected Loss should clearly incorporate the losses expected from business cycle style moderate recessions occurring once every 7-10 years but it is less clear that the kinds of more severe and drawn out losses expected under a Severe Recession or Financial Cycle downturn should be part of Expected Loss.

A more dynamic approach to risk appetite gets us into some interesting game theory puzzles because a decision by one bank to pull back on risk appetite potentially allows competitors to benefit by writing more business, and to benefit doubly to the extent that the pull-back makes it safer for them to write that business without fear of a severe recession (in technical economist speak, a “collective action” problem). This is similar to the problem APRA faced when it decided to impose “speed limits” on certain types of lending in 2017. The Royal Commission was not especially sympathetic to the strategic bind banks face but I suspect that APRA understands the problem.

How do shareholders think about these business and financial cycle losses? Some investors will adopt a “risk on-risk off” approach in which they attempt to predict the downturn and trade in and out based on that view, while other “buy and hold” investors (especially retail) may be unable or unwilling to adopt a trading approach.

The dependence of the financial cycle on the fiscal and monetary policy regimes in place, and on changes in the real economy, also has potential implications for how banks think about the risk of adverse scenarios playing out. Many of the factors that Borio argues have contributed to the financial cycle (i.e. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy) are reversing: regulation of banks is much more restrictive, monetary policy appears to have recognised the limitations of a narrow inflation target focus and the pace of globalisation appears to be slowing in response to a growing concern that its benefits are not shared equitably. I am not sure exactly what these changes mean other than to recognise that they should in principle have some impact. At a minimum it seems that the pace of credit expansion might be slower in the coming decades than it has been in the past 30 years.

All in all, I find myself regularly revisiting this paper, referring to it or employing the distinction between the business and financial cycle. I would recommend it to anyone interested in bank capital management. 

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
Haldane and Nelson highlight a semantic shift in the 1870’s where the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe. 
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed.
Having become an article of faith, it was deemed inappropriate to question the faith.
As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”

Notwithstanding its widespread use today, in Haldane and Nelson’s account economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised and Haldane cites the rise of probability-based quantum physics over old school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

” … in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”
“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken”
This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does, however, raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn are hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can, for example, talk about moderate downturns every 7-10 years, more severe recessions every 25-30 years and a 75 year cycle for financial crises. However, the data are obviously sparse so they do not allow the kind of precision that is part and parcel of normally distributed events.
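
To make the estimation problem concrete, the toy calculation below (Python) builds a frequency-weighted “through the cycle” expected loss from the kind of cycle frequencies discussed above. The loss rates and state probabilities are invented purely for illustration; the point is how much the answer moves when you vary assumptions about rare states that no realistic data set can pin down.

```python
# Toy "through the cycle" expected loss for a loan portfolio.
# Loss rates and state probabilities are invented for illustration; the
# cycle frequencies echo the rough figures above (moderate downturn
# roughly every 7-10 years, severe every 25-30, crisis every 75 or so).

states = {
    #            (annual probability, annual loss rate in bp of exposures)
    "benign":    (0.800, 15),
    "moderate":  (0.120, 80),    # roughly one year in eight
    "severe":    (0.065, 250),   # roughly a 25-30 year event
    "crisis":    (0.015, 600),   # roughly a 75 year event
}

ttc_el = sum(p * loss for p, loss in states.values())
print(f"Through-the-cycle EL: {ttc_el:.0f}bp per annum")

# Sensitivity: halve the assumed crisis frequency (shifting the weight
# back to the benign state) and the answer moves noticeably, even though
# no realistic data set could distinguish between the two assumptions.
states_alt = dict(states, crisis=(0.0075, 600), benign=(0.8075, 15))
ttc_el_alt = sum(p * loss for p, loss in states_alt.values())
print(f"With a 1-in-133 year crisis instead: {ttc_el_alt:.0f}bp per annum")
```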

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new but I found reading it useful for reflecting on the practical implications. One of the items they called out is competition which is typically assumed by economists to be a wholly benign force. This is generally true but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk) thereby setting the system up to be prone to a self organised critical state. Risk based capital requirements can also contribute to self organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and note the distinguished company that has also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming that the uncertainty paradigm was embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

“In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))”
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts.
Haldane and Nelson note that these types of systems are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with “agent based modelling”. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling but a few things impede its wider use. One is that these models don’t offer the kind of precision that makes the DSGE and VaR models so attractive. The other is that they require a large investment of time to build and most practitioners are fully committed to just keeping the existing models going. Finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
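
For readers who want a feel for what a toy agent-based model looks like, the sketch below (Python) is a deliberately crude construction of my own rather than anything taken from Bookstaber or the Bank of England paper. Two camps of agents, fundamentalists who trade back towards a fixed value and chartists who extrapolate the recent trend, set the price through their combined excess demand, and agents drift between camps depending on whether trend-chasing has been paying off. Even something this simple tends to generate bursts of volatility and fatter-than-normal return tails.

```python
import numpy as np

rng = np.random.default_rng(42)

T = 2000
fundamental = 100.0
price = np.empty(T)
price[:2] = fundamental
chartist_share = 0.3                          # fraction of agents chasing trends

for t in range(2, T):
    trend = price[t - 1] - price[t - 2]
    fund_demand = fundamental - price[t - 1]  # fundamentalists: pull back to value
    chart_demand = 10.0 * trend               # chartists: extrapolate the trend
    excess = (1 - chartist_share) * fund_demand + chartist_share * chart_demand
    price[t] = price[t - 1] + 0.05 * excess + rng.normal(0.0, 0.5)

    # Herding: if trend-chasing paid off this period, chartists recruit
    # followers; if it did not, agents drift back towards fundamentals.
    paid_off = (price[t] - price[t - 1]) * trend > 0
    chartist_share = float(np.clip(chartist_share + (0.02 if paid_off else -0.02),
                                   0.05, 0.80))

returns = np.diff(np.log(price))
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print(f"Excess kurtosis of simulated returns: {excess_kurtosis:.1f}")
```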

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
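
A trivial numerical sketch (Python, invented balance sheet) of the contrast being drawn here: the risk-based ratio moves with every change in the model-driven risk weights, while the leverage ratio treats every dollar of exposure the same.

```python
# Invented balance sheet: exposures ($bn) and the risk weights a model
# might assign to them. All numbers are made up for illustration.
exposures = {                         # (exposure, risk weight)
    "residential mortgages": (400, 0.25),
    "corporate loans":       (150, 0.60),
    "sovereign bonds":       (100, 0.00),
    "other assets":          (50,  1.00),
}
cet1 = 30.0                           # $bn of capital

total_exposure = sum(e for e, _ in exposures.values())
rwa = sum(e * w for e, w in exposures.values())

print(f"Risk-weighted CET1 ratio: {cet1 / rwa:.1%}")
print(f"Leverage ratio:           {cet1 / total_exposure:.1%}")

# Halve the mortgage risk weight (a modelling change) and the risk-based
# ratio improves sharply while the leverage ratio does not move at all.
rwa_after = rwa - 400 * 0.25 / 2
print(f"Risk-weighted ratio after the model change: {cet1 / rwa_after:.1%}")
```
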
Structural separation is another simple approach to the problem of making the system more resilient
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over engineering
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio is under stress.
In theory the calculation should be simple and intuitive but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
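
In the meantime, a stripped-down sketch (Python) of the basic mechanics as I understand the Basel III standard gives a flavour: the buffer is divided into quartiles and the minimum share of earnings that must be conserved steps up as the CET1 ratio falls deeper into the buffer. Even this simplified version, which ignores the CCyB, D-SIB surcharges, the timing of distributions and the interaction with Tier 1 and Total Capital requirements, hints at why modelling the constraint under a stress scenario gets awkward.

```python
import math

# Stripped-down Basel III capital conservation mechanics (CET1 only).
# Assumes a 4.5% minimum plus a 2.5% conservation buffer and nothing
# else; adding the CCyB or a D-SIB surcharge expands the buffer and
# shifts every quartile boundary, which is where modelling it under a
# stress scenario starts to get fiddly.

CET1_MIN = 4.5
CCB = 2.5

def min_conservation_ratio(cet1_ratio: float) -> float:
    """Minimum share of earnings that must be conserved (not distributed)."""
    if cet1_ratio <= CET1_MIN:
        return 1.0               # at or below the minimum: conserve everything
    buffer_used = 1 - (cet1_ratio - CET1_MIN) / CCB
    if buffer_used <= 0:
        return 0.0               # buffer intact: no constraint
    # The four quartiles of the buffer map to 40/60/80/100% conservation.
    quartile = min(4, math.floor(buffer_used * 4) + 1)
    return {1: 0.4, 2: 0.6, 3: 0.8, 4: 1.0}[quartile]

for ratio in (7.2, 6.8, 6.0, 5.3, 4.8):
    share = min_conservation_ratio(ratio)
    print(f"CET1 {ratio:.1f}%: conserve at least {share:.0%} of earnings")
```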

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile” as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.