Modelling bank capital requirements – The Zone of Validity

Even casual students of bank capital will be familiar with the view that advanced modelling of capital requirements is a waste of time – offering no useful insights at all, and indeed dangerous to the extent that these “advanced” or “sophisticated” approaches create a false sense of safety that results in excessive leverage and/or credit growth.

The more technical critiques of modelling focus on the fact that a capital requirement, by definition, seeks to measure the unexpected. Mervyn King (“The End of Alchemy”), for example, argues that there is a core element of what he labels “radical uncertainty” (aka Knightian uncertainty) that cannot be modelled in the probabilistic sense that underpins the advanced approaches to capital risk measurement.

“… no amount of sophisticated statistical analysis is a match for the historical experience that “stuff happens”.  At the heart of modern macroeconomics is the same illusion that uncertainty can be confined to the mathematical manipulation of known probabilities.”

There are, I think, substantial elements of truth in this view. The high confidence level employed in the regulatory capital requirement was intended to bring a healthy margin of safety to the measure, but the idea that a model-derived answer made bank insolvency a 1 in 1000 year event was never very robust once you started to look at the detail.

To be fair, the architects of Basel II were well aware of the restrictive assumptions they had made (e.g. well-diversified risk, portfolio invariance), expecting that both banks and regulators would make suitable allowances for the extent to which real banks did not conform to the ideal model assumptions. In practice though, these caveats tended to get lost in the enthusiasm for a seemingly precise and robust number. Disclosure and market discipline also proved much less robust controls on bank leverage than seemed logical in the 1980s, when the Efficient Market Hypothesis reigned supreme.

There is however a “zone of validity” in which I believe that models do offer useful insight and guidance. I want to focus here on the models employed in the Internal Ratings Based (IRB) approach to credit risk; the dominant risk class for many banking systems.

I encountered the term “zone of validity” in Wilmott and Orrell’s book (The Money Formula) but I am sure the idea is not new; it is also fairly intuitive:

“The key then is to keep with simple models, but make sure that the model is capturing the key dynamics of the system, and only use it within its zone of validity. Models should be seen as imperfect patches, rather than as accurate representations of the complete system. Instead of attempting … a better, more complete “theory of everything”, the aim is to find models that are useful for a particular purpose, and know when they break down”.

“The Money Formula”, Wilmott and Orrell, Chapter 8

Applying the “zone of validity” filter to IRB models

The first thing to do is distinguish the different types of models employed in the IRB framework (Yes Virginia, there is not just one monolithic IRB model). One of the people I go to when I want to understand credit risk has proposed the following taxonomy of IRB models, which can be ranked in terms of robustness and the empirical evidence for their effectiveness.

  • Relative Risk (Rank Ordering) Models – that measure relative risk (aka rank ordering) at an obligor level – e.g. Probability of Default (PD) rating models, Exposure at Default (EAD) segmentation models, Loss Given Default (LGD) segmentation models.
  • Point in Time (Econometric) Models – that capture the relationship between external systemic (economic) drivers and point in time default rates or loss rates – e.g. point in time PD
  • Highly Extrapolated, Point in Time, Models – Essentially the same form of model as above but applied to highly stressed or rare scenarios – e.g. stress testing models targeting a 1 in 25 year or higher scenario consistent with a severe recession
  • VAR models – which attempt to describe the entirety of the credit loss distribution and correlation effects, usually with the goal of quantifying the size of low-probability tail loss events such as a 1 in 100 or 1 in 1000 year loss – e.g. capital models

Relative Risk models arguably have the widest zone of validity. We may not be able to predict precisely when individual exposures will default but models tend to do a pretty reasonable job of rank ordering the risk of default. We also have a pretty good handle on the kinds of factors that increase the severity of loss in the event of default, at least in relative terms.
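To make “rank ordering” concrete, the discriminatory power of a rating model is commonly summarised with an AUC or Gini (accuracy ratio) statistic. A minimal sketch of that calculation, using invented scores and outcomes (none of this data comes from the post):

```python
# Illustrative only: measuring a rating model's rank-ordering power with
# AUC / Gini (accuracy ratio), using made-up scores and default outcomes.
from itertools import product

def auc(scores_good, scores_bad):
    """Probability that a randomly chosen defaulter scores higher (riskier)
    than a randomly chosen non-defaulter, counting ties as half."""
    pairs = list(product(scores_good, scores_bad))
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0 for g, b in pairs)
    return wins / len(pairs)

# Higher score = higher modelled risk of default (invented values).
non_defaulters = [0.01, 0.02, 0.02, 0.05, 0.10]
defaulters     = [0.04, 0.15, 0.30]

a = auc(non_defaulters, defaulters)
gini = 2 * a - 1   # Gini coefficient / accuracy ratio
print(f"AUC = {a:.2f}, Gini = {gini:.2f}")
```

A Gini well above zero is what “a pretty reasonable job of rank ordering” looks like in practice, even when point-in-time default predictions for individual obligors remain unreliable.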

From there on, the zone of validity progressively shrinks along with the degree of precision that the models are capable of offering. That said, the models can still offer useful insights so long as we understand the limitations of what they are saying.

  • Point in time models will help anchor loan loss provisioning; like any model, there will be a degree of error, but the structured approach makes it much easier to deconstruct the overall loan loss estimate and figure out where you agree or disagree with the output.
  • Highly extrapolated models such as you might see in a stress testing model clearly expand the range of error but again they offer a way of peering under the hood and seeing what part of the output looks wrong;
  • VAR models arguably do operate outside their zone of validity (at a minimum, they are not a reliable measure of the 1 in 1000 year risk that the bank’s capital is meant to cover), so the answers they provide need to be used with that weakness in mind.
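The point-in-time idea in the list above can be sketched as a simple econometric fit: regress the logit of the observed portfolio default rate on a systemic driver and read predicted PDs off the fitted curve. A toy illustration with invented data (a real model would use more drivers and proper diagnostics):

```python
# Sketch of a point-in-time PD model: regress the logit of the observed
# portfolio default rate on a systemic driver (here, unemployment).
# Data and coefficients are invented for illustration only.
import math

unemployment = [4.0, 4.5, 5.0, 6.5, 8.0, 7.0, 5.5]            # % per year
default_rate = [0.010, 0.012, 0.015, 0.025, 0.045, 0.032, 0.018]

logit = [math.log(p / (1 - p)) for p in default_rate]

# Closed-form simple OLS: logit(DR) = a + b * unemployment
n = len(unemployment)
mx = sum(unemployment) / n
my = sum(logit) / n
b = sum((x - mx) * (y - my) for x, y in zip(unemployment, logit)) / \
    sum((x - mx) ** 2 for x in unemployment)
a = my - b * mx

def pit_pd(u):
    """Predicted point-in-time default rate for unemployment level u."""
    return 1 / (1 + math.exp(-(a + b * u)))

print(f"Predicted PD at 6% unemployment: {pit_pd(6.0):.3%}")
```

The same fitted relationship is what gets extrapolated in a stress test; the further the scenario sits from the data the model was fitted on, the wider the error band.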

The fixes and the potential unintended consequences

A very common response to the modelling problems discussed above is to apply more conservative values to the risk parameters (i.e. PD, LGD, EAD and Correlation) employed in the IRB capital calculation. This is relatively easy to do, feels intuitively right and gives a higher capital requirement, but this approach also has costs. The cost may be justified but should be recognised.

One of the issues with arbitrarily increasing risk estimates is that you start to distort the model outputs that do lie within their zone of validity. Modelling (estimating) Regulatory Expected Loss (REL), in particular, should be quite simple – just multiply PD by LGD by EAD and “voila”, we have a 1 year measure of what we expect the credit portfolio to lose at this point in the credit cycle. In principle, we can also be reasonably confident that our stressed loss estimate is useful provided we are honest about the quality of our credit portfolio and don’t get too ambitious on the level of severity. It seems to me that these applications are mostly within the zone of validity of the models we use to measure these things; and we hold capital to cover the risk that something unexpected happens.
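As a trivial check on that arithmetic, here is the PD × LGD × EAD multiplication for a made-up two-exposure portfolio (all parameter values are invented):

```python
# The expected-loss arithmetic the paragraph describes, for a toy
# two-exposure portfolio. All parameter values are invented.
portfolio = [
    # (PD, LGD, EAD in $m) – one-year point-in-time estimates
    (0.010, 0.40, 100.0),
    (0.025, 0.60,  50.0),
]

expected_loss = sum(pd * lgd * ead for pd, lgd, ead in portfolio)
print(f"One-year expected loss: ${expected_loss:.2f}m")   # $1.15m
```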

However, in practice, we start the REL estimate by using a “downturn” measure of LGD that reflects what we expect to lose during a part of the credit cycle that may or may not coincide with where we actually are at this point in time. Next we increase PD to be conservative; again, that choice of a conservative measure may or may not reflect where we actually are in the credit cycle. The same goes for EAD. Exactly what we are measuring starts to become unclear, and it is hard to see how deliberately introducing reduced clarity can ever be desirable.
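The distortion can be seen numerically: once downturn LGD and a conservative PD uplift are stacked on top of point-in-time inputs, the resulting “expected loss” is no longer anchored to any point of the cycle. All numbers below are invented for illustration:

```python
# Illustration of stacking conservative overlays (downturn LGD, uplifted PD)
# on a point-in-time expected-loss estimate. All values are invented.
pd_pit, lgd_pit, ead = 0.010, 0.35, 100.0     # where we think we are now
pd_cons, lgd_downturn = 0.015, 0.55           # overlaid "conservative" values

el_pit = pd_pit * lgd_pit * ead               # point-in-time expected loss
rel    = pd_cons * lgd_downturn * ead         # conservatively overlaid REL

print(f"Point-in-time EL:   ${el_pit:.2f}m")
print(f"Conservative REL:   ${rel:.2f}m ({rel / el_pit:.1f}x the PIT estimate)")
```

The overlaid figure is neither a through-the-cycle average nor a point-in-time estimate, which is exactly the loss of clarity described above.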

My purpose here is not to oppose the idea that bank capital needs to be a conservative measure (and I know that there are reasons deeply embedded in the IRB model’s history for why we use a downturn measure of LGD). My point is simply that the way you pursue that conservatism has consequences, and I have used Regulatory Expected Loss as an example. If the aim is simply to require a bank to hold more capital, then there are alternatives (increase the correlation assumptions or increase the capital buffer ratio requirements) that achieve that outcome without distorting the REL measure.
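One way to see why correlation is a cleaner lever: in the Basel IRB (Vasicek) capital formula, the asset correlation R appears only in the unexpected-loss term, so raising it lifts the capital requirement while leaving PD × LGD – and hence the REL measure – untouched. A sketch of the standard formula, with illustrative parameter values and the maturity adjustment ignored:

```python
# Basel IRB (Vasicek) unexpected-loss capital per unit of EAD.
# Parameter values are illustrative; maturity adjustment is ignored.
from statistics import NormalDist
from math import sqrt

N = NormalDist()   # standard normal distribution

def irb_capital(pd, lgd, rho, q=0.999):
    """Capital at confidence level q: stressed loss minus expected loss."""
    stressed_pd = N.cdf((N.inv_cdf(pd) + sqrt(rho) * N.inv_cdf(q)) / sqrt(1 - rho))
    return lgd * (stressed_pd - pd)

pd, lgd = 0.01, 0.40
for rho in (0.15, 0.25):
    print(f"R = {rho:.2f}: K = {irb_capital(pd, lgd, rho):.3%} of EAD")
```

Raising R from 0.15 to 0.25 materially increases K, yet the expected-loss inputs (and so any provisioning or REL calculation built on them) are unchanged – which is the mechanism behind the APRA mortgage example mentioned in the comments.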

Tell me what I am missing …

Tony


Author: From the Outside

After working in the Australian banking system for close to four decades, I am taking some time out to write and reflect on what I have learned. My primary area of expertise is bank capital management but this blog aims to offer a bank insider's outside perspective on banking, capital, economics, finance and risk.

10 thoughts on “Modelling bank capital requirements – The Zone of Validity”

  1. A good technical analysis. The simple mind might say that continuing to test different models and approaches will continue to refine thinking and understanding. We will never get to the perfect answer because there are too many variables. But we must keep pushing a conservative barrow

    1. This was a pretty technical post so I am gratified that you read it.

      Testing and refining different models will help for sure. My main point was that some of the efforts to adjust for models operating outside their zone of validity can degrade the performance of associated models that are within their zone of validity.

      I accept the common critique (I cited King but there are plenty of people saying the same thing) that the VAR application of IRB models cannot achieve the high degree of coverage for regulatory capital (i.e. capital sufficient to cover all but events exceeding 1 in 1000 year severity) that its technical design purports to achieve.

      However the “fixes” typically used to compensate (i.e. higher PD, LGD, EAD) can end up adversely impacting the application of IRB models to problems that do lie within their zone of validity (i.e. point in time estimates of expected loan losses or stressed scenarios up to say 1 in 25 year severity).

      That could be seen to be the price of being conservative but I believe there are better solutions to the problem such as increasing the correlation assumptions in the IRB formula (which is what APRA did to get to a 25% minimum average RW for residential mortgages) or just applying a higher target capital ratio (though preferably via higher buffers rather than via increased minimum capital ratios).
