Even casual students of bank capital will be familiar with the view that advanced modelling of capital requirements is a waste of time – offering no useful insight and indeed dangerous to the extent that these “advanced” or “sophisticated” approaches create a false sense of safety that results in excessive leverage and/or credit growth.
The more technical critiques of modelling focus on the fact that a capital requirement, by definition, seeks to measure the unexpected. Mervyn King (“The End of Alchemy”), for example, argues that there is a core element of what he labels “radical uncertainty” (aka Knightian uncertainty) that cannot be modelled in the probabilistic sense that underpins the advanced approaches to capital risk measurement.
“… no amount of sophisticated statistical analysis is a match for the historical experience that “stuff happens”. At the heart of modern macroeconomics is the same illusion that uncertainty can be confined to the mathematical manipulation of known probabilities.”
There are, I think, substantial elements of truth in this view. The high confidence level employed in the regulatory capital requirement was intended to bring a healthy margin of safety to the measure, but the idea that a model-derived answer made bank insolvency a 1-in-1000 year event was never very robust once you started to look at the detail.
To be fair, the architects of Basel II were well aware of the restrictive assumptions they had made (e.g. well diversified risk, portfolio invariance), expecting that both banks and regulators would make suitable allowances for the extent to which real banks did not conform to the ideal model assumptions. In practice though, these caveats tended to get lost in the enthusiasm for a seemingly precise and robust number. Disclosure and market discipline also proved much less robust controls on bank leverage than seemed logical in the 1980s when the Efficient Market Hypothesis reigned supreme.
There is, however, a “zone of validity” in which I believe that models do offer useful insight and guidance. I want to focus here on the models employed in the Internal Ratings Based (IRB) approach to credit risk, the dominant risk class for many banking systems.
I encountered the term “zone of validity” in Wilmott and Orrell’s book (The Money Formula) but I am sure the idea is not new; it is also fairly intuitive:
“The key then is to keep with simple models, but make sure that the model is capturing the key dynamics of the system, and only use it within its zone of validity. Models should be seen as imperfect patches, rather than as accurate representations of the complete system. Instead of attempting … a better, more complete “theory of everything”, the aim is to find models that are useful for a particular purpose, and know when they break down”.
“The Money Formula”, Wilmott and Orrell, Chapter 8
Applying the “zone of validity” filter to IRB models
The first thing to do is distinguish the different types of models employed in the IRB framework (Yes Virginia, there is not just one monolithic IRB model). One of the people I go to when I want to understand credit risk has proposed the following taxonomy of IRB models which can be ranked in terms of robustness and empirical evidence for their effectiveness.
- Relative Risk (Rank Ordering) Models – that measure relative risk (aka rank ordering) at an obligor level – e.g. Probability of Default (PD) rating models, Exposure at Default (EAD) segmentation models, Loss Given Default (LGD) segmentation models.
- Point in Time (Econometric) Models – that capture the relationship between external systemic (economic) drivers and point in time default rates or loss rates – e.g. point in time PD models
- Highly Extrapolated, Point in Time, Models – essentially the same form of model as above but applied to highly stressed or rare scenarios – e.g. stress testing models targeting a 1 in 25 year or rarer scenario consistent with a severe recession
- VAR Models – that attempt to describe the entirety of the credit loss distribution and correlation effects, usually with the goal of quantifying the size of low probability tail loss events such as a 1 in 100 or 1 in 1000 year loss – e.g. capital models
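To make the last category concrete, the capital models sit on top of the Vasicek single-factor model that underpins the Basel IRB formula. A minimal sketch (parameter values are illustrative only; the regulatory correlation function and maturity adjustment are omitted):

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_capital(pd, lgd, rho=0.15, q=0.999):
    """Unexpected-loss capital per unit of EAD under the Vasicek
    single-factor model: the conditional (stressed) expected loss at the
    q-th percentile of the systemic factor, less expected loss."""
    stressed_pd = N.cdf(
        (N.inv_cdf(pd) + rho ** 0.5 * N.inv_cdf(q)) / (1 - rho) ** 0.5
    )
    return lgd * (stressed_pd - pd)

# e.g. a 1% PD, 45% LGD exposure at the Basel 99.9% confidence level
k = irb_capital(0.01, 0.45)
```

The 99.9% confidence level in `q` is where the “1 in 1000 year” framing comes from – and, as discussed below, it is exactly the part of the calculation with the weakest claim to validity.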
Relative Risk models arguably have the widest zone of validity. We may not be able to predict precisely when individual exposures will default, but models tend to do a reasonable job of rank ordering the risk of default. We also have a pretty good handle on the kinds of factors that increase the severity of loss in the event of default, at least in relative terms.
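Rank-ordering power is also one of the few model properties we can test directly against outcomes, typically via discriminatory-power statistics such as AUC or the Gini coefficient (accuracy ratio). A minimal sketch with invented toy data:

```python
def rank_order_auc(scores, defaulted):
    """Probability that a randomly chosen defaulter was scored riskier
    than a randomly chosen survivor (ties count half)."""
    bad = [s for s, d in zip(scores, defaulted) if d]
    good = [s for s, d in zip(scores, defaulted) if not d]
    wins = sum(1.0 if b > g else 0.5 if b == g else 0.0
               for b in bad for g in good)
    return wins / (len(bad) * len(good))

# toy portfolio: higher score = riskier
scores    = [0.9, 0.3, 0.4, 0.2]
defaulted = [True, True, False, False]
auc = rank_order_auc(scores, defaulted)
gini = 2 * auc - 1  # the accuracy ratio often quoted for PD models
```

The point is that a claim like “this PD model rank orders risk well” is falsifiable with observed defaults in a way that a 1 in 1000 year capital estimate never is.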
From there on, the zone of validity progressively shrinks along with the degree of precision that the models are capable of offering. That said, the models can still offer useful insights so long as we understand the limitations of what they are saying.
- Point in time models will help anchor loan loss provisioning; like any model, there will be a degree of error, but the structured approach makes it much easier to deconstruct the overall loan loss estimate and figure out where you agree or disagree with the output.
- Highly extrapolated models such as you might see in a stress testing model clearly expand the range of error but again they offer a way of peering under the hood and seeing what part of the output looks wrong;
- Will the PD increase by a factor of 5 in a severe recession, or is a 10-fold increase closer to the mark? What happened in the last severe recession, and why?
- If house prices decline by 25%, what kind of LGD should I expect, and how does the increase in PD interact with the decline in prices? If I just decided to increase my PD, should I also increase LGD?
- Is the increase in loss being projected a function of some external shock impacting a sound credit portfolio, or does a decline in the quality of the portfolio contribute to the severity of the losses?
- There are no precise answers to these questions but you can get a lot closer to a coherent answer if you have a decent model.
- These kinds of models also offer a much better way of calibrating capital buffers than the traditional solution of copying what the other banks are holding (who are most likely copying you) or just doing what the prudential rules prescribe (now that they define minimum capital buffers).
- VAR models arguably do operate outside their zone of validity (at a minimum, they are not a reliable measure of whether the bank’s capital will be sufficient for a 1 in 1000 year event), so the answers they provide need to be used with that weakness in mind.
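The “factor of 5 or factor of 10” question above can be made concrete with even a very simple point-in-time model. The sketch below fits a two-parameter logit link between an economic driver and the portfolio default rate, then extrapolates it to a severe scenario; the calibration points and the choice of unemployment as the driver are entirely hypothetical:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# hypothetical calibration points: (unemployment rate, portfolio default rate)
base_u, base_dr = 0.05, 0.010   # normal conditions
mild_u, mild_dr = 0.08, 0.025   # mild downturn

# fit the two-parameter logit model exactly through both points
b = (logit(mild_dr) - logit(base_dr)) / (mild_u - base_u)
a = logit(base_dr) - b * base_u

def pit_pd(unemployment):
    return inv_logit(a + b * unemployment)

severe_pd = pit_pd(0.11)           # extrapolate to a severe recession
pd_multiple = severe_pd / base_dr  # the implied stress multiple on PD
```

Nobody should treat the extrapolated number as precise, but the structure forces the debate onto inspectable assumptions – which driver, which calibration points, which functional form – rather than an unanchored multiple.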
The fixes and the potential unintended consequences
A very common response to the modelling problems discussed above is to apply more conservative values to the risk parameters (i.e. PD, LGD, EAD and Correlation) employed in the IRB capital calculation. This is relatively easy to do, feels intuitively right and gives a higher capital requirement, but this approach also has costs. The cost may be justified but it should be recognised.
One of the issues with arbitrarily increasing risk estimates is that you start to distort the model outputs that do lie within their zone of validity. Modelling (estimating) Regulatory Expected Loss (REL), in particular, should be quite simple – just multiply PD by LGD by EAD and “voila”, we have a 1 year measure of what we expect the credit portfolio to lose at this point in the credit cycle. In principle, we can also be reasonably confident that our stressed loss estimate is useful provided we are honest about the quality of our credit portfolio and don’t get too ambitious on the level of severity. It seems to me that these applications are mostly within the zone of validity of the models we use to measure these things; and we hold capital to cover the risk that something unexpected happens.
However, in practice, we start the REL estimate by using a “downturn” measure of LGD that reflects what we expect to lose during a part of the credit cycle that may or may not coincide with where we actually are at this point in time. Next we increase PD to be conservative; again, that choice of a conservative measure may or may not reflect where we actually are in the credit cycle at this point in time. The same goes for EAD. Exactly what we are measuring starts to become unclear, and it is hard to see how deliberately introducing reduced clarity can ever be desirable.
My purpose here is not to oppose the idea of bank capital needing to be a conservative measure (and I know that there are reasons deeply embedded in the IRB model’s history for why we use a downturn measure of LGD). My point is simply that the way you pursue that conservatism has consequences, and I have used Regulatory Expected Loss as an example. If the aim is simply to require a bank to hold more capital then there are alternatives (increase the correlation assumptions or increase the capital buffer ratio requirements) that achieve that outcome without distorting the REL measure.
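To illustrate the correlation alternative: in the standard single-factor model behind the IRB formula, turning up the asset correlation increases the unexpected-loss capital charge while leaving expected loss (PD × LGD) completely untouched. A sketch with illustrative parameter values:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def vasicek_ul(pd, lgd, rho, q=0.999):
    """Unexpected-loss capital per unit of EAD under the Vasicek
    single-factor model (regulatory correlation function omitted)."""
    stressed_pd = N.cdf(
        (N.inv_cdf(pd) + rho ** 0.5 * N.inv_cdf(q)) / (1 - rho) ** 0.5
    )
    return lgd * (stressed_pd - pd)

pd, lgd = 0.01, 0.45
expected_loss = pd * lgd                        # unaffected by the correlation choice
k_base = vasicek_ul(pd, lgd, rho=0.15)          # base correlation assumption
k_conservative = vasicek_ul(pd, lgd, rho=0.24)  # more capital, same EL
```

The conservatism lands where it belongs – in the tail of the loss distribution – rather than contaminating the expected loss estimate that provisioning and pricing rely on.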
Tell me what I am missing …