Will Expected Loss loan provisioning reduce procyclicality?

I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading. Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interests die-hard bank capital and credit tragics, but I believe it has the potential to create some problems in the real world some way down the track.

Borio’s position is that:

  • Relative to the “incurred loss” approach to credit risk that preceded it, the new standard is likely to mitigate procyclicality to some extent;
  • But it will not be sufficient on its own to eliminate the risk of adverse procyclical impacts on the real economy;
  • So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of procyclicality) that will work in conjunction with, and complement, the operation of loan loss provisions in managing credit risk.

There are two ways to respond to Claudio Borio’s observations on this topic:

  1. One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate procyclicality;
  2. The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan loss provisions.

Will ECL reduce the risk of procyclicality?

It is true that, relative to the incurred loss model, the ECL approach will allow loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it does give them more freedom to increase provisions without the constraint of this being seen as a cynical device to “smooth” profits.

The problem I see in this assessment is that the real problems with the adequacy of loan provisioning occur when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.

This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of loans originated by the banks (or other risk management failures), because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from the domain of expected loss to the domain of uncertainty.
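To put some purely illustrative numbers on that point (the PD and LGD figures below are assumptions, not data from any actual bank), the sketch compares a standard expected loss provision (PD × LGD × EAD) with the losses that emerge when a downturn turns out to be more severe than the models anticipated:

```python
# Illustrative only: the PD/LGD inputs are assumed, not real bank data.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Expected credit loss for a portfolio: PD * LGD * EAD."""
    return pd * lgd * ead

portfolio = 100_000_000  # total exposure at default (EAD), assumed

# Provisions set using benign, through-the-cycle estimates (assumed)
provision = expected_loss(pd=0.01, lgd=0.30, ead=portfolio)

# A downturn that surprises on speed and severity: default rates and
# loss severity both jump well beyond what the models anticipated
stressed_loss = expected_loss(pd=0.05, lgd=0.45, ead=portfolio)

shortfall = stressed_loss - provision
print(f"Provision:     {provision:,.0f}")      # 300,000
print(f"Stressed loss: {stressed_loss:,.0f}")  # 2,250,000
print(f"Shortfall:     {shortfall:,.0f}")      # 1,950,000
```

On these assumed numbers the provision covers less than a seventh of the stressed loss; the shortfall has to be absorbed by capital, which is why the argument below shifts from provisioning to capital.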

The solution to procyclicality lies in capital, not expected loss

So the real issue is what to do about that. Borio argues that ECL helps, but you really need to address the problem via what he refers to as “capital filters” (what we might label countercyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:

  • let accounting standards do what they do, don’t try to make them solve prudential problems;
  • construct a capital adequacy solution that complements the accounting based measurement of capital and profits.

Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.

In the interim, the main takeaway for me is that ECL alone is not enough to address the problem of procyclicality and, more importantly, it is dangerous to think it can.

Tony

Mortgage risk weights – fact check

It is frequently asserted that the major Australian banks have been “gifted” a substantially lower mortgage risk weight than the smaller banks. To be precise, the assertion is that the major banks are only required to hold capital based on a 25% risk weight versus 39% for smaller banks.

If you are not familiar with the arcane detail of bank capital adequacy, then you could be forgiven for concluding that this differential (small banks apparently required to hold 56% more capital for the same risk) is outrageous and unfair. While the risk weights for big banks are certainly lower on average than those required of small banks, I believe the difference in capital requirements is not as large as the simple comparison of risk weights suggests.

Bank capital requirements involve more than risk weights

To understand why this comparison of risk weights is misleading, it will be helpful to start with a quick primer on bank capital requirements. The topic can be hugely complex but, reduced to its essence, there are three elements that drive the amount of capital a bank holds:

  1. The risk weights applied to its assets
  2. The target capital ratio applied to those risk weighted assets
  3. Any capital deductions required when calculating the capital ratio
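Reduced to a formula (a deliberate simplification that ignores plenty of real-world detail), the interaction of these three elements looks something like this:

```python
def required_capital(exposure: float, risk_weight: float,
                     target_ratio: float, deductions: float = 0.0) -> float:
    """Simplified capital requirement: risk-weighted assets times the
    target capital ratio, plus any capital deductions. Real capital
    rules have many more moving parts; this is a sketch."""
    rwa = exposure * risk_weight
    return rwa * target_ratio + deductions

# A $400k mortgage at a 25% risk weight and a 10.5% CET1 target ratio
print(round(required_capital(400_000, 0.25, 0.105), 2))  # 10500.0
```

The point of writing it out is that risk weights are only one of the three levers; comparing banks on risk weights alone implicitly holds the other two constant, which is exactly the mistake examined below.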

Problem 1 – Capital adequacy ratios differ

The comparison of capital requirements based on risk weights implicitly assumes that the regulator applies the same capital ratio requirement to all banks, but this is not the case. Big banks are targeting CET1 ratios based on the 10.5% Unquestionably Strong benchmark set by APRA, while there is a greater range of practice amongst the smaller banks. Bendigo and Suncorp appear to be targeting a CET1 ratio in the range of 8.5 to 9.0% while the smaller of the small banks appear to be targeting CET1 ratios that are materially higher (say 15% or more).

If we confine the comparison to the alleged disadvantage suffered by Bendigo and Suncorp, then the higher risk weights they are required to apply to residential mortgages are substantially offset by the lower CET1 ratios they target (the 56% difference in capital required shrinks to something in the order of 30% if you adjust for the difference in target CET1 ratios).
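The arithmetic behind those percentages can be checked directly, using the risk weights quoted above and the target CET1 ratios discussed above (the 8.75% small-bank figure is the mid-point of the assumed 8.5 to 9.0% range):

```python
# Capital per dollar of mortgage = risk weight * target CET1 ratio
big_bank   = 0.25 * 0.105   # 25% RW, 10.5% Unquestionably Strong target
small_bank = 0.39 * 0.0875  # 39% RW, mid-point of an 8.5-9.0% target

# Comparing risk weights alone: 0.39 / 0.25 - 1 = 56% more capital
rw_only_gap = 0.39 / 0.25 - 1
# Adjusting for target ratios shrinks the gap to roughly 30%
adjusted_gap = small_bank / big_bank - 1

print(f"{rw_only_gap:.0%}")   # 56%
print(f"{adjusted_gap:.0%}")  # 30%
```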

Broadening the comparison to the smaller banks gets even more interesting. At face value the much higher CET1 ratios they appear to target suggest that they are doubly penalised in the required capital comparison, but you have to ask why they are targeting such high CET1 ratios. One possible explanation is that their smaller, less diversified mortgage exposures are in fact more risky than the more diversified exposures maintained by their larger competitors.

Problem 2 – You have to include capital deductions

This is quite technical, I recognise, but in addition to the capital tied to the risk weight, the big banks are also required to hold capital for a capital deduction linked to the difference between their loan loss provisions and a regulatory capital value called “Regulatory Expected Loss”. This capital deduction increases the effective risk weight. The exact amount varies from bank to bank but I believe it increases the effective capital requirement by 10-12% (i.e. an effective RW closer to 28%). My understanding is that small banks are not required to make the same capital deduction.
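Folding that deduction into the earlier comparison (the 10-12% uplift is my estimate, not a published figure, so the numbers below are illustrative) narrows the gap further:

```python
# Effective big-bank risk weight once the Regulatory Expected Loss
# deduction is included; the 11% uplift used here is an estimate
effective_big_rw = 0.25 * 1.11   # roughly a 28% effective RW

# Gap vs a small bank at 39% RW and an assumed 8.75% CET1 target,
# with the big bank at a 10.5% target
gap = (0.39 * 0.0875) / (effective_big_rw * 0.105) - 1
print(f"{gap:.0%}")  # roughly 17%
```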

Problem 3 – The Standardised risk weights for residential mortgages seem set to change

A complete discussion of the RW difference should also take account of the fact that APRA has proposed to introduce lower RW categories for the smaller banks, such that their average RW may be lower than 39% in the future. I don’t know what the average RW for small banks would be under these new RWs, but that is a question you could put to the banks who use the 39% figure without acknowledging this fact.

Problem 4 – The risk of a mortgage depends on the portfolio not the individual loan

The statement that a loan carries the same risk irrespective of whether it is written by a big bank or a small bank sounds intuitively logical but is not correct. The risk of a loan can only be understood when it is considered as part of the portfolio the bank holds. Small banks will typically be less diversified than big banks.

Problem 5 – What about the capital required for Interest Rate Risk in the Banking Book (IRRBB)?

I don’t have sufficient data to assess how significant this is, but intuitively I would expect that the capital that the major banks are required to hold for IRRBB will further narrow the effective difference between the risk weights applied to residential mortgages.

Summing up

My aim in this post was not to defend the big banks but rather to try to contribute some of the knowledge I have acquired working in this area to what I think is an important but misunderstood question. In the interests of full disclosure, I have worked for one of the large Australian banks and may continue to do work for them in the future.

On a pure risk basis, it seems to me that the loan portfolio of a large bank will tend to be more diversified, and hence lower risk, than that of a smaller bank. It is not a “gift” for risk weights to reflect this.

There is a legitimate debate to be had regarding whether small banks should be given (gifted?) an advantage that helps them compete against the big banks. That debate however should start with a proper understanding of the facts about how much advantage the large banks really have and the extent to which their lower risk weights reflect lower risk.

If you disagree tell me what I am missing …

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck, but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck make to predicting or managing future outcomes, then this book has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad … [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
    • Use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
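The weighting rule in the process above can be sketched as a simple shrinkage toward the base rate. This is my own illustration with assumed inputs, not code from the book:

```python
def blended_estimate(specific: float, base_rate: float,
                     skill_weight: float) -> float:
    """Weight the case-specific evidence against the base rate.

    skill_weight near 1.0 -> skill dominates, trust the specific case;
    skill_weight near 0.0 -> luck dominates, lean on the base rate.
    """
    if not 0.0 <= skill_weight <= 1.0:
        raise ValueError("skill_weight must be between 0 and 1")
    return skill_weight * specific + (1 - skill_weight) * base_rate

# A fund returned 20% last year; the base rate for its peer group is 7%.
# If investing sits toward the luck end of the continuum (weight 0.2,
# an assumed figure), the blended forecast stays close to the base rate.
print(blended_estimate(specific=0.20, base_rate=0.07, skill_weight=0.2))
# roughly 0.096, i.e. 9.6%
```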

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) the “Three Question” approach, 2) simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. Ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both types of activity but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean: slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Ask whether there is evidence that expert prediction is useful. When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.
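The second question (the rate of reversion to the mean) can be made concrete by looking at the correlation between successive outcomes. The simulation below is my own sketch, with an assumed skill/luck mix, not an example from the book:

```python
import random

random.seed(42)

def simulate(skill_share: float, rounds: int = 10_000) -> float:
    """Generate paired outcomes (period 1, period 2) for many players
    whose results mix a persistent skill level with fresh luck each
    period, and return the correlation between the two periods."""
    xs, ys = [], []
    for _ in range(rounds):
        skill = random.gauss(0, 1)  # persistent across both periods
        xs.append(skill_share * skill + (1 - skill_share) * random.gauss(0, 1))
        ys.append(skill_share * skill + (1 - skill_share) * random.gauss(0, 1))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(round(simulate(0.9), 2))  # skill dominates -> outcomes persist
print(round(simulate(0.1), 2))  # mostly luck -> rapid reversion to the mean
```

High correlation between periods (slow reversion) places the activity toward the skill end of the continuum; correlation near zero (rapid reversion) places it toward luck.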

One of the challenges with this process is to figure out how large a sample size you need to determine if there is a reliable relationship between actions and outcome that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected, but also because a reliable base rate may simply not exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
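A quick simulation illustrates why the “average” is so unhelpful for heavy-tailed distributions of this kind. The Pareto parameters below are chosen purely for illustration and are not calibrated to any loan book:

```python
import random

random.seed(7)

def pareto_loss(alpha: float) -> float:
    """Draw a loss from a Pareto distribution; alpha near 1 gives the
    heavy tail typical of cumulative-advantage or crisis-driven outcomes."""
    return random.paretovariate(alpha)

# Thin-ish tail (alpha = 3): no single loss matters much.
# Heavy tail (alpha = 1.1): a handful of huge losses dominate the total,
# so the sample mean keeps jumping around no matter how much data you have.
for alpha in (3.0, 1.1):
    losses = [pareto_loss(alpha) for _ in range(100_000)]
    top_share = max(losses) / sum(losses)
    print(f"alpha={alpha}: largest single loss = "
          f"{top_share:.2%} of all losses combined")
```

In the heavy-tailed case the single largest observation accounts for a material share of the entire sample, which is exactly the pattern of benign years punctuated by a few catastrophic ones that makes a through-the-cycle “expected loss” so hard to pin down.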

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck-skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that would prove my base belief wrong?
  4. Think carefully about feedback and rewards; high quality feedback is key to high performance. Where skill is more important, deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; to maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different
  6. Develop aids to guide and improve your skill; on the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions; where your opponent is more skilful or just stronger, try to inject more luck into the interaction
  8. Make reversion to the mean work for you; understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but must also recognise how much we don’t know. We must recognise that the realm may change such that old rules no longer apply, and there are places where statistics don’t apply

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

The answer is more loan loss provisions, what was the question?

I had been intending to write a post on the potential time bomb for bank capital embedded in IFRS 9 but Adrian Docherty has saved me the trouble. He recently released an update on IFRS 9 and CECL titled Much Ado About Nothing or Après Moi, Le Déluge?

This post is fairly technical so feel free to stop here if you are not a bank capital nerd. However, if you happen to read someone saying that IFRS 9 solves one of the big problems encountered by banks during the GFC then be very sceptical. Adrian (and I) believe that is very far from the truth. For those not discouraged by the technical warning, please read on.

The short version of Adrian’s note is:

  • The one-off transition impact of the new standard is immaterial and the market has largely ignored it
  • Market apathy will persist until stressed provisions are observed
  • The dangers of ECL provisioning (procyclical volatility, complexity and subjectivity) have been confirmed by the authorities …
  • … but criticism of IFRS 9 is politically incorrect since the “correct” narrative is that earlier loan loss provisioning fulfils the G20 mandate to address the problem encountered during the GFC
  • Regulatory adaptation has been limited to transition rules, which are not a solution. We need a fundamentally revised Basel regime – “Basel V” – in which lifetime ECL provisions somehow offset regulatory capital requirements.

Adrian quotes at length from Bank of England (BoE) commentary on IFRS 9. He notes that their policy intention is that the loss absorbing capacity of the banking system is not impacted by the change in accounting standards but he takes issue with the way that they have chosen to implement this policy approach. He also calls out the problem with the BoE instruction that banks should assume “perfect foresight” in their stress test calculations.

Adrian also offers a very useful deconstruction of what the European Systemic Risk Board had to say in a report they published in July 2017. He has created a table in which he sets out what the report says in one column and what they mean in another (see page 8 of Adrian’s note).

This extract from Adrian’s note calls into question whether the solution developed is actually what the G20 asked for …

“In official documents, the authorities still cling to the assertion that ECL provisioning is good for financial stability “if soundly implemented” or “if properly applied”. They claim that the new standard “means that provisions for potential credit losses will be made in a timely way”. But what they want is contrarian, anti-cyclical ECL provisioning. This is simply not possible, in part because of human psychology but, more importantly, because the standard requires justifiable projections based on objective, consensual evidence.

Surely the authorities know they are wrong? Their arguments don’t stack up.

They hide behind repeated statements that the G20 instructed them to deliver ECL provisioning, whereas a re-read of the actual instructions clearly shows that a procyclical, subjective and complex regime was not what was asked for.

It just doesn’t add up.”

There is of course no going back at this point, so Adrian (rightly I think) argues that the solution lies in a change to banking regulation to make Basel compatible with ECL provisioning. I will quote Adrian at length here:

 “So the real target is to change banking regulation, to make Basel compatible with ECL provisioning. Doing this properly would constitute a genuine “Basel V”. Yes, the markets would still need to grapple with complex and misleading IFRS 9 numbers to assess performance. But if the solvency calculation could somehow adjust properly for ECL provisions, then solvency would be stronger and less volatile.

And, in an existential way, solvency is what really matters – it’s the sine qua non of a bank. Regulatory solvency drives the ability of a bank to grow the business and distribute capital. Accounting profit matters less than the generation of genuinely surplus solvency capital resources.

Basel V should remove or resolve the double count between lifetime ECL provisions and one-year unexpected loss (UL) capital resources. There are many different ways of doing this, for example:

A. Treat “excess provisions” (the difference between one-year ECL and lifetime ECL for Stage 2 loans) as CET1

B. Incorporate expected future margin as a positive asset, offsetting the impact of expected future credit losses

C. Reduce capital requirements by the amount of “excess provisions” (again, the difference between one-year ECL and lifetime ECL for Stage 2 loans) maybe with a floor at zero

D. Reduce minimum regulatory solvency ratios for banks with ECL provisioning (say, reducing the Basel 8% minimum capital ratio requirement to 4%)

All of these seem unpalatable at first sight! To get the right answer, there is a need to conduct a fundamental rethink. Sadly, there is no evidence that this process has started. The last time that there was good thinking on the nature of capital from Basel was some 17 years ago. It’s worth re-reading old papers to remind oneself of the interaction between expected loss, unexpected loss and income. The Basel capital construct needs to be rebuilt to take into account the drastically different meaning of the new, post-IFRS 9 accounting equity number.”
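To make the “double count” concrete, here is a sketch of option C using invented numbers – the ECL and capital figures are assumptions for illustration only, not drawn from Adrian’s note or any bank:

```python
# Hypothetical Stage 2 portfolio figures, invented for illustration
one_year_ecl = 40   # the loss horizon the capital framework already assumes
lifetime_ecl = 110  # what IFRS 9 requires for Stage 2 loans
capital_req  = 80   # one-year unexpected-loss capital requirement

# Option C: reduce the capital requirement by the "excess provisions"
# (lifetime ECL minus one-year ECL), floored at zero so the adjustment
# can never turn the requirement negative
excess = lifetime_ecl - one_year_ecl
adjusted_req = max(capital_req - excess, 0)

print(excess)        # 70
print(adjusted_req)  # 10
```

The offset recognises that losses already provisioned for under IFRS 9 do not also need to be covered by unexpected-loss capital, which is the double count the quote is pointing at.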

Hopefully this post will encourage you to read Adrian’s note and to recognise that IFRS 9 is not the cycle mitigating saviour of banking it is represented to be. The core problem is not so much with IFRS 9 itself (though its complexity and subjectivity are issues) but more that bank capital requirements are not constructed in a way that compensates for the inherent cyclicality of the banking industry. The ideas that Adrian has listed above are potentially part of the solution, as is revisiting the way that the Countercyclical Capital Buffer is intended to operate.

From the Outside

 

Worth Reading “The Money Formula” by Paul Wilmott and David Orrell.

The full title of this book, co-written by Paul Wilmott and David Orrell, is “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took over the Markets”. There are plenty of critiques of modelling and quantitative finance by outsiders throwing rocks, but Wilmott is a quant and brings an insider’s technical knowledge to the question of what these tools can do, can’t do and, perhaps most importantly, should not be used to do. Consequently, the book offers a more nuanced perspective on the strengths and limitations of quantitative finance than the let’s-scrap-the-whole-thing school of thought. I have made some more detailed notes which follow the structure of the book, but this post focuses on a couple of ideas I found especially interesting or useful.

I am not a quant so my comments should be read with that in mind, but the core idea I took away is that, much as quants would want it otherwise, markets are not governed by fundamental laws, deterministic or probabilistic, that allow risk to be measured with precision. These ideas work reasonably well within their “zone of validity” but a more complete answer (or model) has to recognise where the zones stop and uncertainty rules. Wilmott and Orrell argue market outcomes are better thought of as the “emergent result of complex transactions”. The role of money in these emergent results is especially important, as is the capacity of models themselves to materially reshape the risk of the markets they are attempting to measure.

The Role of Money

Some quotes I have drawn from Chapter 8 will let the authors speak for themselves on the role of money …

“Consider … the nature of money. Standard economic definitions of money concentrate on its roles as a “medium of exchange,” a “store of value,” and a “unit of account.” Economists such as Paul Samuelson have focused in particular on the first, defining money as “anything that serves as a commonly accepted medium of exchange.” … Money is therefore not something important in itself; it is only a kind of token. The overall picture is of the economy as a giant barter system, with money acting as an inert facilitator.” (emphasis added)

“However … money is far more interesting than that, and actually harbors its own kind of lively, dualistic properties. In particular, it merges two things, number and value, which have very different properties: number lives in the abstract, virtual world of mathematics, while valued objects live in the real world. But money seems to be an active part of the system. So ignoring it misses important relationships. The tension between these contradictory aspects is what gives money its powerful and paradoxical qualities.” (emphasis added)

“The real and the virtual become blurred, in physics or in finance. And just as Newtonian theories break down in physics, so our Newtonian approach to money breaks down in economics. In particular, one consequence is that we have tended to take debt less seriously than we should.” (emphasis added)

“Instead of facing up to the intrinsically uncertain nature of money and the economy, relaxing some of those tidy assumptions, accepting that markets have emergent properties that resist reduction to simple laws, and building a new and more realistic theory of economics, quants instead glommed on to the idea that, when a system is unpredictable, you can just switch to making probabilistic predictions.” (emphasis added)

“The efficient market hypothesis, for example, was based on the mechanical analogy that markets are stable and perturbed randomly by the actions of atomistic individuals. This led to probabilistic risk-analysis tools such as VaR. However, in reality, the “atoms” are not independent, but are closely linked … The result is the non-equilibrium behaviour … observed in real markets. Markets are unpredictable not because they are efficient, but because of a financial version of the uncertainty principle.” (emphasis added)

The Role of Models

Wilmott & Orrell devote a lot of attention to the ways in which models no longer just describe, but start to influence, the markets being modelled, mostly by encouraging people to take on more risk based in part on a false sense of security …

“Because of the bankers’ insistence on treating complex finance as a university end-of-term exam in probability theory, many of the risks in the system are hidden. And when risks are hidden, one is led into a false sense of security. More risk is taken so that when the inevitable happens, it is worse than it could have been. Eventually the probabilities break down, disastrous events become correlated, the cascade of dominoes is triggered, and we have systemic risk …. None of this would matter if the numbers were small … but the numbers are huge” (Chapter 10 – emphasis added)

They see High Frequency Trading as the area most likely to give rise to a future systemic crisis, but they also make a broader point about the tension between efficiency and resilience …

“With complex systems, there is usually a trade-off between efficiency and robustness …. Introducing friction into the system – for example by putting regulatory brakes on HFT – will slow the markets, but also make them more transparent and reliable. If we want a more robust and resilient system then we probably need to agree to forego some efficiency” (Chapter 10 – emphasis added)

The Laws of Finance

Wilmott and Orrell note the extent to which finance has attempted to identify laws which are analogous to the laws of physics and the ways in which these “laws” have proved to be more of a rough guide.

“… the “law of supply and demand” … states that the market for a particular product has a certain supply, which tends to increase as the price goes up (more suppliers enter the market). There is also a certain demand for the product, which increases as the price goes down.”

“… while the supply and demand picture might capture a general fuzzy principle, it is far from being a law. For one thing, there is no such thing as a stable “demand” that we can measure independently – there are only transactions.”

“Also, the desire for a product is not independent of supply, or other factors, so it isn’t possible to think of supply and demand as two separate lines. Part of the attraction of luxury goods – or for that matter more basic things, such as housing – is exactly that their supply is limited. And when their price goes up, they are often perceived as more desirable, not less.” (emphasis added)

This example is relevant for banking systems (such as Australia’s) where residential mortgage lending dominates the balance sheets of the banks, even more so given that public debate about the risk associated with housing often seems to be predicated on the economics 101 version of the laws of supply and demand.

The Power (and Danger) of Ideas

A recurring theme throughout the book is the ways in which economists and quants have borrowed ideas from physics without recognising the limitations of the analogies and assumptions they have relied on to do so. Wilmott and Orrell credit Sir Isaac Newton as one of the inspirations behind Adam Smith’s idea of the “Invisible Hand” co-ordinating the self-interested actions of individuals for the good of society. When the quantum revolution saw physics embrace a probabilistic approach, economists followed.

I don’t think Wilmott and Orrell make this point directly, but a recurring thought while reading the book was the power of ideas not just to interpret the underlying reality but also to shape the way the economy and society develop, not always for the better. Consider:

  • Economic laws that drive markets towards equilibrium as their natural state
  • The “invisible hand” operating in markets to reconcile individual self interest with optimal outcomes for society as a whole
  • The Efficient Market Hypothesis as an explanation for why markets are unpredictable

These ideas have widely influenced quantitative finance across a variety of domains, and they all contribute useful insights; the key is not to lose sight of their zone of validity.

“… Finance … took exactly the wrong lesson from the quantum revolution. It held on to its Newtonian, mechanistic, symmetric picture of an intrinsically stable economy guided to equilibrium by Adam Smith’s invisible hand. But it adopted the probabilistic mathematics of stochastic calculus.” (Chapter 8 – emphasis added)

Where to from here?

It should be obvious by now that the authors are arguing that risk and reward cannot be reduced to hard numbers in the way that physics has used similar principles and tools to generate practical insights into how the world works. Applying a bit of simple math in finance seems to open the door to getting some control over an unpredictable world and, even better, to pursuing optimisation strategies that allow the cognoscenti to fine-tune the balance between risk and reward. There is room for more complex math as well for those so inclined, but the book sides with the increasingly widely held view that simple math is enough to get you into trouble and further complexity is best avoided where possible.

Wilmott and Orrell highlight mathematical biology in general, and a book by Jim Murray on the topic, as a source of better ways to approach many of the more difficult modelling challenges in finance and economics. They start by listing a series of phenomena in biological models that seem to be useful analogues for what happens in financial markets. They concede that the models used in mathematical biology are almost all “toy” models. None of these models offers precise or deterministic outcomes, but all can be used to explain what is happening in nature and offer insights into solutions for problems like disease control, epidemics and conservation.

The approach they advocate seems to have a lot in common with the Agent Based Modelling approach that Andrew Haldane references (see his paper “Tails of the Unexpected“) and that is the focus of Richard Bookstaber’s book (“The End of Theory”).
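Wilmott and Orrell do not give code, but the flavour of a “toy” model with emergent behaviour can be conveyed in a few lines. The sketch below is entirely my own illustration, not from the book: fundamentalist agents pull the price towards a perceived fair value while trend-followers amplify recent moves, and the interaction generates oscillating booms and busts that no equilibrium model of the same market would predict.

```python
import random

random.seed(1)

def simulate(steps=500, fair_value=100.0):
    """Toy market: fundamentalists mean-revert, trend-followers chase momentum."""
    prices = [fair_value, fair_value]
    for _ in range(steps):
        p, prev = prices[-1], prices[-2]
        fundamental_demand = 0.05 * (fair_value - p)  # pull towards fair value
        trend_demand = 0.9 * (p - prev)               # amplify the last move
        noise = random.gauss(0, 0.5)                  # idiosyncratic order flow
        prices.append(p + fundamental_demand + trend_demand + noise)
    return prices

prices = simulate()
print(f"price range over the run: {min(prices):.1f} to {max(prices):.1f}")
```

Even this crude sketch reproduces the qualitative point the authors make: the price never settles at “equilibrium”, and the size of the swings depends on the balance between the stabilising and momentum-chasing agents rather than on any external shock.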

In their words …

“Embrace the fact that the models are toy, and learn to work within any limitations.”

“Focus more attention on measuring and managing resulting model risk, and less time on complicated new products.”

“… only by remaining both skeptical and agile can we learn. Keep your models simple, but remember they are just things you made up, and be ready to update them as new information comes in.”

I fear I have not done the book justice but I got a lot out of it and can recommend it highly.


Looking under the hood – The IRB formula

This post is irredeemably technical, so stop here if that is not your interest. If you want to understand the mechanics of the formula used to calculate credit risk weighted assets under the advanced Internal Ratings Based (IRB) approach, the BCBS published a paper in 2005 which explains:

  • the economic foundations of the approach; and
  • the underlying mathematical model and its input parameters.

While a lot has changed as a result of Basel III, the models underlying the calculation of Internal Ratings Based (IRB) capital requirements are still based on the core principles agreed under Basel II that are explained in this BCBS paper.

The notes in the linked page below mostly summarise the July 2005 paper with some emphasis (bolded text) and comments (in italics) that I have added. The paper is a bit technical but worth reading if you want to understand the original thinking behind the Basel II risk weights for credit risk.

I initially found the paper useful for revisiting the foundation assumptions of the IRB framework as background to considering the regulatory treatment of Expected Loss as banks transition to IFRS 9. The background on how the risk weight was initially intended to cover both Expected and Unexpected Loss, but was revised so that capital was only required to cover Unexpected Loss, is especially useful when considering the interaction of loan loss provisioning with capital requirements.
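The mechanics described in the BCBS paper can be sketched in a few lines of Python. The function below implements the Basel II supervisory formula for residential mortgage exposures (fixed 15% asset correlation, 99.9% confidence level, no maturity adjustment); note how Expected Loss (PD × LGD) is subtracted so that capital covers Unexpected Loss only, with RWA then derived as K × 12.5 × EAD. Treat it as an illustration of the formula’s shape rather than a regulator-grade implementation.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_mortgage_capital(pd_, lgd, ead, correlation=0.15, confidence=0.999):
    """Basel II IRB capital for a residential mortgage exposure (UL only)."""
    # Conditional ("stressed") PD at the 99.9th percentile of the single
    # systematic risk factor
    stressed_pd = N.cdf(
        (N.inv_cdf(pd_) + correlation ** 0.5 * N.inv_cdf(confidence))
        / (1 - correlation) ** 0.5
    )
    # Capital requirement K covers Unexpected Loss: stressed loss less
    # Expected Loss (PD x LGD), per the post-CP3 revision to Basel II
    k = lgd * stressed_pd - pd_ * lgd
    rwa = k * 12.5 * ead  # convert the capital charge back to RWA terms
    return k, rwa

k, rwa = irb_mortgage_capital(pd_=0.01, lgd=0.20, ead=100.0)
print(f"K = {k:.4f}, RWA = {rwa:.2f}, RW = {rwa / 100.0:.1%}")
```

For a 1% PD, 20% LGD exposure this lands at a risk weight close to 25%, which is a useful sanity check against the average RW benchmarks for residential mortgages discussed later in this blog.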

Reading the BCBS paper has also been useful for thinking through a range of related issues including:

  • The rationale for, and impact of, prudential conservatism in setting the risk parameters used in the IRB formula
  • The cyclicality of a risk sensitive capital requirement (and potential for pro cyclicality) and what might be done to mitigate the risk of pro-cyclical impacts on the economy

If you have read this far then my summary of the BCBS paper and my comments /observations can be found here (and thank you).

I am not a credit risk model expert, so the summary of the paper and my comments should be read with that in mind. I wrote the notes to help me think through some of the issues with bank capital adequacy; hopefully others will find them useful. If you see something wrong, or something you disagree with, let me know.

APRA’s proposed revisions to capital requirements for residential mortgages

… there is a lot to like in what APRA have proposed but also some issues that would benefit from further thought

Many readers will be aware that APRA released a Discussion Paper (DP) last week titled “Revisions to the capital framework for authorised deposit-taking institutions”. The paper sets out APRA’s proposed changes to ADI capital requirements under the Internal Ratings Based (IRB) and Standardised approaches to credit risk, Interest Rate Risk in the Banking Book (IRRBB) and operational risk. The focus of this post is the proposals impacting credit risk capital requirements for residential mortgage lending. The post presupposes that the reader is familiar with the detail of what APRA has proposed; for those who have not yet got around to reading the whole paper, I have added a short summary of the proposals below (see “APRA’s proposals – in more detail”).

My gut reaction is that there is a lot to like in what APRA has proposed, but there are also issues that deserve further consideration in order to address the risk of unintended consequences and to better deliver on the objectives of consistency, transparency and competitive neutrality.

Proposals which make sense to me:

  • The increased risk sensitivity of the proposed standardised RWs for residential mortgages is, I believe, a material enhancement of the capital adequacy framework
  • There are arguments (and indeed evidence) for why investor property loans can be as low risk as owner occupier loans (most of the time), but APRA’s desire to address the systemic tail risk of this form of lending is, I think, an understandable policy objective for a prudential regulator to pursue
  • Continuing to pursue higher IRB RW via changes to the correlation factor also looks to be a better approach than the 20% LGD floor currently applied (which is thankfully also up for revision)
  • Applying a higher correlation factor to low PD loans also makes intuitive sense, especially if your primary concern is the systemic risk associated with the residential mortgage lending that dominates the balance sheets of your banking system
  • In addition, the potential for the correlation adjustment to reduce the sensitivity of residential mortgage RWA to the economic cycle (and hence reduce the risk of pro-cyclical stress on capital ratios) is particularly welcome, though I believe there is much more to do on this general issue
  • The support for Lender’s Mortgage Insurance (LMI) is also welcome
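To see why a PD-dependent correlation can dampen the cyclicality of mortgage RWA, the sketch below re-uses the Basel II retail mortgage risk-weight formula and compares the current fixed 15% correlation against a hypothetical PD-dependent correlation. The functional form and parameter values are my own illustration, not APRA’s proposal: lifting the correlation applied to low-PD loans raises their through-the-cycle RW, so the proportionate jump in RW when PDs migrate upwards in a downturn is smaller.

```python
from math import exp
from statistics import NormalDist

N = NormalDist()

def rw(pd_, lgd, correlation):
    """IRB risk weight (percent of EAD) for a mortgage exposure."""
    stressed_pd = N.cdf(
        (N.inv_cdf(pd_) + correlation ** 0.5 * N.inv_cdf(0.999))
        / (1 - correlation) ** 0.5
    )
    return (lgd * stressed_pd - pd_ * lgd) * 12.5 * 100

def hypothetical_correlation(pd_):
    # Illustrative only: correlation starts higher for low-PD loans and
    # declines towards the current fixed 15% as PD rises
    weight = exp(-50 * pd_)
    return 0.22 * weight + 0.15 * (1 - weight)

for pd_ in (0.005, 0.02):  # "good times" PD vs downturn-migrated PD
    fixed = rw(pd_, 0.20, 0.15)
    varying = rw(pd_, 0.20, hypothetical_correlation(pd_))
    print(f"PD {pd_:.1%}: RW fixed R = {fixed:.0f}%, PD-dependent R = {varying:.0f}%")
```

With these illustrative numbers, a migration from 0.5% to 2% PD raises the RW by roughly 2.5 times under the fixed correlation, but only around 2 times under the PD-dependent version: more capital is carried in good times, and the relative shock to capital ratios in a downturn is smaller.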

Areas where I believe the proposed revised capital framework could be improved (or at least benefit from some more thought):

  • The discussion of relative standardised and IRB RW does not address the fact that IRB banks are required to hold additional capital to cover any shortfall between loan loss provisions and Regulatory Expected Loss (REL)
  • Residential mortgage portfolios subject to the standardised approach should be subject to a minimum average RW in the same way that IRB portfolios are currently constrained by the 25% floor
  • Applying a fixed scalar to Credit RWA can be problematic as the composition of the loan portfolio continues to evolve

The discussion of comparative IRB and Standardised RW you typically encounter seems to assume that the two approaches are identical in every respect bar the RW, but people working at the coal face know that the nominal RW advantage of the IRB banks has been partly offset by the higher exposure measure the RW are applied to. APRA’s proposed revisions appear to partly address this inconsistency by requiring banks using the Standardised Approach to apply a 100% Credit Conversion Factor (CCF) to undrawn loan limits. IRB banks are also required to take a Common Equity Tier 1 deduction for the shortfall between their loan loss provisions and REL; the proposed revisions do nothing to address this area of inconsistency, and in fact the Discussion Paper does not even acknowledge the issue.
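The exposure measure point is easy to illustrate. Under a credit conversion factor (CCF), only a portion of an undrawn limit is included in the exposure the RW is applied to; moving the Standardised Approach to a 100% CCF brings the whole limit into the measure. The loan size and the 50% starting CCF below are purely illustrative, not figures from the Discussion Paper.

```python
def exposure_at_default(drawn, undrawn, ccf):
    """Exposure measure: drawn balance plus a CCF share of the undrawn limit."""
    return drawn + ccf * undrawn

# A hypothetical mortgage with a $400k limit, $300k drawn ($100k redraw
# available). A partial CCF captures only part of the undrawn limit; a
# 100% CCF captures the full limit, narrowing the gap to the IRB measure.
print(exposure_at_default(300_000, 100_000, 0.5))   # partial CCF
print(exposure_at_default(300_000, 100_000, 1.0))   # 100% CCF
```

The same RW applied to the two exposure figures produces materially different RWA, which is why comparing headline RWs alone overstates the IRB banks’ advantage.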

Residential mortgage portfolios subject to the standardised approach should be subject to a minimum average RW in the same way that IRB portfolios are constrained. The majority of new residential mortgages are originated at relatively high LVR (most at 70% plus and a significant share at 80% plus), but the average LVR of a seasoned book will be much lower as principal is repaid (and even more so if you allow for the appreciation of property values). The introduction of a 20% RW bucket for standardised banks raises the question of whether these banks will have an advantage in targeting the refinancing of seasoned loans with low LVRs. The IRB banks would seek to retain these customers, but they will still be constrained by the 25% average RW mandated by the FSI while the standardised banks face no comparable constraint.
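A small numeric sketch (all exposures and risk weights below are hypothetical) shows how the 25% average floor could bind on an IRB bank competing for the same seasoned, low-LVR refinancings that a standardised bank could write in the proposed 20% bucket:

```python
def average_rw(exposures, risk_weights, floor=None):
    """Exposure-weighted average risk weight, optionally subject to a floor."""
    total = sum(exposures)
    avg = sum(e * w for e, w in zip(exposures, risk_weights)) / total
    return max(avg, floor) if floor is not None else avg

exposures = [100, 200, 300]          # $m, hypothetical seasoned low-LVR book
modelled_rws = [0.12, 0.15, 0.20]    # illustrative IRB modelled risk weights

irb = average_rw(exposures, modelled_rws, floor=0.25)       # FSI 25% floor binds
standardised = average_rw(exposures, [0.20, 0.20, 0.20])    # 20% bucket, no floor
print(f"IRB average RW {irb:.0%} vs standardised {standardised:.0%}")
```

The IRB bank’s modelled average of 17% is dragged up to 25% by the floor, while the standardised bank carries only 20%: the usual competitive ordering is inverted on exactly the loans both banks most want to win.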

This is unlikely to be an issue in the short term, but one of the enduring lessons I learned during my time “on the inside” is that banks (not just the big ones) are very good at identifying arbitrages and responding to incentives. It is widely recognised that housing loans have become the largest asset on Australian bank balance sheets (the Royal Commission issued a background paper that cited 42% of assets as at September 2017), but the share was significantly smaller when I started in banking. There is a collection of complex drivers at play here (a topic for another post), but the relatively low RW has not harmed the growth of this kind of lending. Consequently, it is dangerous to assume that the status quo will persist if incentives exist to drive a different outcome.

This competitive imbalance could be addressed quite simply if the standardised banks were also subject to a requirement that their average RW be no lower than 25% (or some alternative floor ratio that adjusted for the differences in exposure and REL noted above).

Another lesson learned “on the inside” is that fixed scalars look simple but often are not. They work fine when the portfolio of assets they are scaling is stable, but will gradually generate a different outcome from what was intended as the composition of the loan book evolves over time. I don’t have an easy solution to this problem but, if you must use them, it helps to recognise the potential for unintended consequences at the start.
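A back-of-the-envelope sketch makes the drift problem concrete (all numbers are hypothetical, chosen only to illustrate the arithmetic): a scalar on total Credit RWA calibrated to deliver a target uplift on mortgages when they are 40% of RWA no longer delivers that uplift once the mortgage share moves.

```python
def scalar_for_target(mortgage_share, target_uplift):
    """Total-RWA scalar producing the same absolute RWA add-on as a
    target uplift applied to the mortgage book alone."""
    return 1 + mortgage_share * target_uplift

# Calibrated when mortgages are 40% of total Credit RWA, targeting an
# add-on equivalent to +12.5% on the mortgage book (hypothetical figures)
scalar = scalar_for_target(0.40, 0.125)

# If the mortgage share later drifts to 60%, the fixed scalar delivers a
# smaller effective uplift per dollar of mortgage RWA than intended
implied_uplift = (scalar - 1) / 0.60
print(f"scalar {scalar:.3f}, implied mortgage uplift {implied_uplift:.1%}")
```

In this example the calibrated 1.05 scalar implies only about an 8% mortgage-equivalent uplift once mortgages reach 60% of RWA, well short of the 12.5% originally intended; the scalar itself has not changed, but the portfolio underneath it has.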

Read on below if you have not read the Discussion Paper or want more detail on the revisions APRA has proposed and how these changes are proposed to be reconciled with the FSI recommendation. This is my first real post so feedback would be much appreciated.

Above all, tell me what I am missing … 

Tony

Note: The original version of this post published 22 February 2018 stated that inconsistent measurement of the exposures at default between the standardised and IRB approaches  was not addressed by APRA’s proposed revisions. I believe now that the proposed application of a 100% CCF in the Standardised Approach would in fact address one of the areas of inconsistency. The treatment of Regulatory Expected Loss remains an issue however. The post was revised on 24 February to clarify these points.

APRA’s proposals – in more detail

Good quality loans fully secured by mortgages on occupied residential property (either rented or occupied by the borrower) have been assigned concessionary risk weights (RW) ever since risk weighted capital adequacy ratios were introduced under Basel I (1988). The most concessionary risk weight was initially set at 50% and reduced to 35% in the Basel II Standardised Approach (2006).

APRA currently applies the concessionary 35% RW to standard eligible mortgages with Loan to Valuation Ratios (LVR) of 80% or less (or up to 90% LVR if covered by Lender’s Mortgage Insurance), while the best case for a non-standard mortgage is a 50% RW. Progressively higher RWs (50/75/100%) are applied to higher risk residential mortgages.

Under the Standardised Approach, APRA proposes:

  • The classification of a Standard Eligible Mortgage will distinguish between lowest risk “Owner-occupied P&I” and a higher risk “Other residential mortgages” category which is intended to be conceptually similar to the “material dependence” concept employed by Basel III to distinguish loans where repayment depends materially on the cash flows generated by the property securing the loan
  • 6 RW bands for each of these two types of residential mortgage (compared to 5 bands currently)
  • Standard Eligible Mortgages with lower LVR loans to be assigned lower RW but these loans must also meet defined serviceability, marketability and valuation criteria to qualify for the concessionary RW
  • The higher RW applied to “Other residential mortgages” may take the form of a fixed risk-weight schedule (per the indicative RW in Table 3 of the Discussion Paper) but might also be implemented via a multiplier, applied to the RW for owner-occupied P&I loans, which might vary over time “… depending on prevailing prudential or financial stability objectives or concerns”
  • Relatively lower capital requirements will continue to apply where loans are covered by LMI, but APRA’s preferred approach is to apply a RW loading to uninsured loans with LVR in excess of 80% (i.e. the indicative RW in Table 3 assume that LMI covers the high LVR loans)
  • Non-Standard residential mortgages should no longer benefit from any RW concession and be assigned a flat 100% RW irrespective of LVR and LMI

While the IRB requirements impacting residential mortgages are largely unchanged under Basel III, APRA proposes the following changes to the Australian IRB Approach to reflect local requirements and conditions:

  • Increased capital requirements for investment and interest-only exposures; to be implemented via a higher correlation factor for these loans
  • The (currently fixed) correlation factor applied to residential mortgages to be amended to depend on probability of default (PD); reflecting empirical evidence that “… the default risk of lower PD exposures is more dependent on the economic cycle and can consequently increase at a relatively higher rate in a downturn”
  • A reduction in the minimum Loss Given Default (LGD) from 20% to 10% (subject to APRA approval of the LGD model); in order to facilitate “… better alignment of LGD estimates to key drivers of loss such as LVR and LMI”
  • Capital requirements for non-standard mortgages to use the standardised approach; increasing consistency between the IRB and standardised approaches

APRA’s proposals seek to strike a balance between risk sensitivity and simplicity but must also take account of the FSI recommendations that ADI capital levels be unquestionably strong while also narrowing the difference between standardised and IRB RWs for residential mortgages. APRA is undertaking a Quantitative Impact Study (QIS) to better understand the impact of its proposals but the DP flagged that APRA does not expect the changes to correlation factors to meet its objectives for increased capital for residential mortgage exposures.

APRA could simply ramp the correlation factor up further to generate the target IRB RW (which I assume continues to be 25%), but the DP notes that this would create undesirable inconsistencies with the correlation factors applied to other asset classes. Consequently, the DP indicates that the target increase in IRB RWA will likely be pursued via:

  • A fixed multiplier (scalar) applied to total Credit RWA (i.e. although Basel III removes the 1.06 Credit RWA scalar, APRA is considering retaining a scalar with a value yet to be determined); and
  • If necessary, by applying additional specific RWA scalars for residential (and commercial) property.

These scalars will be subject to consultation with the industry, and APRA has committed to review the 10.5% CET1 benchmark for unquestionably strong capital should the net result of the proposed revisions be an overall increase in RWAs relative to current methodologies.