The cleansing effect of banking crises …

… is the title of an interesting post on the Voxeu website summarising some research conducted by a group of European academics.

I have only skimmed the research at this point but the conclusion that realising losses and restructuring banks sets the economy up for stronger growth seems intuitively logical. It is also a timely area of research at a time when there seems to be widespread concern that many so-called “zombie” companies are only continuing to operate by virtue of extraordinary levels of liquidity and other financial support being injected into the financial system via central banks.

The post summarises their findings as follows …

Our findings show that restructuring of distressed banks during a crisis has positive long-term effects on productivity. We emphasise the importance of long-term productivity considerations in the design of optimal bank resolution mechanisms. Our results indicate that the challenge is the inherent trade-off between the short- and the long-term effects, which can complicate the political economy of the problem. For instance, in the short term, bailouts can look appealing to government officials, especially if the long-term costs bear less weight in their decision-making processes.

“The cleansing effect of banking crises”- Reint Gropp, Steven Ongena, Jörg Rocholl, Vahid Saadi; Voxeu – 7 August 200

When safety proves dangerous …

… is the title of a post on the Farnam Street blog that provides a useful reminder of the problem of “risk compensation”; i.e. the way in which measures designed to make us safer can be a perverse prompt for us to take more risk because we feel safer. I want to explore how these ideas apply to bank capital requirements but will first outline the basic ideas covered by Farnam Street.

we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation.

The classic example is car safety measures such as improved tyres, ABS braking systems, seat belts and crumple zones designed to protect the driver and passengers. These have helped reduce car fatality rates for the people inside the car but not necessarily reduced accident rates given that drivers tend to drive faster and more aggressively because they can. Pedestrians are also at greater risk.

Farnam Street suggests the following lessons for dealing with the problem of risk compensation:

  1. Safety measures are likely to be more effective if they are less visible
  2. Measures designed to promote prudent behaviour are likely to be more effective than measures which make risky behaviour safer
  3. Recognise that sometimes it is better to do nothing if the actions we take just lead to an offset in risk behaviour somewhere else
  4. If we do make changes then recognise that we may have to put in place other rules to ensure the offsetting risk compensating behaviour is controlled
  5. Finally (and a variation on #3), recognise that making people feel less safe can actually lead to safer behaviour.

If you are interested in this topic then I can also recommend Greg Ip’s book “Foolproof” which offers a good overview of the problem of risk compensation.

Applying these principles to bank capital requirements

The one area where I would take issue with the Farnam Street post is where it argues that bailouts and other protective mechanisms contributed to the scale of the 2008 financial crisis because they led banks to take greater risks. There is no question that the scale of the crisis was amplified by the risks that banks took but it is less obvious to me that the bailouts created this problem.

The bailouts were a response to the problem that banks were too big to fail but I can’t see how they created this problem; especially given that the build up of risk preceded the bailouts. Bailouts were a response to the fact that the conventional bankruptcy and restructure process employed to deal with the failure of non-financial firms simply did not work for financial firms.

It is often asserted that bankers took risks because they expected that they would be bailed out; i.e. that banks deliberately and consciously took risk on the basis that they would be bailed out. I can’t speak for banks as a whole but I have never witnessed that belief in the four decades that I worked in the Australian banking system. Never attribute to malice what can be equally explained by mistaken beliefs. I did see bankers placing excessive faith in the economic capital models that told them they could safely operate with reduced levels of capital. That illusion of knowledge and control is however a different problem altogether, largely to do with not properly understanding the distinction between risk and uncertainty (see here and here).

If I am right, that would suggest that making banks hold more capital might initially make them safer but might also lead to banks looking for ways to take more risk. This is a key reason why I think the answer to safer banks is not just making them hold higher and higher levels of common equity. More common equity is definitely a big part of the answer but one of the real innovations of Basel 3 was the development of new forms of loss absorbing capital that allow banks to be recapitalised by bail-in rather than bail-out.

If you want to go down the “common equity is the only solution” path then it will be important to ensure that Farnam Street Rule #4 above is respected; i.e. bank supervisors will need to ensure that banks do not simply end up taking risks in places that regulation or supervision does not cover. This is not a set and forget strategy based on the idea that increased “skin in the game” will automatically lead to better risk management.

Based on my experience, the risk of common equity ownership being diluted by the conversion of this “bail-in” capital is a far more effective constraint on risk taking than simply requiring banks to hold very large amounts of common equity. I think the Australian banking system has this balance about right. The Common Equity Tier 1 requirement is calibrated to a level intended to make banks “Unquestionably Strong”. Stress testing suggests that this level of capital is likely to be more than sufficient for well managed banks operating with sensible risk appetites but banks (the larger ones in particular) are also required to maintain a supplementary pool of capital that can be converted to common equity should it be required. The risk that this might be converted into a new pool of dilutive equity is a powerful incentive to not push the boundaries of risk appetite.

Tony – From the Outside

The power of ideas

This post was inspired by a paper by Dani Rodrik titled “When Ideas Trump Interests: Preferences, Worldviews, and Policy Innovations”. I have set out some more detailed notes here for the policy wonks but the paper is not light reading. The short version here attempts to highlight a couple of ideas I found especially interesting.

Rodrik starts by noting a tendency to interpret economic and social outcomes through the lens of “vested interests” while paying less attention to the ideas that underpin these outcomes. The vested interest approach looks for who benefits and how much power they have to explain outcomes. Rodrik does not dispute the relevance of understanding whose interests are in play when economic choices are being made but argues that “ideas” are an equally powerful motivating force.

Rodrik expresses his point this way:

“Ideas are strangely absent from modern models of political economy. In most prevailing theories of policy choice, the dominant role is instead played by “vested interests”—elites, lobbies, and rent-seeking groups which get their way at the expense of the general public. Economists, political scientists, and other social scientists appeal to the power of special interests to explain key puzzles in regulation, international trade, economic growth and development, and many other fields.”

“When Ideas Trump Interests: Preferences, Worldviews, and Policy Innovations” Dani Rodrik, Journal of Economic Perspectives—Volume 28, Number 1—Winter 2014—Pages 189–208

Applying this lens offers a broader and more nuanced perspective of how self and vested interest operates (emphasis added).

“… a focus on ideas provides us with a new perspective on vested interests too. As social constructivists like to put it, “interests are an idea.” Even if economic actors are driven purely by interests, they often have only a limited and preconceived idea of where their interests lie. This may be true in general, of course, but it is especially true in politics, where preferences are tightly linked to people’s sense of identity and new strategies can always be invented. What the economist typically treats as immutable self-interest is too often an artifact of ideas about who we are, how the world works, and what actions are available.”

Ibid

The importance of understanding how ideas drive public policy and personal choices resonates with me. One of the examples Rodrik used to illustrate his argument was bank regulation pre the GFC. Rodrik does not dispute that self and vested interests play a significant role but he explores the equally important role of ideas in shaping how interests are defined and pursued and the ways in which the models people use to understand the world shape their actions.

Applying this lens to bank regulation

Many observers … have argued that the policies that produced the crisis were the result of powerful banking and financial interests getting their way, which seems like a straightforward application of the theory of special interests.

But this begs the question of why banking vested interests were allowed to get their way. The “vested interest” answer is “regulatory capture”, but Rodrik offers an alternative explanation …

Still, without the wave of ideas “in the air” that favored financial liberalization and self-regulation and emphasized the impossibility (or undesirability) of government regulation, these vested interests would not have gotten nearly as much traction as they did. After all, powerful interests rarely get their way in a democracy by nakedly arguing for their own self-interest. Instead, they seek legitimacy for their arguments by saying these policies are in the public interest. The argument in favor of financial deregulation was not that it was good for Wall Street, but that it was good for Main Street.

Other observers have argued that the financial crisis was a result of excessive government intervention to support housing markets, especially for lower-income borrowers. These arguments were also grounded on certain ideas—about the social value of homeownership and the inattentiveness of the financial sector to those with lower incomes. Again, ideas apparently shaped politicians’ views of how the world works— and therefore their interest in acting in ways that precipitated the crisis.

I want to come back to this topic in another post. I have touched on the issue of self-interest in an earlier post looking at a book by Samuel Bowles titled “The Moral Economy”. Rodrik’s paper offers another perspective on the issue as does his book “Economics Rules: Why Economics Works, When It Fails, and How To Tell The Difference”. I have some notes on a couple of other books including “The Economists’ Hour” by Binyamin Appelbaum and “The Value of Everything” by Mariana Mazzucato. All of these have something interesting to say but I want to think some more before attempting to say something.

Let me conclude for the moment with John Maynard Keynes (emphasis added) …

“The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas. Not, indeed, immediately, but after a certain interval; for in the field of economic and political philosophy there are not many who are influenced by new theories after they are twenty-five or thirty years of age, so that the ideas which civil servants and politicians and even agitators apply to current events are not likely to be the newest. But, soon or late, it is ideas, not vested interests, which are dangerous for good or evil.”

The General Theory of Employment, Interest and Money, 1936

Tony (From the Outside)

What should count as bank capital?

My last post looked at an RBNZ consultation paper which addressed the question “How much capital is enough?”. The overall quantum of capital the RBNZ arrived at (16% of RWA plus) seemed reasonable but it was less obvious that relying almost entirely on CET1 was the right solution. That prompted me to revisit an earlier consultation paper in which the RBNZ set out its case for why it did not want contingent capital instruments to play a significant role in the capital structure of the banks it supervises. This post explores the arguments the RBNZ marshals to support its position as part of a broader exploration of the debate over what counts as capital.

The traditional approach to this question assumes that common equity is unquestionably the best form of capital from the perspective of loss absorption. Consequently, the extent to which alternative forms of funding count as capital is judged by common equity benchmarks; e.g. the extent to which the funding is a permanent commitment (i.e. no maturity date) and the returns paid to investors depend on the profitability or capacity of the company to pay (failure to pay is not an event of default).

There is no dispute that tangible common equity unquestionably absorbs loss and is the foundation of any company’s capital structure but I believe contingent convertible capital instruments do potentially add something useful to the bank capital management toolkit. I will attempt to make the case that a foundation of common equity, supplemented with some debt that converts to common equity if required, is better than a capital structure comprised solely or largely of common equity.

The essence of my argument is that there is a point in the capital structure where adding contingent convertible instruments enhances market discipline relative to just adding more common equity. The RBNZ discusses the potential value of these structures in their consultation paper:

“49. The theoretical literature on contingent debt explores how these instruments might reduce risk (i.e. lower the probability of insolvency) for an individual bank.

50. Two effects have been identified. Firstly, adding contingent debt to a bank’s balance sheet directly increases the loss absorbing potential of the bank, relative to issuing pure debt (but not relative to acquiring more common equity). This follows directly from the fact that removing the debt is an essential part of every contingent debt instrument. Secondly, depending on the terms, contingent capital may cause bank management to target a lower level of risk (incentive effects). In other words, in theory, a contingent debt instrument both reduces the probability a bank will incur losses and absorbs losses that do eventuate. Because of both these factors, contingent debt is expected, in theory, to reduce the risk of bank failure.  

51. Focusing on the second of these effects, management incentives, it matters whether, when the debt is written off, holders are compensated in the form of newly issued shares (“conversion”). If conversion is on such a scale as to threaten existing shareholders with a loss of control of the bank, it will be optimal for bank management to target a lower level of risk exposure for a given set of circumstances than would have been the case otherwise. For example, bank management may be less tolerant of asset volatility, and more likely to issue new equity to existing shareholders, when capital is low rather than risk triggering conversion.”

RBNZ Capital Review Paper 2: What should qualify as bank capital? Issues and Options (para 49 – 51) – Emphasis added

So the RBNZ does recognise the potential value of contingent debt instruments which convert into common equity but chose to downplay the benefits while placing much greater weight on a series of concerns it identified.

What’s in a name – The RBNZ Taxonomy of Capital

Before digging into the detail of the RBNZ concerns, it will be helpful to first clarify terminology. I am using the term Contingent Convertible Instruments for my preferred form of supplementary capital whereas much of the RBNZ paper focuses on what it refers to as “Contingent debt instruments”, which it defines in part as “debt that absorbs loss via write-off, which may or may not be followed by conversion”.

I had not picked this up on my first read of the RBNZ paper but came to realise we are talking slightly at cross purposes. The key words to note are “contingent” and “convertible”.

  • The “contingent” part of these instruments is non-negotiable if they are to be accepted as bank regulatory capital. The contingency is either a “non-viability event” (e.g. the supervisor determines that the bank must increase common equity to remain viable) or a CET1 ratio of 5.125% or less (what APRA terms a “loss absorption trigger” and the RBNZ refers to as a “going-concern trigger”)
  • “Conversion” however is optional. Loss absorption is non-negotiable for bank regulatory capital but it can be achieved in two ways. I have argued that loss absorption is best achieved by converting these capital instruments into common equity but prudential regulation is satisfied so long as the instruments are written-off.

I had taken it as given that these instruments would be convertible but the RBNZ places more emphasis on the possibility that conversion “may or may not” follow write-off. Small point but worth noting when evaluating the arguments.

Why does conversion matter?

The RBNZ understandably focuses on the write-off part of the loss absorption process whereas I focus on conversion because it is essential to preserving a loss hierarchy that allocates losses to common equity in the first instance. If we ignore for a moment the impact of bail-in (either by conversion or write-off), the order in which losses are applied to the various sources of funding employed by a bank follows this loss hierarchy:

  • Going Concern:
    • Common Equity Tier 1 (CET1)
    • Additional Tier 1 (AT1)
  • Insolvency – Liquidation or restructuring:
    • Tier 2 (T2)
    • Senior unsecured
    • Super senior
      • Covered bonds
      • Deposits
      • Insured deposits

Under bail-in, writing off a contingent capital instrument generates an increase in common equity that accrues to the existing ordinary shareholders thereby negating the traditional loss hierarchy that requires common equity to be exhausted before more senior instruments can be required to absorb loss.

Conversion is a far better way to effect loss absorption because ordinary shareholders still bear the brunt of any loss, albeit indirectly via the dilution of their shareholding (and associated share price losses). In theory, conversion shields the AT1 investors from loss absorption because they receive common equity equivalent in value to the book value of their claim on the issuer. In practice, it is less clear that the AT1 investors will be able to sell the shares received at the conversion price or better but they are still better off than if they had simply seen the value of their investment written-off. If you are interested in digging deeper, this post looks at how loss absorption works under bail-in.
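To make the difference concrete, here is a stylised sketch comparing the two forms of loss absorption. All of the numbers (the starting equity, the size of the AT1 tranche, the loss and the conversion price) are illustrative assumptions of mine rather than anything drawn from the RBNZ paper, but the mechanics show why write-off hands the benefit of the loss absorption to the existing shareholders while conversion makes them wear the loss through dilution.

```python
# Stylised comparison of AT1 loss absorption via write-off versus conversion.
# All figures are illustrative assumptions, not taken from the RBNZ paper.

shares_outstanding = 100.0     # existing ordinary shares (millions)
cet1_before = 1_000.0          # common equity before the loss event ($m)
at1_on_issue = 300.0           # contingent (AT1) instruments on issue ($m)
loss = 400.0                   # loss that triggers the instruments ($m)
conversion_price = 6.0         # contractual conversion price ($ per share)

# Case 1: write-off. The AT1 claim is extinguished and the resulting gain
# accrues to the existing shareholders, who still own 100% of the bank.
cet1_writeoff = cet1_before - loss + at1_on_issue
per_share_writeoff = cet1_writeoff / shares_outstanding

# Case 2: conversion. The AT1 holders receive new shares instead, so the same
# book equity is shared with them and the existing shareholders are diluted.
new_shares = at1_on_issue / conversion_price
total_shares = shares_outstanding + new_shares
cet1_conversion = cet1_before - loss + at1_on_issue
per_share_conversion = cet1_conversion / total_shares
old_holders_share = shares_outstanding / total_shares

print(f"Write-off : old shareholders keep 100% of ${cet1_writeoff:.0f}m "
      f"(${per_share_writeoff:.2f} per share)")
print(f"Conversion: old shareholders keep {old_holders_share:.0%} of ${cet1_conversion:.0f}m "
      f"(${per_share_conversion:.2f} per share)")
```

On these numbers the existing shareholders come out ahead under write-off (the book value per share actually rises) whereas under conversion they bear the loss through dilution, which is the point of the loss hierarchy argument above.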

The RBNZ does recognise this dynamic but still chose to reject these advantages so it is time to look at their concerns.

RBNZ concerns with contingent capital

The RBNZ identified six concerns to justify its in-principle decision to exclude the use of contingent capital instruments from the NZ capital adequacy framework.

  1. Possible under-estimation of the tax effects of contingent debt
  2. Reliance on parent entities as purchasers of AT1 contingent debt
  3. Not suitable for retail investors
  4. Banks structured as mutual societies cannot offer contingent debt that includes conversion into common equity
  5. Potential for regulatory arbitrage arising from the tension between tax and capital regulation
  6. Difficulties with exercising regulatory oversight of contingent debt

I don’t imagine the RBNZ is much concerned with my opinion but I don’t find the first three concerns to be compelling. I set out my reasons later in the post but will focus for the moment on three issues that I think do bear deeper consideration. You do not necessarily have to agree with the RBNZ assessment, or the weight they assign to them, but I believe these concerns must be addressed if we are to make the case for contingent debt.

Stronger arguments against contingent debt

1) Contingent debt gives the larger, listed banks a competitive advantage over mutual societies that are unable to issue ordinary shares

The RBNZ notes that all New Zealand banks are able to issue a version of contingent debt that qualifies as capital, but that some types of banks may have access to a broader – and cheaper – range of capital opportunities than others. The current definition of capital is thus in part responsible for a somewhat uneven playing field.

The primary concern seems to be banks structured as mutual societies, which are unable to issue ordinary shares. They cannot offer contingent debt that includes conversion and must rely on the relatively more expensive option of writing off the debt to effect loss absorption.

I think this is a reasonable concern but I also believe there may be ways to deal with it. One option is for these banks to issue Mutual Equity Interests as has been proposed in Australia. Another option (also based on an Australian proposal) is that the increased requirements for loss absorbing capital be confined to the banks which cannot credibly be allowed to fail or be resolved in any other way. I recognise that this option benefits from the existence of deposit insurance which NZ has thus far rejected.

I need to do a bit more research on this topic so I plan to revisit the way we deal with small banks, and mutuals in particular, in a future post.

2) Economic welfare losses due to regulatory arbitrage opportunities in the context of contingent debt

The tax treatment of payments to security holders is one of the basic tests for determining if the security is debt or equity but contingent debt instruments don’t fall neatly into either box. The conversion terms tied to PONV triggers make the instruments equity like when the issuer is under financial stress while the contractual nature of the payments to security holders makes them appear more debt like under normal operating conditions.

I can see a valid prudential concern but only to the extent the debt like features the tax authority relied on in making its determination regarding tax-deductibility somehow undermined the ability of the instrument to absorb loss when required.

There have been instances where securities have been mis-sold to unsophisticated investors (the Monte dei Paschi di Siena example cited by the RBNZ is a case in point) but it is less obvious that retail investment by itself is sufficient cause to rule out this form of capital.

The only real difference I see over conventional forms of debt is the line where their equity like features come into play. Conventional debt is only ever at risk of loss absorption in the event of bankruptcy where its seniority in the loss hierarchy will determine the extent to which the debt is repaid in full. These new forms of bank capital bring forward the point at which a bank balance sheet can be restructured to address the risk that the restructuring undermines confidence in the bank. The economics of the restructuring are analogous so long as losses are allocated by conversion rather than by write-off alone.

3) Difficulties experienced with the regulatory oversight of contingent debt

Possibly their core concern is that overseeing instrument compliance is a complex and resource-intensive process that the RBNZ believes does not fit well with its regulatory model that emphasises self-discipline and market discipline. The RBNZ highlights two concerns in particular.

  • Firstly the RBNZ has chosen to respond to the challenge of vetting these instruments by instituting a “non-objection process” that places the onus on issuers to confirm that their instruments comply with the capital adequacy requirements.
  • Secondly, notwithstanding the non-objection process, the added complexity of the instruments relative to common equity still requires a significant call on prudential resources.

This, I think, is the strongest objection the RBNZ raises against contingent debt. Contingent debt securities are clearly more complex than common equity so the RBNZ quite reasonably argues that they need to bring something extra to the table to justify the time, effort and risk associated with them. There is virtually no justification for them if they do, as the RBNZ asserts, work against the principles of self and market discipline that underpin its regulatory philosophy.

Three not so compelling reasons for restricting the use of contingent capital instruments (“in my humble opinion”)

1) Possible under-estimation of the tax effects of contingent debt

The first concern relates to the RBNZ requirement that banks must acknowledge any potential tax implications arising from contingent debt and reflect these potential “tax offsets” in the reported value of capital. Banks are required to obtain a binding ruling from the NZ tax authority (or voluntarily take a tax “haircut”). The RBNZ acknowledges that a binding ruling can provide comfort that tax is fully accounted for under prudential requirements, but quite reasonably argues that this will only be the case if the ruling that is sought is appropriately specified so as to capture all relevant circumstances.

The RBNZ’s specific concern seems to be what happens when no shares are issued in the event of the contingent loss absorption feature being triggered and hence no consideration is paid to investors in exchange for writing off their debt claim. The bank has made a gain that in principle would create a tax liability but it also seems reasonable to assume that the write-off could only occur if the bank was incurring material losses. It follows then that the contingent tax liability created by the write-off is highly likely to be set off against the tax losses such that there is no tax to pay.

I am not a tax expert so I may well be missing something but I can’t see a practical risk here. Even in the seemingly unlikely event that there is a tax payment, the money represents a windfall gain for the public purse. That said, I recognise that the reader must still accept my argument regarding the value of having the conversion option to consider it worth dealing with the added complexity.

2) A reliance on parent entities as purchasers of AT1 contingent debt

The RBNZ and I agree that one of the key planks in the case for accepting contingent debt as bank capital is the beneficial impact on bank risk taking generated by the risk of dilution, but the RBNZ argues this beneficial impact is diminished when the instrument is issued by a NZ subsidiary to its publicly listed parent.

I may be missing something here but the parent is exposed to dilution if the Non-Viability or Going Concern triggers are hit so I can’t see how that reduces the incentive to control risk unless the suggestion is that NZ management will somehow have the freedom to pursue risky business strategies with no input from their ultimate owners.

3) Retail investors have acquired contingent debt

The RBNZ cites some statistical evidence that suggests that, in contrast to the experience overseas, there appears to be limited uptake by wholesale investors of contingent debt issued by the big four banks. This prompts it to suggest that the terms being offered on instruments issued outside the parent group may not be sufficiently attractive to sophisticated investors. This concern seems to be predicated on the view that retail will always be the least sophisticated investors so banks will seek to take advantage of their relative lack of knowledge.

It is arguably true that retail investors will tend to be less sophisticated than wholesale investors but that should not in itself lead to the conclusion that any issue targeted at retail is a cynical attempt at exploitation; retail investors might legitimately value something differently to the way other investors do. The extent to which the structures issued by the Australian parents have thus far concentrated on retail, for example, might equally be explained by the payment of franking credits, which are more highly valued by the retail segment. Offshore institutions might also have been negative on the Australian market, thereby pushing Australian banks to focus their efforts in the domestic market.

I retain an open mind on this question and need to dig a bit deeper but I don’t see how the fact that retail investment dominates the demand for these structures at a point in time can be construed to be proof that they are being mis-sold.

The RBNZ’s answer ultimately lies in their regulatory philosophy

The reason that the RBNZ rejects the use of these forms of supplementary capital ultimately appears to lie in its regulatory philosophy, which is based on the following principles:

  • Self-discipline on the part of the financial institutions they supervise
  • Market discipline
  • Deliberate conservatism
  • Simplicity

The RBNZ also acknowledges the value of adopting BCBS consistent standards but this is not a guiding principle. It reserves the right to adapt them to local needs and, in particular, to be more conservative. It should also be noted that the RBNZ has quite deliberately rejected adopting deposit insurance on the grounds (as I understand it) that this encourages moral hazard. They take this a step further by forgoing any depositor preference in the loss hierarchy and by a unique policy of Open Bank Resolution (OBR) under which deposits are explicitly included in the liabilities which can be written down if needed to assist in the recapitalisation of an insolvent bank.

In theory, the RBNZ might have embraced contingent convertible instruments on the basis of their consistency with the principles of self and market discipline. The threat of dilution via conversion of the instrument into common equity creates powerful incentives not just for management to limit excessive risk taking but also for the investors to exert market discipline where they perceive that management is not exercising self-discipline.

In practice, the RBNZ seems to have discounted this benefit on the grounds that there is too much risk, either by design or by some operational failure, that these instruments might not convert to common equity. They also seem quite concerned with structures that eschew conversion (i.e. loss absorption effected by write-off alone) but they could have just excluded these instruments rather than imposing a blanket ban. Having largely discounted or disregarded the potential benefit, the principles of deliberate conservatism and simplicity dictate their proposed policy position: common equity rules.

Summing up

This post only scratches the surface of this topic. My key point is that contingent convertible capital instruments potentially add something useful to the bank capital management toolkit compared to relying entirely on common equity. The RBNZ acknowledges the potential upside but ultimately argues that the concerns it identifies outweigh the potential benefits. I have reviewed these six concerns in this post but need to do a bit more work to gain comfort that I am not missing something and that my belief in the value of bail-in based capital instruments is justified.

Tony

Mortgage risk weights – fact check

It is frequently asserted that the major Australian banks have been “gifted” a substantially lower mortgage risk weight than the smaller banks. To be precise, the assertion is that the major banks are only required to hold capital based on a 25% risk weight versus 39% for smaller banks.

If you are not familiar with the arcane detail of bank capital adequacy, then you could be forgiven for concluding that this differential (small banks apparently required to hold 56% more capital for the same risk) is outrageous and unfair. While the risk weights for big banks are certainly lower on average than those required of small banks, I believe the difference in capital requirements is not as large as the simple comparison of risk weights suggests.

Bank capital requirements involve more than risk weights

To understand why this comparison of risk weights is misleading, it will be helpful to start with a quick primer on bank capital requirements. The topic can be hugely complex but, reduced to its essence, there are three elements that drive the amount of capital a bank holds:

  1. The risk weights applied to its assets
  2. The target capital ratio applied to those risk weighted assets
  3. Any capital deductions required when calculating the capital ratio

Problem 1 – Capital adequacy ratios differ

The comparison of capital requirements based on risk weights implicitly assumes that the regulator applies the same capital ratio requirement to all banks, but this is not the case. Big banks are targeting CET1 ratios based on the 10.5% Unquestionably Strong benchmark set by APRA while there is a greater range of practice amongst the smaller banks. Bendigo and Suncorp appear to be targeting a CET1 ratio in the range of 8.5 to 9.0% while the smaller of the small banks appear to be targeting CET1 ratios materially higher (say 15% or more).

If we confine the comparison to the alleged disadvantage suffered by Bendigo and Suncorp, then the higher risk weights they are required to apply to residential mortgages is substantially offset by the lower CET1 target ratios that they target (the 56% difference in capital required shrinks to something in the order of 30% if you adjust for the difference in target CET1 ratios).
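For what it is worth, the back-of-the-envelope arithmetic behind that claim is sketched below. The risk weights and the 10.5% benchmark are the figures cited in this post; using the mid-point of the 8.5 to 9.0% range as the smaller banks’ target is my assumption.

```python
# Back-of-the-envelope CET1 required per $100 of residential mortgages.
# Risk weights and the 10.5% benchmark are the figures cited in this post;
# the 8.75% target for the smaller banks is the mid-point of the 8.5-9.0% range.

exposure = 100.0

irb_risk_weight = 0.25      # major (IRB) banks
std_risk_weight = 0.39      # standardised (smaller) banks

irb_target_cet1 = 0.105     # "Unquestionably Strong" benchmark
std_target_cet1 = 0.0875    # assumed mid-point for Bendigo / Suncorp

irb_capital = exposure * irb_risk_weight * irb_target_cet1   # ~2.63
std_capital = exposure * std_risk_weight * std_target_cet1   # ~3.41

print(f"Risk weights alone      : {std_risk_weight / irb_risk_weight - 1:.0%} more capital")
print(f"Adjusted for CET1 target: {std_capital / irb_capital - 1:.0%} more capital")
```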

Broadening the comparison to the smaller banks gets even more interesting. At face value, the much higher CET1 ratios they appear to target suggest that they are doubly penalised in the required capital comparison, but you have to ask why they are targeting such high CET1 ratios. One possible explanation is that their smaller, less diversified mortgage portfolios are in fact more risky than the more diversified exposures maintained by their larger competitors.

Problem 2 – You have to include capital deductions

I recognise this is quite technical but, in addition to the capital tied to the risk weight, the big banks are also required to hold capital for a capital deduction linked to the difference between their loan loss provisions and a regulatory capital value called “Regulatory Expected Loss”. This capital deduction increases the effective risk weight. The exact amount varies from bank to bank but I believe it increases the effective capital requirement by 10-12% (i.e. an effective RW closer to 28%). My understanding is that small banks are not required to make the same capital deduction.
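A small extension of the same arithmetic shows how the Regulatory Expected Loss deduction translates into a higher effective risk weight, again using the 10-12% uplift suggested above.

```python
# Rough effect of the Regulatory Expected Loss deduction on the majors'
# effective mortgage risk weight, using the 10-12% uplift suggested above.

irb_risk_weight = 0.25

for uplift in (0.10, 0.12):
    effective_rw = irb_risk_weight * (1 + uplift)
    print(f"EL deduction uplift of {uplift:.0%} -> effective risk weight {effective_rw:.1%}")
# prints 27.5% and 28.0%, i.e. "closer to 28%"
```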

Problem 3 – The Standardised risk weights for residential mortgages seem set to change

A complete discussion of the RW difference should also take account of the fact that APRA has proposed to introduce lower RW categories for the smaller banks such that their average RW may be lower than 39% in the future. I don’t know what the average RW for small banks would be under these new risk weights but that is a question you could put to the banks who use the 39% figure without acknowledging this fact.

Problem 4 – The risk of a mortgage depends on the portfolio not the individual loan

The statement that a loan is the same risk irrespective of whether it is written by a big bank or small bank sounds intuitively logical but is not correct. The risk of a loan can only be understood when it is considered as part of the portfolio the bank holds. Small banks will typically be less diversified than a big bank.

Problem 5 – What about the capital required for Interest Rate Risk in the Banking Book (IRRBB)?

I don’t have sufficient data to assess how significant this is, but intuitively I would expect that the capital that the major banks are required to hold for IRRBB will further narrow the effective difference between the risk weights applied to residential mortgages.

Summing up

My aim in this post was not to defend the big banks but rather to try to contribute some of the knowledge I have acquired working in this area to what I think is an important but misunderstood question. In the interests of full disclosure, I have worked for one of the large Australian banks and may continue to do work for them in the future.

On a pure risk basis, it seems to me that the loan portfolio of a large bank will tend to be more diversified, and hence lower risk, than that of a smaller bank. It is not a “gift” for risk weights to reflect this.

There is a legitimate debate to be had regarding whether small banks should be given (gifted?) an advantage that helps them compete against the big banks. That debate however should start with a proper understanding of the facts about how much advantage the large banks really have and the extent to which their lower risk weights reflect lower risk.

If you disagree tell me what I am missing …

Canada innovates in the capital buffer space

The Canadian prudential regulator (OSFI) has made an interesting contribution to the capital buffer space via its introduction of a Domestic Stability Buffer (DSB).

Key features of the Domestic Stability Buffer:

  • Applies only to Domestic Systemically Important Banks (D-SIBs) and is intended to cover a range of systemic vulnerabilities not captured by the Pillar 1 requirement
  • Vulnerabilities currently included in the buffer include (i) Canadian consumer indebtedness; (ii) asset imbalances in the Canadian market and (iii) Canadian institutional indebtedness
  • Replaces a previously undisclosed Pillar 2 loading associated with this class of risks (individual banks may still be required to hold a Pillar 2 buffer for idiosyncratic risks)
  • Initially set at 1.5% of Total RWA and will be in the range of 0 to 2.5%
  • Reviewed semi-annually (June and December), with the option to change more frequently in exceptional circumstances
  • Increases phased in while decreases take effect immediately

Implications for capital planning:

  • DSB supplements the Pillar 1 buffers (Capital Conservation Buffer, D-SIB surcharge and the Countercyclical Buffer)
  • Consequently, the DSB will not result in banks being subject to the automatic constraints on capital distributions that are applied by the Pillar 1 buffers
  • Banks will be required to disclose that the buffer has been breached and the OSFI will require a remediation plan to restore the buffer (a stylised sketch of how this interacts with the Pillar 1 buffers follows below)
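A stylised sketch of how the DSB sits on top of the Pillar 1 stack may help make that distinction concrete. The 4.5% minimum, 2.5% Capital Conservation Buffer and 1.0% D-SIB surcharge used below are generic Basel-style settings assumed for illustration rather than figures taken from the OSFI material; the 1.5% DSB is the initial calibration noted above.

```python
# Stylised CET1 stack for a Canadian D-SIB. The Pillar 1 settings below are
# generic Basel-style assumptions for illustration; the 1.5% DSB is the
# initial calibration noted in this post.

cet1_minimum   = 0.045
ccb            = 0.025      # Capital Conservation Buffer
dsib_surcharge = 0.010
ccyb           = 0.000      # assume the Countercyclical Buffer is not switched on
dsb            = 0.015      # Domestic Stability Buffer

pillar1_threshold = cet1_minimum + ccb + dsib_surcharge + ccyb   # 8.0%
dsb_threshold     = pillar1_threshold + dsb                      # 9.5%

def consequences(cet1_ratio: float) -> str:
    """What happens at a given CET1 ratio under this stylised stack."""
    if cet1_ratio < pillar1_threshold:
        return "inside the Pillar 1 buffers: automatic distribution constraints apply"
    if cet1_ratio < dsb_threshold:
        return "inside the DSB: disclosure and a remediation plan, but no automatic constraints"
    return "above all buffers"

for ratio in (0.100, 0.090, 0.075):
    print(f"CET1 {ratio:.1%}: {consequences(ratio)}")
```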

What is interesting:

  • The OSFI argues that translating the existing Pillar 2 requirement into an explicit buffer offers greater transparency which in turn “… will support banks’ ability to use this capital buffer in times of stress by increasing the market’s understanding of the purpose of the buffer and how it should be used”
  • I buy the OSFI rationale for why an explicit buffer with a clear narrative is a more usable capital tool than an undisclosed Pillar 2 requirement with the same underlying rationale
  • The OSFI retains a separate Countercyclical Buffer but the Domestic Stability Buffer seems similar (to me at least), though not identical, in its over-riding purpose to the approach that the Bank of England (BoE) has adopted for managing the Countercyclical Buffer.
  • A distinguishing feature of both the BoE and OSFI approaches is linking the buffer to a simple, coherent narrative that makes the buffer more usable by virtue of creating clear expectations of the conditions under which the buffer can be used.

Bottom line is that I see useful features in both the BoE and OSFI approaches to dealing with the inherent cyclicality of banking. I don’t see either of the proposals doing much to mitigate the cyclicality of banking but I do see them offering more potential for managing the consequences of that cyclicality. Both approaches seem to me to offer material improvements over the Countercyclical Buffer as originally conceived by the BCBS.

It will be interesting to see if APRA chooses to adapt elements of this counter cyclical approach to bank capital requirements.

If I am missing something, please let me know …

From the Outside

The financial cycle and macroeconomics: What have we learnt? BIS Working Paper

Claudio Borio at the BIS wrote an interesting paper exploring the “financial cycle”. This post seeks to summarise the key points of the paper and draw out some implications for bank stress testing (the original paper can be found here).  The paper was published in December 2012, so its discussion of the implications for macroeconomic modelling may be dated but I believe it continues to have some useful insights for the challenges banks face in dealing with adverse economic conditions and the boundary between risk and uncertainty.

Key observations Borio makes regarding the Financial Cycle

The concept of a “business cycle”, in the sense of there being a regular occurrence of peaks and troughs in business activity, is widely known but the concept of a “financial cycle” is a distinct variation on this theme that is possibly less well understood. Borio states that there is no consensus definition but he uses the term to

“denote self-reinforcing interactions between perceptions of value and risk, attitudes towards risk and financing constraints, which translate into booms followed by busts. These interactions can amplify economic fluctuations and possibly lead to serious financial distress and economic disruption”.

This definition is closely related to the concept of “procyclicality” in the financial system and should not be confused with a generic description of cycles in economic activity and asset prices. Borio does not use these words but I have seen the term “balance sheet recession” employed to describe much the same phenomenon as Borio’s financial cycle.

Borio identifies five features that describe the Financial Cycle

  1. It is best captured by the joint behaviour of credit and property prices – these variables tend to closely co-vary, especially at low frequencies, reflecting the importance of credit in the financing of construction and the purchase of property.
  2. It is much longer, and has a much larger amplitude, than the traditional business cycle – the business cycle involves frequencies from 1 to 8 years whereas the average length of the financial cycle is longer; Borio cites a cycle length of 16 years in a study of seven industrialised economies and I have seen other studies indicating a longer cycle (with more severe impacts).
  3. It is closely associated with systemic banking crises which tend to occur close to its peak.
  4. It permits the identification of the risks of future financial crises in real time and with a good lead – Borio states that the most promising leading indicators of financial crises are based on simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices, especially property prices, from historical norms.
  5. And it is highly dependent on the financial, monetary and real-economy policy regimes in place (e.g. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy).

Macro economic modelling

Borio also argues that the conventional models used to analyse the economy are deficient because they do not capture the dynamics of the financial cycle. These extracts capture the main points of his critique:

“The notion… of financial booms followed by busts, actually predates the much more common and influential one of the business cycle …. But for most of the postwar period it fell out of favour. It featured, more or less prominently, only in the accounts of economists outside the mainstream (eg, Minsky (1982) and Kindleberger (2000)). Indeed, financial factors in general progressively disappeared from macroeconomists’ radar screen. Finance came to be seen effectively as a veil – a factor that, as a first approximation, could be ignored when seeking to understand business fluctuations … And when included at all, it would at most enhance the persistence of the impact of economic shocks that buffet the economy, delaying slightly its natural return to the steady state …”

“Economists are now trying hard to incorporate financial factors into standard macroeconomic models. However, the prevailing, in fact almost exclusive, strategy is a conservative one. It is to graft additional so-called financial “frictions” on otherwise fully well behaved equilibrium macroeconomic models, built on real-business-cycle foundations and augmented with nominal rigidities. The approach is firmly anchored in the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) paradigm.”

“The purpose of this essay is to summarise what we think we have learnt about the financial cycle over the last ten years or so in order to identify the most promising way forward…. The main thesis is that …it is simply not possible to understand business fluctuations and their policy challenges without understanding the financial cycle”

There is an interesting discussion of the public policy responses (i.e. prudential, fiscal and monetary) associated with recognising the role of the financial cycle but I will focus on what implications this may have for bank management in general and stress testing in particular.

Insights and questions we can derive from the paper

The observation that the risk of future financial crises is signalled by simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices (especially property prices) from historical norms covers much the same ground as the Basel Committee’s Countercyclical Capital Buffer (CCyB) and is something banks would already monitor as part of the ICAAP. The interesting question the paper poses for me is the extent to which stress testing (and ICAAP) should focus on a “financial cycle” style disruption as opposed to a business cycle event. Even more interesting is the question of whether the higher severity of the financial cycle is simply an exogenous random variable or an endogenous factor that can be attributed to excessive credit growth.
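For readers who want to see what this type of indicator looks like mechanically, here is a minimal sketch of a credit-to-GDP gap calculated in the spirit of the BCBS guidance for the CCyB (a one-sided HP filter with a smoothing parameter of 400,000 applied to quarterly data). The input series is entirely made up for illustration and the thresholds mentioned in the final comment are the indicative BCBS ones, not a recommendation.

```python
# Minimal sketch of a credit-to-GDP gap in the spirit of the BCBS CCyB guide:
# a one-sided HP filter (lambda = 400,000) applied to a quarterly ratio series.
# The input series below is made up purely for illustration.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def credit_to_gdp_gap(ratio: np.ndarray, lamb: float = 400_000) -> np.ndarray:
    """One-sided gap: at each date use only the data available up to that date."""
    gap = np.full(len(ratio), np.nan)
    for t in range(8, len(ratio)):                 # allow a few quarters to initialise
        _, trend = hpfilter(ratio[: t + 1], lamb=lamb)
        gap[t] = ratio[t] - trend[-1]              # deviation from trend, in % points
    return gap

# Illustrative quarterly credit-to-GDP ratio (%): a steady trend plus a late credit boom.
quarters = 80
ratio = 120 + 0.25 * np.arange(quarters) + np.r_[np.zeros(60), np.linspace(0, 15, 20)]

gap = credit_to_gdp_gap(ratio)
print(f"Latest credit-to-GDP gap: {gap[-1]:.1f} percentage points")
# Under the indicative BCBS mapping, gaps above ~2 points start to signal buffer
# build-up, with the maximum add-on reached at around 10 points.
```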

I think this matters because it has implications for how banks calibrate their overall risk appetite. The severity of the downturns employed in stress testing has in my experience gradually increased over successive iterations. My recollection is that this has partly been a response to prudential stress tests which were more severe in some respects than might have been determined internally. In the absence of any objective absolute measure of what was severe, it probably made sense to turn up the dial on severity in places to align as far as possible the internal benchmark scenarios with prudential benchmarks such as the “Common Scenario” APRA employs.

At the risk of a gross over-simplification, I think that banks started the stress testing process looking at both moderate downturns (e.g. 7-10 year frequency and relatively short duration) and severe recessions (say a 25 year cycle though still relatively short duration downturn). Bank supervisors in contrast have tended to focus more on severe recession and financial cycle style severity scenarios with more extended durations. Banks have progressively shifted their attention to scenarios that are more closely aligned to the severe recession assumed by supervisors, in part because moderate recessions tend to be fairly manageable from a capital management perspective.

Why does the distinction between the business cycle and the financial cycle matter?

Business cycle fluctuations (in stress testing terms a “moderate recession”) are arguably an inherent feature of the economy that occur largely independently of the business strategy and risk appetite choices that banks make. However, Borio’s analysis suggests that the decisions that banks make (in particular the rate of growth in credit relative to growth in GDP and the extent to which the extension of bank credit contributes to inflated asset values) do contribute to the risk (i.e. probability, severity and duration) of a severe financial cycle style recession. 

Borio’s analysis also offers a way of thinking about the nature of the recovery from a recession. A moderate business cycle style recession is typically assumed to be short with a relatively quick recovery whereas financial cycle style recessions typically persist for some time. The more drawn out recovery from a financial cycle style recession can be explained by the need for borrowers to deleverage and repair their balance sheets as part of the process of addressing the structural imbalances that caused the downturn.

If the observations above are true, then they suggest a few things to consider:

  • should banks explore a more dynamic approach to risk appetite limits that incorporates the metrics identified by Borio (and also used in the calibration of the CCyB) so that the level of risk they are willing to take adjusts for where they believe they are in the cycle (and which kind of cycle they are in)?
  • how should banks think about these more severe financial cycle losses? Their measure of Expected Loss should clearly incorporate the losses expected from business cycle style moderate recessions occurring once every 7-10 years but it is less clear that the kinds of more severe and drawn out losses expected under a Severe Recession or Financial Cycle downturn should be part of Expected Loss (a stylised illustration of the difference follows below).
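On the second question, the stylised calculation below illustrates how much the answer matters. The cycle frequencies echo the ones discussed above but the loss rates are purely illustrative assumptions of mine; the point is simply that including or excluding the financial cycle state materially changes the through-the-cycle measure of Expected Loss.

```python
# Stylised framing of the Expected Loss question. The annual state probabilities
# and portfolio loss rates are illustrative assumptions only.

states = {
    "benign year":                (0.80, 0.002),
    "moderate (business cycle)":  (0.15, 0.010),
    "severe (financial cycle)":   (0.05, 0.040),
}

el_all_states = sum(p * lr for p, lr in states.values())
el_ex_severe = sum(p * lr for name, (p, lr) in states.items()
                   if name != "severe (financial cycle)")

print(f"EL including financial cycle losses  : {el_all_states:.2%} of the portfolio p.a.")
print(f"EL treating them as tail/capital risk: {el_ex_severe:.2%} of the portfolio p.a.")
```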

A more dynamic approach to risk appetite gets us into some interesting game theory puzzles because a decision by one bank to pull back on risk appetite potentially allows competitors to benefit by writing more business, and potentially to benefit doubly to the extent that the decision to pull back makes it safer for competitors to write that business without fear of a severe recession (in technical economist speak we have a “collective action” problem). This was similar to the problem APRA faced when it decided to impose “speed limits” on certain types of lending in 2017. The Royal Commission was not especially sympathetic to the strategic bind banks face but I suspect that APRA understands the problem.

How do shareholders think about these business and financial cycle losses? Some investors will adopt a “risk on-risk off” approach in which they attempt to predict the downturn and trade in and out based on that view, while other “buy and hold” investors (especially retail) may be unable or unwilling to adopt a trading approach.

The dependence of the financial cycle on the fiscal and monetary policy regimes in place and changes in the real economy also has potential implications for how banks think about the risk of adverse scenarios playing out. Many of the factors that Borio argues have contributed to the financial cycle (i.e. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy) are reversing (regulation of banks is much more restrictive, monetary policy appears to have recognised the limitations of a narrow inflation target focus and the pace of globalisation appears to be slowing in response to a growing concern that its benefits are not shared equitably). I am not sure exactly what these changes mean other than to recognise that they should in principle have some impact. At a minimum it seems that the pace of credit expansion might be slower in the coming decades than it has been in the past 30 years.

All in all, I find myself regularly revisiting this paper, referring to it or employing the distinction between the business and financial cycle. I would recommend it to anyone interested in bank capital management. 

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.

“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”

Haldane and Nelson highlight a semantic shift in the 1870s when the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe.

“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”

“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”

Notwithstanding its widespread use today, in Haldane and Nelson’s account, economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised and Haldane cites the rise of probability-based quantum physics over old school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”

“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”

“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken.”
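To make the point concrete, here is a minimal sketch (mine, not the paper’s) contrasting normally distributed data with a heavy-tailed Pareto sample. The sample mean and variance of the former settle down quickly; the sample variance of the latter never stabilises because the underlying variance is unbounded. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Normally distributed outcomes: mean and variance settle down quickly.
normal = rng.normal(loc=0.0, scale=1.0, size=n)

# Pareto-distributed outcomes with tail index alpha = 1.5: the variance is
# infinite, so the sample variance never converges.
alpha = 1.5
pareto = rng.pareto(alpha, size=n) + 1.0

for label, x in [("normal", normal), ("pareto (alpha=1.5)", pareto)]:
    # Running estimates after 1k, 10k and 200k observations
    for m in (1_000, 10_000, n):
        print(f"{label:20s} n={m:7d} mean={x[:m].mean():8.2f} var={x[:m].var():12.2f}")
```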
This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does, however, raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle, but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of cycle downturns are hard to pin down with any precision. Historical evidence gives a sense of the problem: moderate downturns, say, every 7-10 years, more severe recessions every 25-30 years and perhaps a 75-year cycle for financial crises. The data is obviously sparse, however, so it does not allow the kind of precision that is part and parcel of normally distributed events.
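A toy illustration of the estimation problem, using entirely made-up loss rates and cycle frequencies, shows how different a “through the cycle” average can look from the benign-period average, and how few downturn observations it rests on.

```python
import numpy as np

years = 75

# Illustrative (made-up) annual loss rates for a loan portfolio:
# benign years ~0.3%, moderate downturns ~1.5%, severe recessions ~4%.
losses = np.full(years, 0.003)
losses[::8] = 0.015          # a moderate downturn roughly every 8 years
losses[[20, 50, 74]] = 0.04  # a handful of severe recessions

benign_avg = losses[losses <= 0.003].mean()
through_the_cycle_avg = losses.mean()

print(f"benign-period average loss : {benign_avg:.2%}")
print(f"through-the-cycle average  : {through_the_cycle_avg:.2%}")
# With only a few downturn observations in 75 years, the through-the-cycle
# estimate is dominated by a handful of data points and is highly imprecise.
```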

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new, but I found reading it useful for reflecting on the practical implications. One of the items they call out is competition, which economists typically assume to be a wholly benign force. This is generally true, but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains, but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk), thereby setting the system up to be prone to a self-organised critical state. Risk based capital requirements can also contribute to self-organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.
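As an illustration of how one of the listed mechanisms generates fat tails, the sketch below simulates simple preferential attachment (“the rich get richer”). It is my own toy, not taken from the paper, and the scale and parameters are arbitrary; the point is that the resulting distribution of connections is highly skewed in a way a normal distribution would treat as essentially impossible.

```python
import random
from collections import Counter

random.seed(1)

# Preferential attachment: each new node links to an existing node with
# probability proportional to that node's current degree.
edges = [(0, 1)]                 # start with a single link
attachment_pool = [0, 1]         # node i appears once per link it has

for new_node in range(2, 20_000):
    target = random.choice(attachment_pool)   # degree-proportional choice
    edges.append((new_node, target))
    attachment_pool.extend([new_node, target])

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# A few nodes end up with very large degree -- a heavy-tailed outcome.
print("largest degrees:", sorted(degree.values(), reverse=True)[:5])
print("median degree  :", sorted(degree.values())[len(degree) // 2])
```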

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and note the distinguished company of economists who have also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming the uncertainty paradigm were embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these three:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents.
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts.
Haldane and Nelson note that these types of models are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with “agent based modelling”. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models, and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling, but a few things impede its wider use. One is that the models don’t offer the kinds of precision that make the DSGE and VaR models so attractive. The other is that they require a large investment of time to build, and most practitioners are fully committed just to keeping the existing models going. Finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
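For readers who have not seen an agent-based model, the toy sketch below is loosely in the spirit of Kirman’s herding model (cited in the quote above) but is entirely my own simplification, not taken from any of the papers mentioned. Simple interacting agents generate abrupt shifts between states rather than settling on a single stable equilibrium, which is the qualitative behaviour these models are used to capture.

```python
import random

random.seed(7)

# Toy herding model: N agents hold one of two views ("optimist"/"pessimist").
# Each step one agent either switches spontaneously (prob epsilon) or adopts
# the view of another randomly met agent (prob delta, if their views differ).
N = 50
epsilon = 0.0005   # spontaneous switching
delta = 0.2        # probability of being converted on meeting

optimists = N // 2
shares = []

for _ in range(100_000):
    agent_is_optimist = random.random() < optimists / N
    other_is_optimist = random.random() < optimists / N
    if random.random() < epsilon:
        optimists += -1 if agent_is_optimist else 1
    elif random.random() < delta and agent_is_optimist != other_is_optimist:
        optimists += -1 if agent_is_optimist else 1
    shares.append(optimists / N)

# Rather than settling around a stable mean, the share of optimists tends to
# wander, spending long spells near either extreme and switching abruptly.
print("min/mean/max share of optimists:",
      round(min(shares), 2), round(sum(shares) / len(shares), 2), round(max(shares), 2))
```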

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
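A small simulation helps show why the simple rule can win. In the stylised example below (my own illustration, not drawn from the paper) the truly minimum-variance portfolio is exactly the equally-weighted one, so any deviation produced by “optimising” on estimated parameters can only hurt out of sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stylised "true" world: 10 assets with identical volatility and identical
# pairwise correlation, so the truly minimum-variance portfolio is simply 1/N.
n_assets, corr = 10, 0.3
true_cov = 0.04 * (corr * np.ones((n_assets, n_assets)) + (1 - corr) * np.eye(n_assets))

# In practice the covariance matrix must be estimated from limited data.
sample = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=60)
est_cov = np.cov(sample, rowvar=False)

# "Optimised" minimum-variance weights built on the noisy estimate...
w_opt = np.linalg.solve(est_cov, np.ones(n_assets))
w_opt /= w_opt.sum()
# ...versus the simple equally-weighted rule.
w_eq = np.full(n_assets, 1 / n_assets)

for name, w in [("optimised", w_opt), ("equal weight", w_eq)]:
    print(f"{name:12s} portfolio volatility under the TRUE covariance: "
          f"{np.sqrt(w @ true_cov @ w):.2%}")
# The optimiser fits the estimation noise; the naive 1/N rule cannot be fooled
# by it, which is the sense in which simple can be more robust than complex.
```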
Structural separation is another simple approach to the problem of making the system more resilient …
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
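A minimax rule is easy to state in code. The toy example below, with purely invented loss numbers, shows how the policy that looks best on probability-weighted expected loss can differ from the policy with the least-bad worst case.

```python
# A made-up illustration of a "minimax" choice: rather than minimising the
# probability-weighted expected loss, pick the policy whose WORST outcome is
# least bad. All numbers are purely illustrative.
scenario_probs = {"benign": 0.80, "downturn": 0.15, "crisis": 0.05}
policy_losses = {
    "fine-tuned rule":      {"benign": 1, "downturn": 3, "crisis": 40},
    "simple backstop":      {"benign": 3, "downturn": 5, "crisis": 12},
    "structural firebreak": {"benign": 4, "downturn": 6, "crisis": 9},
}

def expected_loss(policy: str) -> float:
    return sum(scenario_probs[s] * loss for s, loss in policy_losses[policy].items())

best_expected = min(policy_losses, key=expected_loss)
best_minimax = min(policy_losses, key=lambda p: max(policy_losses[p].values()))

print("best on expected loss:", best_expected)   # the fine-tuned rule wins on average
print("best on worst case   :", best_minimax)    # the firebreak wins under minimax
```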
And a reminder about the dangers of over-engineering …
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. As the quote above notes, complex intervention rules may simply add to existing uncertainties in the system, and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio comes under stress.
In theory the calculation should be simple and intuitive but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
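For orientation, the sketch below sets out only the headline Basel III schedule as I understand it (my simplification, ignoring countercyclical and systemic buffers, boundary conventions and national variations). The complexity I am referring to arises when this schedule interacts with moving earnings, distributions and risk weighted assets in a stress scenario, which the sketch does not attempt to capture.

```python
# Simplified sketch of the Basel III capital conservation schedule. The buffer
# sits on top of the 4.5% CET1 minimum and is divided into quartiles; the
# deeper a bank sits in the buffer, the more of its earnings it must retain.
CET1_MIN = 4.5
BUFFER = 2.5

def minimum_earnings_retention(cet1_ratio: float) -> float:
    """Return the share of earnings that must be retained (not distributed)."""
    if cet1_ratio <= CET1_MIN:
        return 1.0                              # no distributions at or below the minimum
    depth = (cet1_ratio - CET1_MIN) / BUFFER    # position within the buffer (0 to 1+)
    if depth >= 1.0:
        return 0.0                              # buffer fully restored, no constraint
    quartile = int(depth * 4)                   # 0, 1, 2 or 3
    return {0: 1.0, 1: 0.8, 2: 0.6, 3: 0.4}[quartile]

for ratio in (4.6, 5.5, 6.0, 6.8, 7.5):
    print(f"CET1 {ratio:.1f}% -> retain at least {minimum_earnings_retention(ratio):.0%} of earnings")
```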

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile”, as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

Looking under the hood – The IRB formula

This post is irredeemably technical so stop here if that is not your interest. If you need to understand some of the mechanics of the formula used to calculate credit risk weighted assets under the advanced Internal Ratings Based (IRB) approach, then a paper the BCBS published in 2005 offers an explanation of:

  • the economic foundations of the approach, and
  • the underlying mathematical model and its input parameters.

While a lot has changed as a result of Basel III, the models underlying the calculation of Internal Ratings Based (IRB) capital requirements are still based on the core principles agreed under Basel II that are explained in this BCBS paper.

The notes in the linked page below mostly summarise the July 2005 paper with some emphasis (bolded text) and comments (in italics) that I have added. The paper is a bit technical but worth reading if you want to understand the original thinking behind the Basel II risk weights for credit risk.

I initially found the paper useful for revisiting the foundation assumptions of the IRB framework as background to considering the regulatory treatment of Expected Loss as banks transition to IFRS 9. The background on how the risk weights were initially intended to cover both Expected and Unexpected Loss, but were revised such that capital was only required to cover Unexpected Loss, is especially useful when considering the interaction of loan loss provisioning with capital requirements.
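For reference, the sketch below implements the corporate risk-weight function as I read it from the 2005 explanatory note (using scipy for the normal distribution functions). Note the deduction of PD × LGD, which reflects the shift to an unexpected-loss-only capital requirement discussed above. Treat it as illustrative of the original Basel II formula rather than a statement of the current rules.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def irb_corporate_capital(pd: float, lgd: float, maturity: float) -> float:
    """Capital requirement K (per unit of EAD) for corporate exposures, per the
    Basel II IRB risk-weight function described in the 2005 explanatory note."""
    # Asset correlation decreases as PD rises (bounded between 12% and 24%)
    weight = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * weight + 0.24 * (1 - weight)

    # Conditional default probability at the 99.9th percentile of the
    # systematic factor (the single-risk-factor / Vasicek model)
    conditional_pd = norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r))

    # Capital covers UNEXPECTED loss only: expected loss (PD x LGD) is deducted,
    # which is the revision to the original Basel II proposal discussed above.
    b = (0.11852 - 0.05478 * log(pd)) ** 2              # maturity adjustment
    maturity_adj = (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return (lgd * conditional_pd - pd * lgd) * maturity_adj

k = irb_corporate_capital(pd=0.01, lgd=0.45, maturity=2.5)
print(f"K = {k:.2%} of EAD, risk weight = {12.5 * k:.0%}")   # RWA = K x 12.5 x EAD
```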

Reading the BCBS paper has also been useful for thinking through a range of related issues including:

  • The rationale for, and impact of, prudential conservatism in setting the risk parameters used in the IRB formula
  • The cyclicality of a risk-sensitive capital requirement (and the potential for pro-cyclicality) and what might be done to mitigate the risk of pro-cyclical impacts on the economy

If you have read this far then my summary of the BCBS paper and my comments/observations can be found here (and thank you).

I am not a credit risk model expert, so the summary of the paper and my comments must be read with that in mind. I did this to help me think through some of the issues with bank capital adequacy. Hopefully others will find the notes useful. If you see something wrong or something you disagree with then let me know.