What should count as bank capital?

My last post looked at an RBNZ consultation paper which addressed the question “How much capital is enough?”. The overall quantum of capital the RBNZ arrived at (16% of RWA or more) seemed reasonable, but it was less obvious that relying almost entirely on CET1 was the right solution. That prompted me to revisit an earlier consultation paper in which the RBNZ set out its case for why it did not want contingent capital instruments to play a significant role in the capital structure of the banks it supervises. This post explores the arguments the RBNZ marshals to support its position, as part of a broader exploration of the debate over what counts as capital.

The traditional approach to this question assumes that common equity is unquestionably the best form of capital from the perspective of loss absorption. Consequently, the extent to which alternative forms of funding count as capital is judged by common equity benchmarks; e.g. the extent to which the funding is a permanent commitment (i.e. no maturity date) and the returns paid to investors depend on the profitability or capacity of the company to pay (failure to pay is not an event of default).

There is no dispute that tangible common equity unquestionably absorbs loss and is the foundation of any company’s capital structure but I believe contingent convertible capital instruments do potentially add something useful to the bank capital management toolkit. I will attempt to make the case that a foundation of common equity, supplemented with some debt that converts to common equity if required, is better than a capital structure comprised solely or largely of common equity.

The essence of my argument is that there is a point in the capital structure where adding contingent convertible instruments enhances market discipline relative to just adding more common equity. The RBNZ discusses the potential value of these structures in their consultation paper:

49. The theoretical literature on contingent debt explores how these instruments might reduce risk (i.e. lower the probability of insolvency) for an individual bank.  

50. Two effects have been identified. Firstly, adding contingent debt to a bank’s balance sheet directly increases the loss absorbing potential of the bank, relative to issuing pure debt (but not relative to acquiring more common equity). This follows directly from the fact that removing the debt is an essential part of every contingent debt instrument. Secondly, depending on the terms, contingent capital may cause bank management to target a lower level of risk (incentive effects). In other words, in theory, a contingent debt instrument both reduces the probability a bank will incur losses and absorbs losses that do eventuate. Because of both these factors, contingent debt is expected, in theory, to reduce the risk of bank failure.  

51. Focusing on the second of these effects, management incentives, it matters whether, when the debt is written off, holders are compensated in the form of newly issued shares (“conversion”). If conversion is on such a scale as to threaten existing shareholders with a loss of control of the bank, it will be optimal for bank management to target a lower level of risk exposure for a given set of circumstances than would have been the case otherwise. For example, bank management may be less tolerant of asset volatility, and more likely to issue new equity to existing shareholders, when capital is low rather than risk triggering conversion.

RBNZ Capital Review Paper 2: What should qualify as bank capital? Issues and Options (para 49 – 51) – Emphasis added

So the RBNZ does recognise the potential value of contingent debt instruments which convert into common equity, but it chose to downplay the benefits while placing much greater weight on a series of concerns it identified.

What’s in a name – The RBNZ Taxonomy of Capital

Before digging into the detail of the RBNZ concerns, it will be helpful to first clarify terminology. I am using the term Contingent Convertible Instruments for my preferred form of supplementary capital whereas much of the RBNZ paper focuses on what it refers to as “contingent debt instruments”, which it defines in part as “debt that absorbs loss via write-off, which may or may not be followed by conversion”.

I had not picked this up on my first read of the RBNZ paper but came to realise we are talking slightly at cross purposes. The key words to note are “contingent” and “convertible”.

  • The “contingent” part of these instruments is non-negotiable if they are to be accepted as bank regulatory capital. The contingency is either a “non-viability event” (e.g. the supervisor determines that the bank must increase common equity to remain viable) or a CET1 ratio of 5.125% or less (what APRA terms a “loss absorption trigger” and the RBNZ refers to as a “going-concern trigger”).
  • “Conversion” however is optional. Loss absorption is non-negotiable for bank regulatory capital but it can be achieved in two ways. I have argued that loss absorption is best achieved by converting these capital instruments into common equity but prudential regulation is satisfied so long as the instruments are written-off.

I had taken it as given that these instruments would be convertible, but the RBNZ places more emphasis on the possibility that conversion “may or may not” follow write-off. It is a small point but worth noting when evaluating the arguments.

Why does conversion matter?

The RBNZ understandably focuses on the write-off part of the loss absorption process whereas I focus on conversion because it is essential to preserving a loss hierarchy that allocates losses to common equity in the first instance. If we ignore for a moment the impact of bail-in (either by conversion or write-off), the order in which losses are applied to the various sources of funding employed by a bank follows this loss hierarchy:

  • Going Concern:
    • Common Equity Tier 1 (CET1)
    • Additional Tier 1 (AT1)
  • Insolvency – Liquidation or restructuring:
    • Tier 2 (T2)
    • Senior unsecured
    • Super senior
      • Covered bonds
      • Deposits
      • Insured deposits

Under bail-in, writing off a contingent capital instrument generates an increase in common equity that accrues to the existing ordinary shareholders, thereby negating the traditional loss hierarchy that requires common equity to be exhausted before more senior instruments can be required to absorb loss.

Conversion is a far better way to effect loss absorption because ordinary shareholders still bear the brunt of any loss, albeit indirectly via the dilution of their shareholding (and associated share price losses). In theory, conversion shields the AT1 investors from loss absorption because they receive common equity equivalent in value to the book value of their claim on the issuer. In practice, it is less clear that the AT1 investors will be able to sell the shares received at the conversion price or better but they are still better off than if they had simply seen the value of their investment written-off. If you are interested in digging deeper, this post looks at how loss absorption works under bail-in.
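The dilution mechanics can be sketched with some hypothetical numbers (the face value, conversion price and share count below are illustrative assumptions, not drawn from any actual instrument):

```python
# Illustrative sketch of loss absorption via conversion rather than write-off.
# All figures are hypothetical assumptions for the purpose of the example.

def convert_at1(at1_face, conversion_price, existing_shares):
    """Convert an AT1 instrument into ordinary shares at the conversion price.

    Returns the number of new shares issued and the resulting dilution of
    existing shareholders (their loss is indirect, via ownership dilution).
    """
    new_shares = at1_face / conversion_price
    dilution = new_shares / (existing_shares + new_shares)
    return new_shares, dilution

# A bank converts $500m of AT1 at a $2.00 conversion price; 1,000m shares on issue.
new_shares, dilution = convert_at1(at1_face=500.0, conversion_price=2.0,
                                   existing_shares=1000.0)
print(new_shares, dilution)  # 250m new shares; existing holders diluted by 20%
```

Under a write-off, by contrast, no new shares would be issued and the $500m gain would accrue to the existing shareholders, which is precisely the inversion of the loss hierarchy described above.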

The RBNZ does recognise this dynamic but still chose to reject these advantages so it is time to look at their concerns.

RBNZ concerns with contingent capital

The RBNZ identified six concerns to justify its in-principle decision to exclude the use of contingent capital instruments in the NZ capital adequacy framework.

  1. Possible under-estimation of the tax effects of contingent debt
  2. Reliance on parent entities as purchasers of AT1 contingent debt
  3. Not suitable for retail investors
  4. Banks structured as mutual societies cannot offer contingent debt that includes conversion into common equity
  5. Potential for regulatory arbitrage arising from the tension between tax and capital regulation
  6. Difficulties with exercising regulatory oversight of contingent debt

I don’t imagine the RBNZ is much concerned with my opinion but I don’t find the first three concerns compelling. I set out my reasons later in the post but will focus for the moment on three issues that I think do bear deeper consideration. You do not necessarily have to agree with the RBNZ’s assessment of these concerns, or the weight it assigns to them, but I believe they must be addressed if we are to make the case for contingent debt.

Stronger arguments against contingent debt

1) Contingent debt gives the larger, listed banks a competitive advantage over mutual societies that are unable to issue ordinary shares

The RBNZ notes that all New Zealand banks are able to issue a version of contingent debt that qualifies as capital, but that some types of banks may have access to a broader – and cheaper – range of capital opportunities than others. The current definition of capital is thus in part responsible for a somewhat uneven playing field.

The primary concern seems to be banks structured as mutual societies, which are unable to issue ordinary shares. They cannot offer contingent debt that includes conversion and must rely on the relatively more expensive option of writing off the debt to effect loss absorption.

I think this is a reasonable concern but I also believe there may be ways to deal with it. One option is for these banks to issue Mutual Equity Interests as has been proposed in Australia. Another option (also based on an Australian proposal) is that the increased requirements for loss absorbing capital be confined to the banks which cannot credibly be allowed to fail or be resolved in any other way. I recognise that this option benefits from the existence of deposit insurance which NZ has thus far rejected.

I need to do a bit more research on this topic so I plan to revisit the way we deal with small banks, and mutuals in particular, in a future post.

2) Economic welfare losses due to regulatory arbitrage opportunities in the context of contingent debt

The tax treatment of payments to security holders is one of the basic tests for determining whether a security is debt or equity, but contingent debt instruments don’t fall neatly into either box. The conversion terms tied to PONV triggers make the instruments equity-like when the issuer is under financial stress, while the contractual nature of the payments to security holders makes them appear more debt-like under normal operating conditions.

I can see a valid prudential concern, but only to the extent that the debt-like features the tax authority relied on in making its determination regarding tax-deductibility somehow undermined the ability of the instrument to absorb loss when required.

There have been instances where securities have been mis-sold to unsophisticated investors (the Monte dei Paschi di Siena example cited by the RBNZ is a case in point) but it is less obvious that retail investment by itself is sufficient cause to rule out this form of capital.

The only real difference I see relative to conventional forms of debt is the point at which their equity-like features come into play. Conventional debt is only ever at risk of loss absorption in the event of bankruptcy, where its seniority in the loss hierarchy determines the extent to which the debt is repaid in full. These new forms of bank capital bring forward the point at which a bank balance sheet can be restructured, in order to address the risk that the restructuring undermines confidence in the bank. The economics of the restructuring are analogous so long as losses are allocated by conversion rather than by write-off alone.

3) Difficulties experienced with the regulatory oversight of contingent debt

Possibly the RBNZ’s core concern is that overseeing instrument compliance is a complex and resource-intensive process that it believes does not fit well with its regulatory model, which emphasises self-discipline and market discipline. The RBNZ highlights two points in particular.

  • Firstly, the RBNZ has chosen to respond to the challenge of vetting these instruments by instituting a “non-objection process” that places the onus on issuers to confirm that their instruments comply with the capital adequacy requirements.
  • Secondly, notwithstanding the non-objection process, the added complexity of these instruments relative to common equity still requires a significant call on prudential resources.

This, I think, is the strongest objection the RBNZ raises against contingent debt. Contingent debt securities are clearly more complex than common equity, so the RBNZ quite reasonably argues that they need to bring something extra to the table to justify the time, effort and risk associated with them. There is virtually no justification for them if, as the RBNZ asserts, they work against the principles of self and market discipline that underpin its regulatory philosophy.

Three not so compelling reasons for restricting the use of contingent capital instruments (in my humble opinion)

1) Possible under-estimation of the tax effects of contingent debt

The first concern relates to the RBNZ requirement that banks must acknowledge any potential tax implications arising from contingent debt and reflect these potential “tax offsets” in the reported value of capital. Banks are required to obtain a binding ruling from the NZ tax authority (or voluntarily take a tax “haircut”). The RBNZ acknowledges that a binding ruling can provide comfort that tax is fully accounted for under prudential requirements, but quite reasonably argues that this will only be the case if the ruling that is sought is appropriately specified so as to capture all relevant circumstances.

The RBNZ’s specific concern seems to be what happens when no shares are issued in the event of the contingent loss absorption feature being triggered, and hence no consideration is paid to investors in exchange for writing off their debt claim. The bank has made a gain that in principle would create a tax liability, but it also seems reasonable to assume that the write-off could only occur if the bank was incurring material losses. It follows that the contingent tax liability created by the write-off is highly likely to be set off against the tax losses, such that there is no tax to pay.
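A stylised calculation illustrates why the liability seems unlikely to crystallise (the dollar figures and the 28% rate are my assumptions for illustration; actual NZ tax treatment may differ):

```python
# Hypothetical sketch: a write-off gain set off against the accumulated losses
# that triggered the write-off in the first place. Figures are assumptions only.
write_off_gain = 500.0            # AT1 debt written off, booked as income
accumulated_tax_losses = 2000.0   # losses incurred before hitting the trigger
tax_rate = 0.28                   # NZ company tax rate

# The gain is assessable but is absorbed by the carried-forward losses.
taxable_income = max(0.0, write_off_gain - accumulated_tax_losses)
tax_payable = tax_rate * taxable_income
print(tax_payable)  # 0.0 in this scenario
```

The liability only bites if the write-off gain exceeds the accumulated losses, which seems hard to reconcile with a bank distressed enough to hit the trigger.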

I am not a tax expert so I may well be missing something but I can’t see a practical risk here. Even in the seemingly unlikely event that there is a tax payment, the money represents a windfall gain for the public purse. That said, I recognise that the reader must still accept my argument regarding the value of having the conversion option to consider it worth dealing with the added complexity.

2) A reliance on parent entities as purchasers of AT1 contingent debt

The RBNZ and I both agree that one of the key planks in the case for accepting contingent debt as bank capital is the beneficial impact on bank risk taking generated by the risk of dilution, but the RBNZ argues this beneficial impact is less than it could be when the instrument is issued by a NZ subsidiary to its publicly listed parent.

I may be missing something here, but the parent is exposed to dilution if the Non-Viability or Going Concern triggers are hit, so I can’t see how that reduces the incentive to control risk unless the suggestion is that NZ management will somehow have the freedom to pursue risky business strategies with no input from their ultimate owners.

3) Retail investors have acquired contingent debt

The RBNZ cites some statistical evidence suggesting that, in contrast to the experience overseas, there appears to be limited uptake by wholesale investors of contingent debt issued by the big four banks. This prompts it to question whether the terms being offered on instruments issued outside the parent group are sufficiently attractive for sophisticated investors. This concern seems to be predicated on the view that retail will always be the least sophisticated investors, so banks will seek to take advantage of their relative lack of knowledge.

It is arguably true that retail investors will tend to be less sophisticated than wholesale investors, but that should not in itself lead to the conclusion that any issue targeted at retail is a cynical attempt at exploitation; retail investors might legitimately value something differently to the way other investors do. The extent to which the structures issued by the Australian parents have thus far concentrated on retail, for example, might equally be explained by the payment of franking credits, which are more highly valued by the retail segment. Offshore institutions might also have been negative on the Australian market, pushing Australian banks to focus their efforts in the domestic market.

I retain an open mind on this question and need to dig a bit deeper but I don’t see how the fact that retail investment dominates the demand for these structures at a point in time can be construed to be proof that they are being mis-sold.

The RBNZ’s answer ultimately lies in their regulatory philosophy

The reason the RBNZ rejects the use of these forms of supplementary capital ultimately appears to lie in its regulatory philosophy, which is based on the following principles:

  • Self-discipline on the part of the financial institutions it supervises
  • Market discipline
  • Deliberately conservative
  • Simplicity

The RBNZ also acknowledges the value of adopting BCBS-consistent standards but this is not a guiding principle. It reserves the right to adapt them to local needs and, in particular, to be more conservative. It should also be noted that the RBNZ has quite deliberately rejected adopting deposit insurance on the grounds (as I understand it) that this encourages moral hazard. It takes this a step further by foregoing any depositor preference in the loss hierarchy and by a unique policy of Open Bank Resolution (OBR), under which deposits are explicitly included in the liabilities that can be written down, if needed, to assist in the recapitalisation of an insolvent bank.

In theory, the RBNZ might have embraced contingent convertible instruments on the basis of their consistency with the principles of self and market discipline. The threat of dilution via conversion of the instrument into common equity creates powerful incentives not just for management to limit excessive risk taking but also for the investors to exert market discipline where they perceive that management is not exercising self-discipline.

In practice, the RBNZ seems to have discounted this benefit on the grounds that there is too much risk, either by design or by some operational failure, that these instruments might not convert to common equity. It also seems quite concerned with structures that eschew conversion (i.e. loss absorption effected by write-off alone), but it could have excluded just these instruments rather than imposing a blanket ban. Having largely discounted or disregarded the potential benefit, the principles of deliberate conservatism and simplicity dictate its proposed policy position: common equity rules.

Summing up

This post only scratches the surface of this topic. My key point is that contingent convertible capital instruments potentially add something useful to the bank capital management toolkit compared to relying entirely on common equity. The RBNZ acknowledge the potential upside but ultimately argue that the concerns they identify outweigh the potential benefits. I have reviewed their six concerns in this post but need to do a bit more work to gain comfort that I am not missing something and that my belief in the value of bail-in based capital instruments is justified.

Tony

How much capital is enough? – The NZ perspective

The RBNZ has delivered the fourth instalment in a Capital Review process that was initiated in March 2017 and has a way to run yet. The latest consultation paper addresses the question “How much capital is enough?”. The banking industry has until 29 March 2019 to respond with their views but the RBNZ’s proposed answer is:

  • A Tier 1 capital requirement of 16% of RWA for systemically important banks and 15% of RWA for all other banks
  • The Tier 1 minimum requirement to remain unchanged at 6% (with AT1 capital continuing to be eligible to contribute a maximum of 1.5 percentage points)
  • The proposed increased capital requirement to be implemented via an overall prudential capital buffer of 9-10% of RWA comprised entirely of CET1 capital;
    • Capital Conservation Buffer 7.5% (currently 2.5%)
    • D-SIB Buffer 1.0% (no change)
    • Counter-cyclical buffer 1.5% (currently 0%)

The increase in the capital ratio requirement is proposed to be supplemented with a series of initiatives that will increase the RWA of IRB banks:

  • The RBNZ proposes to 1) remove the option to apply IRB risk weights to sovereign and bank exposures, 2) increase the IRB scalar (from 1.06 to 1.20) and 3) introduce an output floor set at 85% of the Standardised RWA on an aggregate portfolio basis
  • As at March 2018, RWAs produced by the IRB approach averaged 76% of the Standardised Approach and the RBNZ estimates that the overall impact will be to increase aggregate RWA to 90% of the outcome generated by the Standardised approach (i.e. the IRB changes, not the output floor, drive the increase in RWA)
  • Aggregate RWA across the four IRB banks therefore increases by approximately 16%, or $39bn, compared to March 2018 but the exact impact will depend on how IRB banks respond to the higher capital requirements
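The mechanics of the scalar change and output floor can be sketched with indicative numbers (the RWA index below is my own assumption for illustration; the actual impact depends on each bank’s portfolio and on the sovereign/bank exposure changes, which are not modelled here):

```python
# Hypothetical sketch of the proposed IRB RWA changes (illustrative numbers only).

def rescale_irb_rwa(irb_rwa, old_scalar=1.06, new_scalar=1.20):
    """Re-derive IRB RWA under the higher scalar."""
    return irb_rwa / old_scalar * new_scalar

def apply_output_floor(irb_rwa, standardised_rwa, floor=0.85):
    """IRB RWA cannot fall below 85% of the standardised outcome."""
    return max(irb_rwa, floor * standardised_rwa)

standardised = 100.0       # standardised-approach RWA (index value)
irb = 0.76 * standardised  # IRB currently averages ~76% of standardised

irb_rescaled = rescale_irb_rwa(irb)  # ~86 under the new scalar
irb_final = apply_output_floor(irb_rescaled, standardised)
print(round(irb_rescaled, 1), round(irb_final, 1))
```

On these numbers the higher scalar alone lifts IRB RWA from 76% to roughly 86% of the standardised outcome, so the 85% floor does not bind, consistent with the RBNZ’s observation that the IRB changes rather than the output floor drive the increase.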

The RBNZ has also posed the question whether a Tier 2 capital requirement continues to be relevant given the substantial increase in Tier 1 capital.

Some preliminary thoughts …

There is a lot to unpack in this paper so this post will only scratch the surface of the issues it raises …

  • The overall number that the RBNZ proposes (16%) is not surprising. It looks to be at the lower end of what other prudential regulators are proposing in nominal terms
  • But it is in the same ball park once you allow for the substantial increase in IRB RWA and the fact that it is pretty much entirely CET1 capital
  • What is really interesting is the fundamentally different approach that the RBNZ has adopted to Tier 2 capital and bail-in versus what APRA (and arguably the rest of the world) has adopted
    • The RBNZ proposal that the increased capital requirement take the form of CET1 capital reflects its belief that “contingent convertible instruments” should be excluded from what counts as capital
    • Exactly why the RBNZ has adopted this position is a complex post in itself (their paper on the topic can be found here) but the short version (as I understand it) is that they think bail-in capital instruments triggered by non-viability are too complex and probably won’t work anyway.
    • Their suggestion that Tier 2 probably does not have a role in the capital structure they have proposed is logical if you accept their premise that Point of Non-Viability (PONV) triggers and bail-in do not work.
  • The RBNZ highlight a significantly enhanced role for prudential capital buffers
    • I am generally in favour of bigger, more dynamic, capital buffers rather than higher fixed minimum requirements and I have argued previously in favour of the base rate for the counter-cyclical buffer being a positive value (the RBNZ propose 1.5%)
    • But the overall size of the total CET1 capital buffer requirement requires some more considered thought about 1) the role of bail-in structures and PONV triggers in the capital regulation toolkit (as noted above) and 2) whether the impacts of the higher common equity requirement will be as benign as the RBNZ analysis suggests
  • I am also not sure that the indicative capital conservation responses they have outlined (i.e. discretionary distributions limited to 60% of net earnings in the first 250bp of the buffer, falling to 30% in the next 250bp and no distributions thereafter) make sense in practice.
    • This is because I doubt there will be any net earnings to distribute if losses are sufficient to reduce CET1 capital by 250bp so the increasing capital conservation requirement is irrelevant.
  • Last, but possibly most importantly, we need to consider the impact on the Australian parents of the NZ D-SIB banks and how APRA responds. The increase in CET1 capital proposed for the NZ subsidiaries implies that, for any given amount of CET1 capital held by the Level 2 Banking Group, the increased strength of the NZ subsidiaries will be achieved at the expense of the Australian banking entities
    • Note however that the impact of the higher capital requirement in NZ will tend to be masked by the technicalities of how bank capital ratios are calculated.
      • It probably won’t impact the Level 2 capital ratios at all since these are a consolidated view of the combined banking group operations of the Group as a whole
      • The Level 1 capital ratios for the Australian banks also treat investments in bank subsidiaries relatively generously (capital invested in unlisted subsidiaries is treated as a 400% risk weighted asset rather than a capital deduction).
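The indicative capital conservation schedule described above can be sketched as follows (the mapping of buffer usage to payout caps is my reading of the proposal, and the figures are illustrative assumptions):

```python
# Sketch of the indicative RBNZ capital conservation schedule: discretionary
# distributions capped at 60% of net earnings while the bank is within the
# first 250bp of the buffer, 30% in the next 250bp, and zero thereafter.
# The interpretation of "buffer usage" here is an assumption on my part.

def distribution_cap(buffer_shortfall_bp):
    """Maximum share of net earnings distributable, given how far (in basis
    points) the bank has dipped into its prudential capital buffer."""
    if buffer_shortfall_bp <= 250:
        return 0.60
    if buffer_shortfall_bp <= 500:
        return 0.30
    return 0.0

print(distribution_cap(100), distribution_cap(400), distribution_cap(600))
```

As noted above, a bank that has burned 250bp of CET1 through losses is unlikely to have net earnings to distribute in the first place, so the lower tiers of the schedule may rarely bind in practice.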

Conclusion

Overall, I believe that the RBNZ is well within its rights to expect the banks it supervises to maintain a total level of loss absorbing capital of 16% or more. The enhanced role for capital buffers is also a welcome move.

The issue is whether relying almost entirely on CET1 capital is the right way to achieve this objective. This is however an issue that has been debated for many decades with no clear resolution. It will take some time to fully unpack the RBNZ argument and figure out how best to articulate why I disagree. In the interim, any feedback on the issues I have outlined above would be most welcome.

Tony

Loss absorption under bail-in

I recently did a post on a Discussion Paper setting out how APRA proposes to increase the Loss Absorption Capital (LAC) of Australian authorised deposit-taking institutions (ADIs). I came down on the side of this being a desirable (arguably necessary) enhancement of the Australian financial system but noted that the devil was in the detail. One of the issues discussed was the potential impact of the proposal on the statutory and contractual loss hierarchy that defines the sequence in which losses are absorbed by the capital of the bank in the first instance, and by more senior sources of funding if needed.

This post attempts to dig a bit deeper into this question to better understand how losses would be assigned under a bail-in scenario. It is a pretty technical point and possibly of limited interest but I wanted to make sure I had a good handle on how loss absorption plays out in the future. Read on or stop here.

Key points

  • The bail-in of selected, pre-positioned liabilities modifies the traditional loss hierarchy that applies in a liquidation scenario 
    • As a general rule, the absorption of losses is accelerated across all tiers of LAC
    • CET1 investors bear the loss via the dilution of their shareholdings as AT1 and Tier 2 are converted to common equity
    • AT1 investors risk not receiving distributions but otherwise the loss hierarchy between them and T2 investors seems to collapse once their holdings are converted into CET1
    • The only potential advantage to Tier 2 in these scenarios is that these instruments may only face partial conversion, but how beneficial this is depends on the extent to which conversion to common equity offers a better chance to liquidate their holding than selling the Tier 2 instrument itself into what is likely to be a very illiquid market
  • This has been increasingly true since APRA introduced Point of Non-Viability (PONV) conversion triggers in 2013, and the instruments without this contractual feature progressively matured, but the proposed expansion of the pool of LAC takes us further down this path:
    • partly by virtue of making it easier for APRA to restructure bank capital structures without recourse to taxpayer support (i.e. the odds of bail-in being used in a future crisis are increased if the tool itself is more effective); and
    • partly by increasing the quantum of CET1 dilution that is the mechanism by which losses are allocated to the various tiers of LAC
  • Investors in the various capital tiers will obviously adjust the return they require for the risks they are asked to bear but we should ensure we all have a clear and consistent understanding of how the loss hierarchy is modified, and whether the resulting loss hierarchy is desirable (or indeed equitable)
  • The answer to this question turns in part on whether the outcomes for AT1 and T2 investors are better or worse than the market value they could achieve if they sold their investments prior to bail-in 

Loss Hierarchy – the simple version

Prudential Standard APS 111 (Capital Adequacy: Measurement of Capital) defines the order of seniority amongst the three tiers of prudential capital:

  • CET1 Capital “… rank behind the claims of depositors and other more senior creditors in the event of a winding up of the issuer ” (Para 19 (d))
  • AT1 Capital “… rank behind the claims of depositors and other more senior creditors in the event of a winding up of the issuer” (Para 28 (c))
  • Tier 2 Capital “represents, prior to any conversion to Common Equity Tier 1 … the most subordinated claim in liquidation of the issuer after Common Equity Tier 1 Capital instruments and Additional Tier 1 Capital instruments” (Attachment H, Para 1 (b))

APS 111 (Attachment F, Para 10) also explicitly allows AT1 instruments to 1) differentiate as to whether the instrument is required to convert or be written-off in the first instance, and 2) provide for a ranking under which individual AT1 instruments will be converted or written-off. The guidance on Tier 2 is less explicit on this point but there does not seem to be any fundamental reason why a bank could not introduce a similar ranking within the overall level of subordination. I am not aware of any issuer using this feature for either AT1 or T2.

If we ignore for a moment the impact of bail-in (either by conversion or write-off), the order in which losses are applied to the various sources of funding employed by a company follows this loss hierarchy:

  • Going Concern:
    • Common Equity Tier 1 (CET1)
    • Additional Tier 1 (AT1)
  • Insolvency – Liquidation or restructuring:
    • Tier 2 (T2)
    • Senior unsecured
    • Super senior
      • Covered bonds
      • Deposits
      • Insured deposits

CET1 is clearly on the front line of loss absorption (a perpetual commitment of funding with any returns subject to the issuer having profits to distribute and the Capital Conservation Ratio (CCR) not being a constraint). AT1 is subject to similar restrictions, though its relative seniority does offer some protection regarding the payment of regular distributions.

Traditionally, the claims the other forms of funding have on the issuer are only at risk in the event of the liquidation or restructuring of the company but bail-in modifies this traditional loss hierarchy.

What happens to the loss hierarchy under bail in?

First up, let’s define bail-in …

“A bail-in is the rescue of a financial institution that is on the brink of failure whereby creditors and depositors take a loss on their holdings. A bail-in is the opposite of a bailout, which involves the rescue of a financial institution by external parties, typically governments that use taxpayers’ money.” (Investopedia)

Investopedia’s definition above is useful, albeit somewhat generic. Never say never, but the loss hierarchy employed in Australia, combined with the fact that there are substantial layers of more junior creditors for big banks in particular, means that most Australian depositors (even the ones that do not have the benefit of deposit insurance) are pretty well insulated from bail-in risk. Not everyone would share my sanguine view on this question (i.e. the limited extent to which deposits might be bailed in) and some countries (NZ for example) quite explicitly choose to forego deposit insurance and move deposits up the loss hierarchy by ranking them equally with senior unsecured creditors.

The main point of bail-in is that existing funding is used to recapitalise the bank, as opposed to relying on an injection of new capital from outside which may or may not be forthcoming. It follows that pre-positioning sufficient layers of loss absorption, and making sure that investors understand what they have signed up for, is critical.

AT1 has always been exposed to the risk of its distributions being cut. This sounds good in theory for loss absorption but the size of these potential capital outflows is relatively immaterial in any real stress scenario. It could be argued that every dollar helps but my view is that the complexity and uncertainty introduced by making these distributions subject to the Capital Conservation Ratio (CCR) outweigh any contribution they might make to recapitalising the bank. The people who best understand this point are those who have had to calculate the CCR in a stress scenario (you have to get into the detail to understand it). The CCR issue could be addressed by simplifying the way it is calculated and I would argue that simplicity is always a desirable feature of any calculation that has to be employed under conditions of stress and uncertainty. The main point however is that it does very little to help recapitalise the bank because the heavy lifting in any really severe stress scenario depends on the capacity to convert a pool of pre-positioned, contingent capital into CET1.

APRA has had explicit power to bail-in AT1 and T2 since the January 2013 version of APS 111 introduced Point of Non-Viability (PONV) conversion triggers – these enhanced powers do a few things:

  • The impact of losses is brought forward relative to what would apply in a conventional liquidation or restructuring process
  • For CET1 investors, this accelerated impact is delivered via the dilution of their shareholdings (and associated share price losses)
  • In theory, conversion shields the AT1 investors from loss absorption because they receive common equity equivalent in value to the book value of their claim on the issuer
  • In practice, it is less clear that the AT1 investors will be able to sell the shares at the conversion price or better, especially if market liquidity is adversely impacted by the events that called the viability of the issuer into question
  • The conversion challenge will be even greater to the extent that T2 investors are also bailed-in and seek to sell the shares they receive

Tier 2 will only be bailed-in after AT1 bail-in has been exhausted, as would be expected given its seniority in the loss hierarchy, but it is hard to see a bail-in scenario playing out where the conversion of AT1 alone is sufficient to restore the viability of the bank. AT1 is likely to represent not much more than the 1.5 percentage points of RWA required to meet minimum requirements but any crisis sufficient to threaten the viability of a bank is likely to require a much larger recapitalisation so full or partial conversion of T2 should be expected.
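The scale question can be made concrete with a stylised example. All of the figures below are hypothetical; only the 1.5 percentage point AT1 minimum comes from the discussion above:

```python
# Stylised recapitalisation arithmetic; all figures are hypothetical ($bn).
rwa = 400.0            # risk weighted assets
stressed_cet1 = 4.0    # CET1 ratio (%) after severe losses
target_cet1 = 10.5     # ratio (%) needed to restore confidence in viability
at1_on_issue = 1.5     # AT1 on issue (% of RWA), the regulatory minimum

recap_needed = (target_cet1 - stressed_cet1) * rwa / 100   # $26bn required
at1_converted = at1_on_issue * rwa / 100                   # $6bn from AT1
t2_needed = recap_needed - at1_converted                   # $20bn from Tier 2
```

On these numbers AT1 covers less than a quarter of the recapitalisation, which is why full or partial conversion of Tier 2 should be expected.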

Partial conversion 

Attachment J – Para 6 provides that “Conversion or write-off need only occur to the extent necessary to enable APRA to conclude that the ADI is viable without further conversion or write-off”. Para 8 of the same attachment also specifies that “An ADI may provide for Additional Tier 1 Capital instruments to be converted or written off prior to any conversion or write-off of Tier 2 Capital instruments”.

This makes it reasonably clear that APRA will not automatically require all AT1 and Tier 2 to be converted or written-off but the basis on which partial conversion would be applied is not covered in the discussion paper. A pro-rata approach (i.e. work out how much of the aggregate Tier 2 is required to be converted and then apply this ratio to each individual instrument) seems the simplest option and least open to legal challenge but it may be worth considering alternatives.

Converting the Tier 2 instruments closest to maturity in particular seems to offer some advantages over the pro rata approach:

  • It generates more CET1 capital than the Tier 2 foregone (because the Tier 2 capital value of an instrument is amortised in its final 5 years to maturity whereas the CET1 capital created by bail-in is the full face value of the instrument)
  • It defers the need to replace maturing Tier 2 capital and maximises the residual pool of LAC post bail-in.
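The first point can be sketched in a few lines, assuming a simple straight-line reading of the Tier 2 amortisation rule (the actual APS 111 treatment works in discrete annual steps, which this ignores):

```python
def t2_capital_value(face_value: float, years_to_maturity: float) -> float:
    """Tier 2 capital recognised for an instrument, approximated as
    straight-line amortisation over the final five years to maturity.
    A simplification of the APS 111 amortisation rule."""
    amortisation_factor = min(1.0, max(0.0, years_to_maturity) / 5.0)
    return face_value * amortisation_factor

def cet1_created_less_t2_foregone(face_value: float, years: float) -> float:
    """Conversion creates CET1 equal to the full face value, while the
    Tier 2 capital given up is only the amortised amount."""
    return face_value - t2_capital_value(face_value, years)

# A $100m note with 2 years to run counts as only $40m of Tier 2 capital,
# but conversion creates the full $100m of CET1: a $60m net capital gain.
```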

What is the reason for the 20% floor that APS 111 imposes on the conversion price?

The transition to a bail-in regime may be an opportune time to revisit the rationale for placing a floor on the conversion price used to convert AT1 and Tier 2 into common equity. Attachments E and F contain an identically worded paragraph 8 that requires that the share price used to calculate the shares received on conversion cannot be less than 20% of the ordinary share price at the time the LAC instrument was issued. This floor arguably requires the share price to fall a long way before it has any effect but it is not clear what purpose is served by placing any limit on the extent to which common equity shareholders might see their holdings diluted in a non-viability scenario.
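The effect of the floor on converting investors can be sketched as follows. The prices are hypothetical and the formula is simplified; the actual conversion mechanics involve averaging and adjustment terms this ignores:

```python
def shares_on_conversion(claim: float, market_price: float,
                         issue_date_price: float, floor: float = 0.2) -> float:
    """Shares received on conversion of a LAC instrument, with the
    conversion price floored at 20% of the issue-date ordinary share
    price. A simplified sketch of the APS 111 floor, not the full formula."""
    conversion_price = max(market_price, floor * issue_date_price)
    return claim / conversion_price

# Issue-date price $30 implies a $6 floor. If the shares trade at $2 at the
# point of non-viability, a $120m claim converts at $6 into 20m shares
# worth only $40m at the prevailing market price.
shares = shares_on_conversion(120e6, 2.0, 30.0)
value_at_market = shares * 2.0
```

Once the floor binds, the shortfall between the claim and the market value of the shares received is a loss borne by the converting AT1 or Tier 2 investors rather than by the diluted shareholders.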

Bail-in via write-off of AT1 or T2

I am concentrating on bail-in via conversion because that seems to be the default loss absorption contemplated by APS 111 and the one that is most consistent with the traditional loss hierarchy. LAC instruments can be designed with write-off as the primary loss absorption mechanism but it is not clear that any issuer would ever choose to go down that path as it would likely be more expensive than bail-in via conversion. The write-off option seems to have been included as a failsafe in the event that conversion is not possible for whatever reason.

Conclusion

The loss absorption hierarchy under a bail-in based capital regime is a bit more complicated than the simple, progressive three tier hierarchy that would apply in a traditional liquidation scenario. I believe however that this added complexity is justified both by the enhanced level of financial safety and by the extent to which it addresses the advantage big banks have previously enjoyed by virtue of being Too Big To Fail.

The main concern is that AT1 and Tier 2 investors who underwrite the pre-positioning of this contingent source of new CET1 capital properly understand the risks. I must confess that I had to think it through and remain open to the possibility that I have missed something … if so tell me what I am missing.

Tony

 

Mortgage risk weights – fact check

It is frequently asserted that the major Australian banks have been “gifted” a substantially lower mortgage risk weight than the smaller banks. To be precise, the assertion is that the major banks are only required to hold capital based on a 25% risk weight versus 39% for smaller banks.

If you are not familiar with the arcane detail of bank capital adequacy, then you could be forgiven for concluding that this differential (small banks apparently required to hold 56% more capital for the same risk) is outrageous and unfair. While the risk weights for big banks are certainly lower on average than those required of small banks, I believe the difference in capital requirements is not as large as the simple comparison of risk weights suggests.

Bank capital requirements involve more than risk weights

To understand why this comparison of risk weights is misleading, it will be helpful to start with a quick primer on bank capital requirements. The topic can be hugely complex but, reduced to its essence, there are three elements that drive the amount of capital a bank holds:

  1. The risk weights applied to its assets
  2. The target capital ratio applied to those risk weighted assets
  3. Any capital deductions required when calculating the capital ratio

Problem 1 – Capital adequacy ratios differ

The comparison of capital requirements based on risk weights implicitly assumes that the regulator applies the same capital ratio requirement to all banks, but this is not the case. Big banks are targeting CET1 ratios based on the 10.5% Unquestionably Strong benchmark set by APRA while there is a greater range of practice amongst the smaller banks. Bendigo and Suncorp appear to be targeting a CET1 ratio in the range of 8.5 to 9.0% while the smaller of the small banks appear to be targeting CET1 ratios materially higher (say 15% or more).

If we confine the comparison to the alleged disadvantage suffered by Bendigo and Suncorp, then the higher risk weights they are required to apply to residential mortgages are substantially offset by the lower CET1 ratios they target (the 56% difference in capital required shrinks to something in the order of 30% if you adjust for the difference in target CET1 ratios).

Broadening the comparison to the smaller banks gets even more interesting. At face value the much higher CET1 ratios they appear to target suggest that they are doubly penalised in the required capital comparison, but you have to ask why they are targeting such high CET1 ratios. One possible explanation is that their smaller, less diversified mortgage portfolios are in fact more risky than the more diversified portfolios maintained by their larger competitors.

Problem 2 – You have to include capital deductions

I recognise this is quite technical but, in addition to the capital tied to the risk weight, the big banks are also required to hold capital for a capital deduction linked to the difference between their loan loss provisions and a regulatory capital value called "Regulatory Expected Loss". This capital deduction increases the effective risk weight. The exact amount varies from bank to bank but I believe it increases the effective capital requirement by 10-12% (i.e. an effective RW closer to 28%). My understanding is that small banks are not required to make the same capital deduction.
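Problems 1 and 2 can be illustrated together with round numbers. The target ratios are the ones cited above; the 11% EL uplift is my assumption within the 10-12% range, and none of this is an APRA calculation:

```python
def capital_per_dollar_lent(risk_weight, target_cet1_ratio, el_uplift=0.0):
    """CET1 dollars held per dollar of mortgage lending. el_uplift
    approximates the Regulatory Expected Loss deduction as a proportional
    add-on to the risk weight (IRB banks only); illustrative figures."""
    return risk_weight * (1.0 + el_uplift) * target_cet1_ratio

big_rw_only = capital_per_dollar_lent(0.25, 0.105)        # ~2.6 cents per dollar
big_with_el = capital_per_dollar_lent(0.25, 0.105, 0.11)  # ~2.9 cents
small       = capital_per_dollar_lent(0.39, 0.0875)       # ~3.4 cents

print(round(0.39 / 0.25 - 1, 2))          # 0.56: the headline risk-weight gap
print(round(small / big_rw_only - 1, 2))  # 0.3: after the target-ratio difference
print(round(small / big_with_el - 1, 2))  # 0.17: after the EL deduction as well
```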

Problem 3 – The Standardised risk weights for residential mortgages seem set to change

A complete discussion of the RW difference should also take account of the fact that APRA has proposed to introduce lower RW categories for the smaller banks, such that their average RW may be lower than 39% in the future. I don't know what the average RW for small banks would be under these new risk weights but that is a question you could put to the banks who use the 39% figure without acknowledging this fact.

Problem 4 – The risk of a mortgage depends on the portfolio not the individual loan

The statement that a loan is the same risk irrespective of whether it is written by a big bank or small bank sounds intuitively logical but is not correct. The risk of a loan can only be understood when it is considered as part of the portfolio the bank holds. Small banks will typically be less diversified than a big bank.

Problem 5 – What about the capital required for Interest Rate Risk in the Banking Book (IRRBB)?

I don’t have sufficient data to assess how significant this is, but intuitively I would expect that the capital that the major banks are required to hold for IRRBB will further narrow the effective difference between the risk weights applied to residential mortgages.

Summing up

My aim in this post was not to defend the big banks but rather to try to contribute some of the knowledge I have acquired working in this area to what I think is an important but misunderstood question. In the interests of full disclosure, I have worked for one of the large Australian banks and may continue to do work for them in the future.

On a pure risk basis, it seems to me that the loan portfolio of a large bank will tend to be more diversified, and hence lower risk, than that of a smaller bank. It is not a “gift” for risk weights to reflect this.

There is a legitimate debate to be had regarding whether small banks should be given (gifted?) an advantage that helps them compete against the big banks. That debate however should start with a proper understanding of the facts about how much advantage the large banks really have and the extent to which their lower risk weights reflect lower risk.

If you disagree tell me what I am missing …

Capital adequacy – an option to add transparency and flexibility into the “Unquestionably Strong” mix

Two of my earlier posts (here and here) discussed the potential to improve the capital adequacy framework by revisiting the calibration and operation of regulatory capital buffers. Some of the issues discussed in those posts are touched on in a discussion paper APRA has released titled "Improving the transparency, comparability and flexibility of the ADI capital framework".

APRA is open to alternatives but sets out two options for discussion

In APRA’s words, the DP outlines

“… options to modify the ADI capital framework to improve transparency and comparability of reported capital ratios. The main conceptual approaches APRA is considering and seeking feedback on are:

  • developing more consistent disclosures without modifying the underlying capital framework; and

  • modifying the capital framework by adjusting the methodology for calculating capital ratios.”

The First Approach – "Consistent disclosure" – seems to be a beefed-up version of the status quo in which APRA gets more directly involved in the comparability process by adding its imprimatur to the internationally harmonised ratios some Australian banks currently choose to disclose as an additional informal measure of capital strength.

“Under this approach, ADIs would continue to determine regulatory capital ratios using APRA’s definitions of capital and RWA. However, APRA would also specify a methodology for ADIs to determine certain adjustments to capital and RWA that could be used for disclosure (Pillar 3) purposes. As noted above, the methodology would focus on aspects of relative conservatism that are material in size and able to be calculated simply and objectively.”

APRA argues that “The supplementary disclosure would allow all stakeholders to better assess the capital strength of an ADI on a more comparable basis. However, it would result in two APRA-endorsed capital ratios: an APRA regulatory capital ratio to be compared against minimum requirements, and an additional disclosure-only capital ratio for, in particular, international comparison.”

A Second Approach – “Capital ratio adjustments” would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA.

The DP explains that this “… alternative approach would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA. This would involve removing certain aspects of relative conservatism from ADIs’ capital ratio calculations and lifting minimum regulatory capital ratio requirements in tandem. This increase in regulatory capital ratio requirements could be in the form of a transparent adjustment to minimum capital ratio requirements—for the purposes of this paper, such an adjustment is termed the ‘APRA Overlay Adjustment’.”

“To maintain overall capital adequacy, the APRA Overlay Adjustment would need to be calculated such that the total dollar amount of Prudential Capital Requirement (PCR) and Capital Conservation Buffer (CCB) would be the same as that required if these measures were not adopted. In other words, the risk-based capital requirements of ADIs would be unchanged in absolute dollar terms, maintaining financial safety, but adjustments to the numerator and the denominator of the capital ratio to be more internationally comparable would increase reported capital ratios.”
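The dollar-equivalence condition described above can be sketched as follows. The RWA figures are hypothetical and the function is my reading of the concept, not APRA's formula:

```python
def overlay_adjustment(apra_rwa, harmonised_rwa, apra_ratio_requirement):
    """Ratio add-on needed so the dollar capital requirement is unchanged
    when measured against lower, internationally harmonised RWA. A sketch
    of the 'APRA Overlay Adjustment' concept, not APRA's methodology."""
    dollar_requirement = apra_rwa * apra_ratio_requirement
    harmonised_ratio_requirement = dollar_requirement / harmonised_rwa
    return harmonised_ratio_requirement - apra_ratio_requirement

# Hypothetical bank: $400bn RWA under APRA rules, $350bn harmonised.
# A 10.5% requirement on APRA RWA is $42bn, which is 12.0% of harmonised
# RWA, so the overlay adjustment is 1.5 percentage points.
print(overlay_adjustment(400.0, 350.0, 0.105))
```

The same $42bn of required capital is simply restated against a smaller denominator, so the reported ratio rises while financial safety in dollar terms is unchanged.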

APRA clarify that

“These options are not mutually exclusive, and there is potential for both approaches to be adopted and applied in different areas.”

Approach 2 has some clear advantages …

It would make the “unquestionably strong” capital ratios more directly comparable with international peers, thereby reducing the potential for the perception of this strength to be obscured or misunderstood.

"Perception" is the key word here. It matters that the strength of Australian banks is simply and transparently evident, rather than being something observers must work through a sequence of relatively obscure and complex adjustments to fully appreciate. More importantly, perception matters most when the system is under stress and people do not have the time, or the inclination, to look beyond the reported numbers.

The adjusted capital ratio approach also provides an opportunity to increase the flexibility of the ADI capital framework in times of stress, but only to the extent that the Overlay Adjustment is applied to the capital buffer rather than to the minimum capital requirements. Higher minimum requirements would do nothing to enhance flexibility and may even be a backward step.

I believe a non zero baseline for the CCyB would also enhance the flexibility of the capital framework by virtue of the fact that it improves the odds that the banks (and APRA) have a flexible buffer in place before it is needed. This opportunity for enhanced flexibility is an option under both approaches so long as the Unquestionably Strong Benchmark maintains a material surplus over the Capital Conservation Buffer as it currently does.

But also some challenges …

APRA notes that the Adjusted Capital Ratio approach:

  • May significantly increase operational complexity for ADIs by virtue of the fact that the application of the APRA Overlay Adjustment would result in variable capital ratio requirements,

  • Potentially results in variable minimum capital requirements, which introduces complexity in analysing capital buffers and may undermine the desired transparency, and

  • Reduces the dollar value of the 5.125 per cent (of RWA) loss absorption trigger point.

Do the advantages of the Adjusted Capital Ratio approach outweigh the challenges?

The short answer, I think, is yes … albeit with some qualifications.

So far as I can see, the added complexity only enters the discussion to the extent that some of the APRA Overlay Adjustment is applied to increase the minimum capital requirement. Most, if not all, of the operational complexity is avoided if the “Overlay Adjustment” is confined to increasing the size of the capital buffer.

Conversely, the benefits of increased responsiveness (or risk sensitivity) and flexibility lie in an increased capital buffer.

It follows then that the best way to pursue this approach is for any harmonised adjustments to the reported capital ratio to be confined to a higher CCB. This raises the question of whether all the Overlay Adjustment should be applied to the capital buffer. I address that question in my responses below to some of the questions APRA has posed to solicit industry feedback.

One issue not covered in the Discussion Paper in any detail is that the capital ratios under Approach 2 will be more sensitive to any changes in the numerator. This is a simple mathematical consequence of RWA being lower if more harmonised measures are adopted. I do not see this as a problem but the heightened risk sensitivity of the framework needs to be clearly understood beforehand to minimise the potential for larger changes in capital ratios to be misunderstood. A more risk sensitive capital ratio may even be an advantage. This may not be obvious, but there is a body of research which suggests a more responsive, more volatile, measure of capital adequacy can be beneficial to the extent that it prompts greater risk awareness on the part of bank management and external stakeholders. Greg Ip's book "Foolproof" offers an introduction to some of this research but a simple example illustrating the point is the way that the benefits of improved braking in modern cars are offset to some extent by people driving faster.
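The sensitivity point is simple arithmetic; a short sketch with hypothetical figures:

```python
# The same $1bn capital movement shifts the reported ratio further when the
# RWA denominator is smaller (hypothetical figures, in $bn).
shock = -1.0   # a $1bn CET1 loss
for label, rwa in (("APRA RWA", 400.0), ("harmonised RWA", 350.0)):
    move_in_ppts = shock / rwa * 100
    print(label, round(move_in_ppts, 2))   # bigger move on the smaller base
```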

APRA concludes its discussion paper with some direct questions.

There are 8 questions in all but in the context of this post I will have a go at addressing 3 of them, questions 2, 7 and 8.

Question 2: If APRA were to apply a combination of Approach 1 and Approach 2, which aspects of relative conservatism are best suited to be treated under Approach 2?

If you accept the argument that the minimum capital requirement should continue to be a set value (i.e. not subject to periodic adjustment), then the aspects of relative conservatism best suited to Approach 2 are those which can reasonably be assigned to an increase in, and regular adjustment of, one or more of the capital buffers.

Running through the list of adjustments currently applied to generate the internationally harmonised capital ratios, we can distinguish three broad categories of APRA conservatism:

  1. The extra credit risk related losses a bank might expect to experience under a very severe recession or financial crisis style scenario, but not necessarily a gone-concern scenario where losses extend into the tail of the loss distribution
  2. Assets whose value depends on the ADI being a going concern and consequently are less certain to be realisable if the bank is in liquidation or has otherwise reached a point of non-viability
  3. Capital deductions intended to avoid "double counting" capital invested outside the ADI

There are very few areas of black and white in the response to this question, but the first group are the items of APRA conservatism that I think have the clearest claim to be included in the capital buffer. These reflect potential loss scenarios that are conservative but still within the domain of plausibly severe downturns in the business cycle; this would encompass the following capital ratio adjustments:

  • the 20 per cent loss given default (LGD) portfolio constraint required for residential mortgage exposures;
  • the LGD parameter for unsecured non-retail exposures;
  • credit conversion factors (CCFs) for undrawn non-retail commitments;
  • use of supervisory slotting and the scaling factor for specialised lending;
  • risk weights for other retail exposures covered by the standardised approach to credit risk; and
  • the exchange rate used to convert Euro-denominated thresholds in the Basel capital framework into Australian dollars.

The second category covers assets which have a value if the bank is a going concern but cannot necessarily be relied upon in a non-viability scenario; i.e.

  • deferred tax assets arising from timing differences;
  • capitalised expenses and transaction costs; and
  • the capital requirement applied by APRA for IRRBB (I am open to arguments that I am being too harsh on IRRBB).

The third category captures capital that is committed to risks where the bank is taking a first loss exposure, including:

  • investments in other financial institutions;
  • holdings of subordinated tranches of securitisations; and
  • investments in commercial entities.

Another way to explore this question is to map these three categories to the traditional graphic expression of a bank loss distribution and establish whether they are expected to lie:

  • closer to the middle of the loss distribution (sometimes framed as a 1 in 25 year downturn or the kinds of losses we expect in a severe downturn); or
  • closer to the "tail" of the loss distribution (typically expressed as a 1 in 1000 year loss in regulatory capital terms).

To be clear, I am not seeking to ascribe any level of precision to these statistical probabilities; simply to distinguish between the relative likelihood of the items of conservatism that APRA has embedded in its current measure of capital adequacy. These three categories tend to be treated as equivalent under the current approach and enhanced disclosure per Approach 1 will do nothing to address this conflation of risks.

Question 7: Would increasing the size of capital buffers (either by increasing the CCB or by setting a non-zero baseline CCyB) relative to PCR appropriately balance capital strength with financial stability through the cycle?

I have advocated the benefits of a non-zero baseline CCyB in previous posts. One of those posts focused on the approach adopted by the Bank of England, where I identified two advantages.

Firstly, it directly addresses the problem of failing to detect/predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system however retains its capacity to surprise even its most expert and informed observers so I believe it is better to have the foundations of a usable counter cyclical capital buffer in place as soon as possible after the post crisis repair phase is concluded rather than trying to predict when it might be required.

The BOE approach still monitors a range of core indicators for the CCyB, grouped into three categories:

  • The first category includes measures of 'non-bank balance sheet stretch', capturing leverage in the broader economy and in the private non-financial (i.e. household and corporate) sector specifically.

  • The second category includes measures of 'conditions and terms in markets', which capture borrowing terms on new lending and investor risk appetite more broadly.

  • The third category includes measures of 'bank balance sheet stretch', which capture leverage and maturity/liquidity transformation in the banking system.

However the BOE implicitly accepts that it can’t predict the future so it substitutes a simple, pragmatic and error resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, is simpler to administer and less prone to error than the standard BCBS approach. The BOE might still miss the turning point but it has a head start on the problem if it does.

The BOE also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the “annual cyclical scenario”

The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

I have discussed the BOE approach at length, but the Canadian supervisor has also introduced some interesting innovations, worth considering as part of this review, in the way it uses a capital buffer to address the systemic risk of large banks.

The second reason I favour a non-zero baseline is that I believe it is likely to result in a more "usable" buffer once risk (not just systemic risk) crystallises and losses start to escalate.

In theory, the standard capital conservation buffer (CCB) introduced under Basel III can absorb any unexpected increase in losses and allow banks the time to progressively rebuild the buffer when economic conditions improve.

In practice, the upper boundary of the CCB acts as a de facto minimum requirement, such that banks face strong market pressure to rebuild the buffer immediately, potentially at material cost to shareholders.

There are no guarantees for what happens to banking systems under stress, but a flexible buffer that is sensitive to the state of the credit cycle is I think far more fit for purpose.

It is important to note that a non-zero CCyB is an option under both approaches. There is potentially enough surplus capital in the Unquestionably Strong calibration for a non-zero CCyB to be introduced without requiring banks to raise any more capital, under either of the approaches that APRA has outlined.

So a larger buffer would be desirable from the perspective of increased comparability and transparency, but the advantages of a non-zero CCyB could also be pursued under the Unquestionably Strong status quo or Approach 1.

Question 8: What may be some of the potential impacts if APRA increases the prescribed loss absorption trigger point above 5.125 per cent of RWA?

The rationale for increasing the PONV Trigger is that APRA believes it is important to preserve the value of the trigger in dollar terms.

I can see that it is important to have a PONV trigger well before a bank reaches the point of insolvency (i.e. where liabilities exceed assets).

It is less clear that the reduction in the dollar value of the trigger point is sufficiently material to matter.

What really matters is the amount of contingent capital available to be converted into common equity if the PONV conversion trigger is pulled.

In the absence of this source of new capital, the fact that a bank has X billion dollars more or less of book equity (according to the financial accounts) at the point of deemed non-viability is arguably irrelevant to whether it remains a going concern.

I am also pretty sure that we do not want the operational complexity associated with a PONV trigger that moves around over time as a result of seeking to compensate for the impact of the Overlay Adjustment on capital deductions and RWA.

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
Haldane and Nelson highlight a semantic shift in the 1870’s where the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe. 
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.””

Notwithstanding its widespread use today, in Haldane and Nelson’s account, economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised, and Haldane and Nelson cite the rise of probability-based quantum physics over old-school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”
“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken.”
This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does, however, raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn are hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can for example talk about moderate downturns every 7-10 years, more severe recessions every 25-30 years and a 75-year cycle for financial crises. However the data is obviously sparse, so it does not allow the kind of precision that is part and parcel of normally distributed events.
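The instability of sample moments under a power law is easy to demonstrate in a simulation. The sketch below is my own illustration, not something from the paper: it compares the running mean of draws from a normal distribution with draws from a Pareto distribution with tail index 1.5, whose mean exists but whose variance is infinite. The first settles down quickly; the second keeps lurching with each extreme draw.

```python
import random

def running_means(samples):
    """Cumulative sample means, to see whether they settle down as n grows."""
    total, means = 0.0, []
    for i, x in enumerate(samples, 1):
        total += x
        means.append(total / i)
    return means

random.seed(0)
n = 100_000
normal = [random.gauss(0.0, 1.0) for _ in range(n)]
# Pareto with alpha = 1.5 (minimum value 1): the mean exists (= 3) but the
# variance is infinite, so the sample mean converges painfully slowly and
# jumps whenever an extreme value arrives.
pareto = [random.paretovariate(1.5) for _ in range(n)]

normal_means = running_means(normal)
pareto_means = running_means(pareto)
```

Plotting the two series makes the contrast vivid, but even the raw numbers show the normal running mean pinned near its true value while the Pareto one drifts and jumps.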

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new but I found reading it useful for reflecting on the practical implications. One of the items they called out is competition, which is typically assumed by economists to be a wholly benign force. This is generally true but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk) thereby setting the system up to be prone to a self organised critical state. Risk based capital requirements can also contribute to self organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and note the distinguished company of thinkers who have also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming the uncertainty paradigm were embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts
Haldane and Nelson note that these types of models are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with agent based modelling. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling but a few things impede its wider use. One is that these models do not offer the kind of precision that makes the DSGE and VaR models so attractive. The other is that they require a large investment of time to build, and most practitioners are fully committed just keeping the existing models going; finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
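To make the idea concrete, here is a deliberately minimal agent-based market sketch of my own devising (not taken from Bookstaber or the Bank of England paper): chartist agents extrapolate the last price move, fundamentalist agents bet on reversion to fair value, and agents drift toward trend-chasing after large moves. Even this toy captures the feedback between agent behaviour and prices that representative-agent models assume away.

```python
import random

def simulate_market(steps=5000, seed=42):
    """Toy chartist/fundamentalist market (illustrative assumptions throughout)."""
    random.seed(seed)
    price = fundamental = 100.0
    frac_chartist = 0.5   # share of agents extrapolating the last move
    last_ret = 0.0
    returns = []
    for _ in range(steps):
        # Excess demand: chartists chase the last return, fundamentalists
        # push the price back toward fundamental value.
        demand = (frac_chartist * last_ret
                  - (1.0 - frac_chartist) * (price - fundamental) / fundamental
                  + random.gauss(0.0, 0.005))
        ret = demand                    # price impact proportional to demand
        price *= 1.0 + ret
        # Herding: large moves recruit more trend-chasers, calm periods
        # drain them away, so behaviour and volatility feed back on each other.
        frac_chartist += 0.1 * (abs(ret) - 0.005)
        frac_chartist = min(0.95, max(0.05, frac_chartist))
        last_ret = ret
        returns.append(ret)
    return returns
```

The point of the exercise is not realism but structure: the model has no single equilibrium, and its dynamics depend on interactions between heterogeneous agents rather than on a single optimising representative.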

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
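The contrast between the two measures is easy to sketch. In the hypothetical example below (my numbers, not the authors’), two banks hold identical capital against identical balance sheet sizes but different asset mixes: the risk-weighted ratio distinguishes them, while the leverage ratio, which weights every asset equally, does not.

```python
def leverage_ratio(capital, total_exposures):
    """Leverage ratio: capital over unweighted total exposures."""
    return capital / total_exposures

def risk_weighted_ratio(capital, exposures):
    """Risk-based capital ratio: capital over the sum of risk-weighted assets.
    `exposures` is a list of (amount, risk_weight) pairs."""
    rwa = sum(amount * weight for amount, weight in exposures)
    return capital / rwa

capital = 5.0
low_risk_mix = [(80.0, 0.15), (20.0, 1.00)]   # mostly low-risk-weight assets
high_risk_mix = [(20.0, 0.15), (80.0, 1.00)]  # mostly full-weight assets

# Both mixes total 100 of exposures, so the leverage ratio is 5% in each
# case; the risk-weighted ratios are roughly 15.6% and 6.0% respectively.
```

The equal weighting is exactly what makes the leverage ratio robust to mis-specified risk weights, and exactly what makes it blind to differences in risk appetite.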
Structural separation is another simple approach to the problem of making the system more resilient:
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over-engineering:
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio is under stress.
In theory the calculation should be simple and intuitive but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
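For readers who want the headline version, the core rule can be sketched in a few lines. This simplified version assumes only the Basel III baseline calibration (a 4.5% CET1 minimum plus a 2.5% conservation buffer, giving quartile boundaries at 5.125%, 5.75%, 6.375% and 7.0%) and ignores the D-SIB and countercyclical buffer extensions that complicate the real calculation.

```python
def min_retention_ratio(cet1_ratio):
    """Minimum share of earnings that must be retained, by conservation
    buffer quartile (Basel III baseline calibration only)."""
    bands = [
        (0.05125, 1.00),  # first quartile of the buffer: retain 100%
        (0.05750, 0.80),
        (0.06375, 0.60),
        (0.07000, 0.40),
    ]
    for upper_bound, retention in bands:
        if cet1_ratio <= upper_bound:
            return retention
    return 0.0  # buffer fully rebuilt: no distribution constraint
```

The complexity the post refers to arises once this static table interacts with a moving capital ratio under stress, additional buffers and the definition of distributable earnings, none of which is captured in this sketch.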

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile”, as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

Looking under the hood – The IRB formula

This post is irredeemably technical so stop here if that is not your interest. If you need to understand some of the mechanics of the formula used to calculate credit risk weighted assets under the advanced Internal Ratings Based (IRB) approach then the BCBS published a paper in 2005 which offers an explanation:

  • the economic foundations of the framework
  • the underlying mathematical model and its input parameters

While a lot has changed as a result of Basel III, the models underlying the calculation of Internal Ratings Based (IRB) capital requirements are still based on the core principles agreed under Basel II that are explained in this BCBS paper.

The notes in the linked page below mostly summarise the July 2005 paper with some emphasis (bolded text) and comments (in italics) that I have added. The paper is a bit technical but worth reading if you want to understand the original thinking behind the Basel II risk weights for credit risk.

I initially found the paper useful for revisiting the foundation assumptions of the IRB framework as background to considering the regulatory treatment of Expected Loss as banks transition to IFRS9. The background on how the RW was initially intended to cover both Expected and Unexpected Loss, but was revised such that capital was only required to cover Unexpected Loss, is especially useful when considering the interaction of loan loss provisioning with capital requirements.
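For orientation, the corporate risk-weight function explained in the BCBS paper can be sketched as follows. This is my condensation of the published formula (asset correlation as a decreasing function of PD, a maturity adjustment, and capital for unexpected loss only), and it omits refinements such as the firm-size adjustment for SMEs and the PD floor, so treat it as a study aid rather than a compliant implementation.

```python
from math import exp, log, sqrt
from statistics import NormalDist

_N = NormalDist()

def irb_capital_requirement(pd, lgd, maturity=2.5):
    """Capital requirement K (as a fraction of EAD) for a corporate exposure:
    conditional 99.9th percentile loss minus expected loss, scaled by the
    maturity adjustment."""
    # Asset correlation decreases as PD rises (interpolated between 24% and 12%)
    w = (1.0 - exp(-50.0 * pd)) / (1.0 - exp(-50.0))
    r = 0.12 * w + 0.24 * (1.0 - w)
    # Maturity adjustment smoothing parameter
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Conditional default probability at the 99.9th percentile of the
    # single systematic risk factor
    cond_pd = _N.cdf((_N.inv_cdf(pd) + sqrt(r) * _N.inv_cdf(0.999)) / sqrt(1.0 - r))
    k = lgd * (cond_pd - pd)  # unexpected loss only; EL is covered by provisions
    return k * (1.0 + (maturity - 2.5) * b) / (1.0 - 1.5 * b)

def risk_weighted_assets(pd, lgd, ead, maturity=2.5):
    """RWA = K x 12.5 x EAD (12.5 being the reciprocal of the 8% minimum)."""
    return irb_capital_requirement(pd, lgd, maturity) * 12.5 * ead
```

For a PD of 1%, LGD of 45% and 2.5-year maturity this gives a risk weight of roughly 92%, in line with the worked corporate figures published alongside Basel II.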

Reading the BCBS paper has also been useful for thinking through a range of related issues including:

  • The rationale for, and impact of, prudential conservatism in setting the risk parameters used in the IRB formula
  • The cyclicality of a risk sensitive capital requirement (and potential for pro cyclicality) and what might be done to mitigate the risk of pro-cyclical impacts on the economy

If you have read this far then my summary of the BCBS paper and my comments /observations can be found here (and thank you).

I am not a credit risk model expert, so the summary of the paper and my comments must be read with that in mind. I did this to help me think through some of the issues with bank capital adequacy. Hopefully others will find the notes useful. If you see something wrong or something you disagree with then let me know.

“Between Debt and the Devil: Money, Credit and Fixing Global Finance” by Adair Turner (2015)

This book is worth reading, if only because it challenges a number of preconceptions that bankers may have about the value of what they do. The book also benefits from the fact that the author was the head of the UK Financial Services Authority during the GFC and thus had a unique inside perspective from which to observe what was wrong with the system. Since leaving the FSA, Turner has reflected deeply on the relationship between money, credit and the real economy and argues that, notwithstanding the scale of change flowing from Basel III, more fundamental change is required to avoid a repeat of the cycle of financial crises.

Overview of the book’s main arguments and conclusions

Turner’s core argument is that increasing financial intensity, represented by credit growing faster than nominal GDP, is a recipe for recurring bouts of financial instability.

Turner builds his argument by first considering the conventional wisdom guiding much of bank prudential regulation prior to GFC, which he summarises as follows:

  • Increasing financial activity, innovation and “financial deepening” were beneficial forces to be encouraged
  • More complete and liquid markets were believed to ensure more efficient allocation of capital thereby fostering higher productivity
  • Financial innovations made it easier to provide credit to households and companies thereby enabling more rapid economic growth
  • More sophisticated risk measurement and control meanwhile ensured that the increased complexity of the financial system was not achieved at the expense of stability
  • New systems of originating and distributing credit, rather than holding it on bank balance sheets, were believed to disperse risks into the hands of those best placed to price and manage them

Some elements of Turner’s account of why this conventional wisdom was wrong do not add much to previous analysis of the GFC. He notes, for example, the conflation of the concepts of risk and uncertainty that weakened the risk measurement models the system relied on and concludes that risk based capital requirements should be foregone in favour of a very high leverage ratio requirement. However, in contrast to other commentators who attribute much of the blame to the moral failings of bankers, Turner argues that this is a distraction. While problems with the way that bankers are paid need to be addressed, Turner argues that the fundamental problem is that:

  • modern financial systems left to themselves inevitably create debt in excessive quantities,
  • in particular, the system tends to create debt that does not fund new capital investment but rather the purchase of already existing assets, above all real estate.

Turner argues that the expansion of debt funding the purchase or trading of existing assets drives financial booms and busts, while the debt overhang left by the boom explains why recovery from a financial crisis is typically anaemic and protracted. Much of this analysis seems similar to ideas developed by Hyman Minsky, while the slow pace of recovery in the aftermath of the GFC reflects a theme that Reinhart and Rogoff observed in their book “This Time is Different”, which analyses financial crises over many centuries.

The answer, Turner argues, is to build a less credit intensive growth model. In pursuing this goal, Turner argues that we also need to understand and respond to the implications of three underlying drivers of increasing credit intensity:

  1. the increasing importance of real estate in modern economies,
  2. increasing inequality, and
  3. global current account imbalances.

Turner covers a lot of ground, and I do not necessarily agree with everything in his book, but I do believe his analysis of what is wrong with the system is worth reading.

Let me start with an argument I do not find compelling; i.e. that risk based capital requirements are unreliable because they are based on a fundamental misunderstanding of the difference between risk (which can be measured) and uncertainty (which cannot):

  • Distinguishing between risk and uncertainty is clearly a fundamental part of understanding risk and Turner is not alone in emphasising its importance
  • I believe that means we should treat risk based capital requirements with a healthy degree of scepticism and a clear sense of their limitations, but that does not render them entirely unreliable, especially when we use them to understand relative differences in risk and to calibrate capital buffers
  • The obvious problem with non-risk based capital requirements is that they create incentives for banks to take higher risk that may eventually offset the supposed increase in soundness attached to the higher capital
  • It may be that Turner discounts this concern because he envisages a lower credit growth/intensity economy delivering less overall systemic risk or because he envisages a more active role for the public sector in what kinds of assets banks lend against; i.e. his support for higher capital may stem mostly from the fact that this reduces the capacity of private banks to generate credit growth

While advocating much higher capital, Turner does seem to part company with M&M purists by expressing doubt that equity investors will be willing to accept deleveraged returns. His reasoning is that returns to equity investments need a certain threshold return to be “equity like” while massively deleveraged ROE still contains downside risks that are unacceptable to debt investors.

Let me turn now to the arguments that I think raise valid concerns and deserve serious attention.

Notwithstanding my skepticism regarding a leverage ratio as the solution, the arguments he makes about the dangers of excessive credit growth resonate very strongly with what I learned during my banking career. Turner is particularly focussed on the downsides of applying excessive debt to the financing of existing assets, real estate in particular. The argument seems to be similar to (if not based on) the work of Hyman Minsky.

Turner’s description of the amount of money that banks can create as being “infinitely elastic” seems an overstatement to me (especially in the Australian context with the Net Stable Funding Ratio (NSFR) weighing on the capacity to grow the balance sheet) but the general point he is making about the way that credit fuelled demand for a relatively inelastic supply of desirable residential property tends to result in inflated property values with no real social value rings true.

What banks can do about this remains an open question, given that resolving the inelastic supply of property is outside their direct control, but it is obviously important that they understand the dynamics of the market underpinning their largest asset class, and that understanding may help them engage more constructively with public policy debates that seek to address the problem.

Turner’s analysis of the downsides of easy monetary policy (the standard response to economic instability) also rings true. He identifies the fact that lower interest rates tend to inflate asset values (residential property in particular, given its perceived value as a safe asset), which does not address the fundamental problem of over-indebtedness and may serve to increase economic inequality. His discussion of the impact of monetary policy and easy credit on economic inequality is also interesting. The banks providing the credit in the easy money environment may not necessarily be taking undue risk, and prudential supervisors have tools to ensure sound lending standards are maintained if they do believe there is a problem with asset quality. What may happen, however, is that the wealthier segments of society benefit the most under easy money because they have the surplus cash flow to buy property at inflated values while first homebuyers are squeezed out of the market. The banks’ capacity to address the problem may be limited but Turner’s analysis prompted me to reflect on what increasing economic inequality might mean for bank business models.

In addition to much higher bank capital requirements, Turner’s specific recommendations for moving towards a less credit intensive economy include:

  • Government policies related to urban development and the taxation of real estate
  • Changing tax regimes to reduce the current bias in favour of debt over equity financing (note that Australia is one of the few countries with a dividend imputation system that does reduce the bias to debt over equity)
  • Broader macro prudential powers for central banks, including the power to impose much larger countercyclical capital requirements
  • Tough constraints on the ability of the shadow banking system to create credit and money equivalents
  • Using public policy to produce different allocations of capital than would result from purely market based decisions; in particular, deliberately leaning against the market signal based bias towards real estate and instead favouring other “potentially more socially valuable forms of credit allocation”
  • Recognising that the traditional easy monetary policy response to an economic downturn (or ultra-easy in the case of a financial crisis such as the GFC) is better than doing nothing but comes at a cost of reigniting the growth in private credit that generated the initial problem, creating incentives for risky financial engineering and exacerbating economic inequality via inflating asset prices.

For those who want to dig deeper, I have gone into a bit more detail here on what Turner has to say about the following topics:

  • The way in which inefficient and irrational markets leave the financial system prone to booms and busts
  • The dangers of debt contracts sets out how certain features of these contracts increase the risk of instability and hamper the recovery
  • Too much of the wrong sort of debt describes features of the real estate market that make it different from other asset classes
  • Liberalisation, innovation and the credit cycle on steroids recaps on the philosophy that drove the deregulation of financial markets and what Turner believes to be the fundamental flaws with that approach. In particular his conclusion that the amount of credit created and its allocation is “… too important to be left to bankers…”
  • Private credit and money creation offers an outline of how bank deposits evolved to play an increasing role (the key point being that it was a process of evolution rather than overt public policy design choices)
  • Credit financed speculation discusses the ways in which credit in modern economies tends to be used to finance the purchase of existing assets, in particular real estate, and the issues that flow from this.
  • Inequality, credit and more inequality sets out some ways in which the extension of credit can contribute to increasing economic inequality
  • Capital requirements sets out why Turner believes capital requirements should be significantly increased and why capital requirements (i.e. risk weights) for some asset classes (e.g. real estate) should be calibrated to reflect the social risk of the activity and not just private risks captured by bank risk models
  • Turner’s defence against the argument that his proposals are anti-markets and anti-growth.

The Countercyclical Capital Buffer

This post uses a recent BCBS working paper as a stepping off point for a broader examination of how the countercyclical capital buffer (CCyB) can help make the banking system more resilient.

The BCBS paper is titled “Towards a sectoral application of the countercyclical capital buffer: A literature review – March 2018” (BCBS Review) and its stated aim is to draw relevant insights from the existing literature and use these to shed light on whether a sectoral application of the CCyB would be a useful extension of the existing Basel III framework, under which the CCyB is applied to an aggregate country level credit measure. The views expressed in Working Papers like this one are those of their authors and do not represent the official views of the Basel Committee but they do still offer some useful insights into what prudential supervisors are thinking about.

Key points

  1. I very much agree with the observation in the BCBS Review that the standard form of the CCyB is a blunt instrument by virtue of being tied to an aggregate measure of credit growth
  2. And that a sectoral application of the CCyB (operating in conjunction with other sector focussed macro prudential tools) would be an improvement
  3. But the CCyB strategy that has been developed by the Bank of England looks to be a much better path to pursue
  4. Firstly, because it directly addresses the problem of failing to detect or predict when the CCyB should be deployed, and secondly because I believe it results in a much more “usable” capital buffer
  5. The CCyB would be 1% if APRA adopted the Bank of England strategy (the CCyB required by APRA is currently 0%) but adopting this strategy does not necessarily require Australian banks to hold more capital at this stage of the financial cycle
  6. One option would be to align one or more elements of APRA’s approach with the internationally harmonised measure of capital adequacy and to “reinvest” the increased capital in a 1% CCyB.

First a recap on the Countercyclical Capital Buffer (aka CCyB).

The CCyB became part of the international macro prudential toolkit in 2016 and is intended to ensure that, under adverse conditions, the banking sector in aggregate has sufficient surplus capital on hand to maintain the flow of credit in the economy without compromising its compliance with prudential requirements.

A key feature of the original BCBS design specification is that the buffer is intended to be deployed in response to high levels of aggregate credit growth (i.e. high relative to sustainable long term trend rates), which BCBS research has identified as an indicator of heightened systemic risk. That does not preclude bank supervisors from deploying the buffer at other times as they see fit, but responding to excess credit growth has been a core part of the rationale underpinning its development.
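The BCBS guidance translates that credit growth trigger into a simple buffer guide based on the credit-to-GDP gap, i.e. the gap between the credit-to-GDP ratio and its long-run trend. A sketch of the published mapping:

```python
def ccyb_buffer_guide(credit_to_gdp_gap):
    """BCBS buffer guide: map the credit-to-GDP gap (in percentage points
    above its long-run trend) to a CCyB add-on. The guide is zero below a
    2pp gap, the 2.5% maximum above a 10pp gap, and linear in between.
    National authorities retain discretion to depart from the guide."""
    if credit_to_gdp_gap <= 2.0:
        return 0.0
    if credit_to_gdp_gap >= 10.0:
        return 0.025
    return 0.025 * (credit_to_gdp_gap - 2.0) / 8.0
```

The mechanical nature of this mapping, tied to one aggregate indicator, is precisely why the standard CCyB can fairly be described as a blunt instrument.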

The BCBS Review

The BCBS Review notes that the CCyB works in theory but concedes there is, as yet, virtually no empirical evidence that it will work in practice. This is not surprising given that it has only been in place for a very short period of time, but it is still important to remember. The BCBS Review also repeatedly emphasises the point that the CCyB may help to mitigate the credit cycle, but that is a potential side benefit, not the main objective. Its primary objective is to ensure that banks have sufficient surplus capital to be able to continue lending during adverse economic conditions in which losses will be consuming capital.

The Review argues that the CCyB is a useful addition to the supervisor’s tool kit but is a blunt instrument that impacts all sectors of the economy indiscriminately rather than just targeting the sectors which are the source of systemic concern. It concludes that applying the CCyB at a sectoral level might be more effective for three reasons:

  • more direct impact on the area of concern,
  • stronger signalling power, and
  • smaller effects on the wider economy than the Basel III CCyB.

The Review also discusses the potential to combine a sectoral CCyB with other macro prudential instruments; in particular, the capacity for the two approaches to complement each other:

“Generally, macroprudential instruments that operate through different channels are likely to complement each other. The literature reviewed indicates that a sectoral CCyB could indeed be a useful complement to alternative sectoral macroprudential measures, including borrower-based measures such as LTV, LTI and D(S)TI limits. To the extent that a sectoral CCyB is more effective in increasing banks’ resilience and borrower-based measures are more successful in leaning against the sectoral credit cycle, both objectives could be attained more effectively and efficiently by combining the two types of instruments. Furthermore, there is some evidence that suggests that a sectoral CCyB could have important signalling effects and may therefore act as a substitute for borrower-based measures.”

A Sectoral CCyB makes sense

Notwithstanding repeated emphasis that the main point of the CCyB is to ensure banks can and will continue to support credit growth under adverse conditions, the Review notes that there is not much, if any, hard empirical evidence on how effective a release of the CCyB might be in achieving this. The policy instrument’s place in the macro prudential tool kit seems to depend on the intuition that it should help, backed by some modelling that demonstrates how it would work and a pinch of hope. The details of the modelling are not covered in the Review but I am guessing it adopts a “homo economicus” approach in which the agents act rationally. The relatively thin conceptual foundations underpinning the BCBS version of the CCyB are worth keeping in mind.

The idea of applying the CCyB at a sectoral level seems to make sense. The more targeted approach advocated in the Review should in theory allow regulators to respond to sectoral areas of concern more quickly and precisely than would be the case when the activation trigger is tied to aggregate credit growth. That said, I think the narrow focus of the Review (i.e. should we substitute a sectoral CCyB for the current approach) means that it misses the broader question of how the CCyB might be improved. One alternative approach that I believe has a lot of promise is the CCyB strategy adopted by the Bank of England’s Financial Policy Committee (FPC).

The Bank of England Approach to the CCyB (is better)

The FPC published a policy statement in April 2016 explaining that its approach to setting the countercyclical capital buffer is based on five core principles. Many of these are pretty much the same as the standard BCBS policy rationale discussed above but the distinguishing feature is that it “… intends to set the CCyB above zero before the level of risk becomes elevated. In particular, it expects to set a CCyB in the region of 1% when risks are judged to be neither subdued nor elevated.”

This contrasts with the generic CCyB, as originally designed by the BCBS, which sets the default position of the buffer at 0% and only increases it in response to evidence that aggregate credit growth is excessive. This might seem like a small point but I think it is a material improvement on the BCBS’s original concept for two reasons.

Firstly, it directly addresses the problem of failing to detect/predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system however retains its capacity to surprise even its most expert and informed observers so I believe it is better to have the foundations of a usable countercyclical capital buffer in place as soon as possible after the post crisis repair phase is concluded rather than trying to predict when it might be required.

The FPC still monitors a range of core indicators for the CCyB grouped into three categories.

  • The first category includes measures of ‘non-bank balance sheet stretch’, capturing leverage in the broader economy and in the private non-financial (ie household and corporate) sector specifically.
  • The second category includes measures of ‘conditions and terms in markets’, which capture borrowing terms on new lending and investor risk appetite more broadly.
  • The third category includes measures of ‘bank balance sheet stretch’, which capture leverage and maturity/liquidity transformation in the banking system.

However the FPC implicitly accepts that it can’t predict the future so it substitutes a simple, pragmatic and error resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, is simpler to administer and less prone to error than the BCBS approach. The FPC might still miss the turning point but it has a head start on the problem if it does.

The FPC also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the ‘annual cyclical scenario’. The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

The second reason I favour the FPC strategy is that I believe it is likely to result in a more “usable” buffer once risk crystallises (not just systemic risk) and losses start to escalate. I must admit I have struggled to articulate clearly why this would be so, but I think the answer lies partly in the way that the FPC links the CCyB to a four stage model that can be interpreted as a stylised description of the business cycle. The attraction for me in the FPC’s four stage model is that it offers a coherent narrative that helps all the stakeholders understand what is happening, why it is happening, what will happen next and when it will happen.

The BCBS Review talks about the importance of communication and the FPC strategy offers a good model of how the communication strategy can be anchored to a coherent and intuitive narrative that reflects the essentially cyclical nature of the banking industry. The four stages are summarised below together with some extracts setting out the FPC rationale.

Stage 1: The post-crisis repair phase in which risks are subdued – the FPC would expect to set a CCyB rate of 0%

FPC rationale: “Risks facing the financial system will normally be subdued in a post-crisis repair and recovery phase when the financial system and borrowers are repairing balance sheets. As such, balance sheets are not overextended. Asset and property prices tend to be low relative to assessed equilibrium levels. Credit supply is generally tight and the risk appetite of borrowers and lenders tends to be low. The probability of banks coming under renewed stress is lower than average.”

Stage 2: Risks in the financial system re-emerge but are not elevated – the FPC intends to set a positive CCyB rate in the region of 1% after the economy moves into this phase.

FPC rationale: “In this risk environment, borrowers will not tend to be unusually extended or fragile, asset prices are unlikely to show consistent signs of over, or under, valuation, and measures of risk appetite are likely to be in line with historical averages”. As such, it could be argued that no buffer is required but the FPC view is that a pre-emptive strategy is more “robust to the inherent uncertainty associated with measuring risks to financial stability”. It also allows subsequent adjustments to be more graduated than would be possible if the CCyB was zero.

Stage 3: Risks in the financial system become elevated: stressed conditions become more likely – the FPC would expect to increase the CCyB rate beyond the region of 1%. There is no upper bound on the rate that can be set by the FPC.

FPC rationale: “As risks in the financial system become elevated, borrowers are likely to be stretching their ability to repay loans, underwriting standards will generally be lax, and asset prices and risk appetite tend to be high. Often risks are assumed by investors to be low at the very point they are actually high. The distribution of risks to banks’ capital at this stage of the financial cycle might have a ‘fatter tail’ [and] stressed outcomes are more likely.”

Stage 4: Risks in the financial system crystallise – the FPC may cut the CCyB rate, including where appropriate to 0%.

FPC rationale: “Reducing the CCyB rate pre-emptively before losses have crystallised may reduce banks’ perceived need to hoard capital and restrict lending, with consequent negative impacts for the real economy. And if losses have crystallised, reducing the CCyB allows banks to recognise those losses without having to restrict lending to meet capital requirements. This will help to ensure that capital accumulated when risks were building up can be used, thus enhancing the ability of the banking system to continue to support the economy in times of stress.”

The March 2018 meeting of the FPC advised that the CCyB applying to UK exposures would remain unchanged at the 1% default level reflecting its judgement that the UK banking system was operating under Stage 2 conditions.
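For readers who like things compact, the FPC’s stage-to-setting mapping can be summarised as a small lookup table. This is a sketch of my own: the stage labels and indicative rates come from the FPC policy statement quoted above, but deciding which stage currently applies remains a matter of FPC judgement rather than a formula.

```python
# A sketch of the FPC's four-stage CCyB guidance. The rates are the
# FPC's indicative settings; assessing the stage is a judgement call.
FPC_CCYB_GUIDANCE = {
    1: ("post-crisis repair phase, risks subdued", "0%"),
    2: ("risks re-emerging but not elevated", "in the region of 1%"),
    3: ("risks elevated", "above 1%, with no upper bound"),
    4: ("risks crystallise", "may be cut, where appropriate to 0%"),
}

def expected_ccyb(stage: int) -> str:
    """Return a one-line summary of the FPC's guidance for a stage."""
    description, rate = FPC_CCYB_GUIDANCE[stage]
    return f"Stage {stage} ({description}): CCyB {rate}"
```

The March 2018 assessment of Stage 2 conditions, for example, corresponds to the second entry.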

Calibrating the size of the CCyB

The FPC’s approach to calibrating the size of the CCyB also offers some interesting insights. The FPC’s initial (April 2016) policy statement explained that a “CCyB rate in the region of 1%, combined with other elements of the capital framework, provides UK banks with sufficient capital to withstand a severe stress. Given current balance sheets, the FPC judges that, at this level of the CCyB, banks would have sufficient loss-absorbing capacity to weather a macroeconomic downturn of greater magnitude than those observed on average in post-war recessions in the United Kingdom — although such estimates are inherently uncertain.”

The first point to note is that the FPC has chosen to anchor their 1% default setting to a severity greater than the typical post war UK recession but not necessarily a GFC style event. There is a school of thought that maintains that more capital is always better but the FPC seems to be charting a different course. This is a subtle area in bank capital management but I like the FPC’s implied defence of subtlety.

What is sometimes lost in the quest for a failure proof banking system is a recognition of the potential for unintended consequence. All other things being equal, more capital makes a bank less at risk of insolvency but all other things are almost never equal in the real world. Banks come under pressure to find ways to offset the ROE dilution associated with more capital. I know that theory says that a bank’s cost of equity should decline as a result of holding more capital so there is no need to offset the dilution but I disagree (see this post for the first in a proposed series where I have started to set out my reasons why). Attempts to offset ROE dilution also have a tendency to result in banks taking more risk in ways that are not immediately obvious. Supervisors can of course intervene to stop this happening but their already difficult job is made harder when banks come under pressure to lift returns. This is not to challenge the “unquestionably strong” benchmark adopted by APRA but simply to note that more is not always better.

Another problem with just adding more capital is that the capital has to be usable in the sense that the capital ratio needs to be able to decline as capital is consumed by elevated losses without the bank coming under pressure to immediately restore the level of capital it is expected to hold. The FPC strategy of setting out how it expects capital ratios to increase or decrease depending on the state of the financial cycle helps create an environment in which this can happen.

Mapping the BOE approach to Australia

APRA has set the CCyB at 0% whereas the BOE approach would suggest a value of at least 1% and possibly more given that APRA has felt the need to step in to cool the market down. It is important to note that transitioning to a FPC style CCyB does not necessarily require that Australian banks need to hold more capital. One option would be to harmonise one or more elements of APRA’s approach to capital measurement (thereby increasing the reported capital ratio) and to “reinvest” the surplus capital in a CCyB. The overall quantum of capital required to be unquestionably strong would not change but the form of the capital would be more usable to the extent that it could temporarily decline and banks had more time to rebuild the buffer during the recovery phase.
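To make the “reinvestment” idea concrete, here is a stylised re-cut of a 10.5% CET1 target in which part of the static buffer is swapped for an FPC-style CCyB. The split is entirely hypothetical; the point is simply that the total is unchanged while the releasable share grows.

```python
# Hypothetical re-cut of an "unquestionably strong" 10.5% CET1 target
# (percentages of RWA; the 9.5/1.0 split is invented for illustration).
current = {"minimum_plus_static_buffers": 10.5, "ccyb": 0.0}
proposed = {"minimum_plus_static_buffers": 9.5, "ccyb": 1.0}

# Same overall quantum of required capital under both structures
assert sum(current.values()) == sum(proposed.values())

# Under the proposed split a bank could absorb losses equal to the
# released CCyB without breaching its (reduced) static requirement.
usable_in_stress = proposed["ccyb"]  # 1.0 percentage point of RWA
```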

Summing up

A capital adequacy framework that includes a CCyB varied in a semi-predictable manner over the course of the financial cycle would be far more resilient than the one we currently have, which offers less flexibility and is more exposed to the risk of acting too late or of missing the escalation of systemic risk altogether.

Tell me what I am missing …

APRA’s proposed revisions to capital requirements for residential mortgages

… there is a lot to like in what APRA have proposed but also some issues that would benefit from further thought

Many readers will be aware that APRA released a Discussion Paper (DP) last week titled “Revisions to the capital framework for authorised deposit-taking institutions”. The paper sets out APRA’s proposed changes to ADI capital requirements under the Internal Ratings Based (IRB) and Standardised Approaches to credit risk, Interest Rate Risk in the Banking Book (IRRBB) and operational risk. The focus of this post will be the proposals impacting credit risk capital requirements for residential mortgage lending. This post presupposes that the reader is familiar with the detail of what APRA has proposed. For those of you who have not yet got around to reading the whole paper I have added a short summary of the proposals below (see “APRA’s proposals – in more detail”).

My gut reaction is that there is a lot to like in what APRA have proposed but there are also issues that deserve further consideration in order to address the risk of unintended consequence and to better deliver on the objectives of consistency, transparency and competitive neutrality.

Proposals which make sense to me:

  • The increased risk sensitivity of the proposed standardised RWs for residential mortgages is, I believe, a material enhancement of the capital adequacy framework
  • There are arguments (and indeed evidence) for why investor property loans can be as low risk as owner occupier loans (most of the time) but APRA’s desire to address the systemic tail risk of this form of lending is I think an understandable policy objective for a prudential regulator to pursue
  • Continuing to pursue higher IRB RW via changes to the correlation factor also looks to be a better approach than the 20% floor on LGD currently applied and thankfully also up for revision
  • Applying a higher correlation factor to low PD loans also makes intuitive sense, especially if your primary concern is the systemic risk associated with the residential mortgage lending that dominates the balance sheets of your banking system
  • In addition, the potential for the correlation adjustment to reduce the sensitivity of residential mortgage RWA to the economic cycle (and hence reduce the risk of pro-cyclical stress on capital ratios) is particularly welcome though I believe there is much more to do on this general issue
  • The support for Lender’s Mortgage Insurance (LMI) is also welcome

Areas where I believe the proposed revised capital framework could be improved (or at least benefit from some more thought):

  • The discussion of relative standardised and IRB RW does not address the fact that IRB banks are required to hold additional capital to cover any shortfall between loan loss provisions and Regulatory Expected Loss (REL)
  • Residential mortgage portfolios subject to the standardised approach should be subject to a minimum average RW in the same way that IRB portfolios are currently constrained by the 25% floor
  • Applying a fixed scalar to Credit RWA can be problematic as the composition of the loan portfolio continues to evolve

The discussion of comparative IRB and Standardised RW you typically encounter seems to assume that the two approaches are identical in every respect bar the RW, but people working at the coal face know that the nominal RW advantage of the IRB banks has been partly offset by the higher exposure measure the RW are applied to. It appears that APRA’s proposed revisions will partly address this inconsistency by requiring banks using the Standardised Approach to apply a 100% Credit Conversion Factor (CCF) to undrawn loan limits. IRB banks, however, are also required to take a Common Equity Tier 1 deduction for the shortfall between their loan loss provisions and REL. The proposed revisions do nothing to address this second area of inconsistency; indeed the Discussion Paper does not even acknowledge the issue.

Residential mortgage portfolios subject to the standardised approach should be subject to a minimum average RW in the same way that IRB portfolios are constrained. The majority of new residential mortgages are originated at relatively high LVR (most at 70% plus and a significant share at 80% plus), but the average LVR will be much lower as principal is repaid (and even more so if you allow for the appreciation of property values). The introduction of a 20% RW bucket for standardised banks raises the question of whether these banks will have an advantage in targeting the refinancing of seasoned loans with low LVRs. The IRB banks would seek to retain these customers but they will still be constrained by the 25% average RW mandated by the FSI while the standardised banks face no comparable constraint.
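A quick back-of-envelope calculation shows how a standardised book tilted toward seasoned low-LVR loans could slip below the 25% floor the IRB banks must observe. The balances and bucket allocations below are invented purely for illustration.

```python
# Each tuple is (balance in $bn, standardised risk weight). The 20%
# bucket stands in for the proposed low-LVR band; allocations invented.
balanced_book = [(40, 0.20), (50, 0.35), (10, 0.50)]
seasoned_book = [(80, 0.20), (15, 0.35), (5, 0.50)]  # tilted to low LVR

def average_rw(book):
    """Portfolio average risk weight = total RWA / total exposure."""
    total_ead = sum(bal for bal, _ in book)
    total_rwa = sum(bal * rw for bal, rw in book)
    return total_rwa / total_ead

avg_balanced = average_rw(balanced_book)  # 0.305, above a 25% floor
avg_seasoned = average_rw(seasoned_book)  # 0.2375, below it
```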

This is unlikely to be an issue in the short term but one of the enduring lessons learned during my time “on the inside” is that banks (not just the big ones) are very good at identifying arbitrages and responding to incentives. It is widely recognised that housing loans have become the largest asset on Australian bank balance sheets (The Royal Commission issued a background paper that cited 42% of assets as at September 2017) but the share was significantly less when I started in banking. There has been a collection of complex drivers at play here (a topic for another post) but the relatively low RW has not harmed the growth of this kind of lending. Consequently, it is dangerous to assume that the status quo will persist if incentives exist to drive a different outcome.

This competitive imbalance could be addressed quite simply if the standardised banks were also subject to a requirement that their average RW be no lower than 25% (or some alternative floor ratio that adjusted for the differences in exposure and REL noted above).

Another lesson learned “on the inside” is that fixed scalars look simple but are often not. They work fine when the portfolio of assets they are scaling up is stable but will gradually generate a different outcome to what was intended as the composition of the loan book evolves over time. I don’t have an easy solution to this problem but, if you must use them, it helps to recognise the potential for unintended consequence at the start.
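A simple worked example (all numbers invented) illustrates the point: a scalar calibrated to deliver a target RWA uplift on one portfolio mix delivers a different uplift once the mix shifts.

```python
def scaled_rwa(book, scalar):
    """Total credit RWA with a fixed scalar applied.

    book maps asset class -> (EAD, average risk weight).
    """
    return scalar * sum(ead * rw for ead, rw in book.values())

# Mix when the scalar was calibrated vs the same book after years of
# mortgage growth (hypothetical figures, $bn and average RW).
at_calibration = {"mortgages": (60, 0.25), "corporate": (40, 0.60)}
years_later = {"mortgages": (80, 0.25), "corporate": (20, 0.60)}

scalar = 1.1  # set so the uplift hits its target on the calibration mix
uplift_then = scaled_rwa(at_calibration, scalar) - scaled_rwa(at_calibration, 1.0)
uplift_later = scaled_rwa(years_later, scalar) - scaled_rwa(years_later, 1.0)
# uplift_then is about 3.9 vs about 3.2 later: the "fixed" add-on has
# drifted as the book tilted toward the lower risk-weighted asset class.
```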

Read on below if you have not read the Discussion Paper or want more detail on the revisions APRA has proposed and how these changes are proposed to be reconciled with the FSI recommendation. This is my first real post so feedback would be much appreciated.

Above all, tell me what I am missing … 

Tony

Note: The original version of this post published 22 February 2018 stated that inconsistent measurement of the exposures at default between the standardised and IRB approaches  was not addressed by APRA’s proposed revisions. I believe now that the proposed application of a 100% CCF in the Standardised Approach would in fact address one of the areas of inconsistency. The treatment of Regulatory Expected Loss remains an issue however. The post was revised on 24 February to clarify these points.

APRA’s proposals – in more detail

Good quality loans fully secured by mortgages on occupied residential property (either rented or occupied by the borrower) have been assigned concessionary risk weights (RW) ever since risk weighted capital adequacy ratios were introduced under Basel I (1988). The most concessionary risk weight was initially set at 50% and reduced to 35% in the Basel II Standardised Approach (2006).

APRA currently applies the concessionary 35% RW to standard eligible mortgages with Loan to Valuation Ratios (LVR) of 80% or better (or up to 90% LVR if covered by Lender’s Mortgage Insurance) while the best case scenario for a non-standard mortgage is a 50% RW. Progressively higher RW (50/75/100) are applied for higher risk residential mortgages.

Under the Standardised Approach, APRA proposes:

  • The classification of a Standard Eligible Mortgage will distinguish between lowest risk “Owner-occupied P&I” and a higher risk “Other residential mortgages” category which is intended to be conceptually similar to the “material dependence” concept employed by Basel III to distinguish loans where repayment depends materially on the cash flows generated by the property securing the loan
  • 6 RW bands for each of these two types of residential mortgage (compared to 5 bands currently)
  • Standard Eligible Mortgages with lower LVR loans to be assigned lower RW but these loans must also meet defined serviceability, marketability and valuation criteria to qualify for the concessionary RW
  • The higher RW applied to “Other residential mortgages” may take the form of a fixed risk-weight schedule (per the indicative RW in Table 3 of the Discussion Paper) but might also be implemented via a multiplier, applied to the RW for owner-occupied P&I loans, which might vary over time “… depending on prevailing prudential or financial stability objectives or concerns”
  • Relatively lower capital requirements to continue to apply where loans are covered by LMI but its preferred approach is to apply a RW loading to loans with LVR in excess of 80% that are not insured (i.e. the indicative RW in Table 3 assume that LMI covers the high LVR loans)
  • Non-Standard residential mortgages should no longer benefit from any RW concession and be assigned a flat 100% RW irrespective of LVR and LMI

While the IRB requirements impacting residential mortgages are largely unchanged under Basel III, APRA proposes the following changes to the Australian IRB Approach to reflect local requirements and conditions:

  • Increased capital requirements for investment and interest-only exposures; to be implemented via a higher correlation factor for these loans
  • The (currently fixed) correlation factor applied to residential mortgages to be amended to depend on probability of default (PD); reflecting empirical evidence that “… the default risk of lower PD exposures is more dependent on the economic cycle and can consequently increase at a relatively higher rate in a downturn”
  • A reduction in the minimum Loss Given Default (LGD) from 20% to 10% (subject to APRA approval of the LGD model); in order to facilitate “… better alignment of LGD estimates to key drivers of loss such as LVR and LMI”
  • Capital requirements for non-standard mortgages to be based on the standardised approach; increasing consistency between the IRB and standardised approaches
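To see how the correlation factor drives the IRB outcome, here is a sketch using the standard Basel retail IRB risk-weight function. The formula is Basel’s; the PD, LGD and correlation values below are invented for illustration, and the actual APRA requirements add floors and other adjustments not modelled here.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_mortgage_rw(pd, lgd, corr, scalar=1.0):
    """Risk weight (%) under the Basel retail IRB formula.

    pd: probability of default, lgd: loss given default,
    corr: asset correlation (fixed at 0.15 for residential mortgages
    under Basel; APRA proposes making it PD-dependent and higher for
    investment and interest-only loans).
    """
    # Default rate conditional on a 99.9th percentile systemic shock
    cond_pd = N.cdf((N.inv_cdf(pd) + corr**0.5 * N.inv_cdf(0.999))
                    / (1 - corr) ** 0.5)
    k = lgd * (cond_pd - pd)  # unexpected-loss capital per unit of EAD
    return k * 12.5 * scalar * 100  # express as a risk weight in %

# Raising the correlation lifts the RW for the same PD and LGD
rw_basel = irb_mortgage_rw(pd=0.01, lgd=0.20, corr=0.15)  # roughly 25%
rw_higher = irb_mortgage_rw(pd=0.01, lgd=0.20, corr=0.22)
```

The same mechanics explain the PD-dependence proposal: if low-PD loans get a higher correlation, their risk weights rise relative to what the fixed 0.15 would produce.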

APRA’s proposals seek to strike a balance between risk sensitivity and simplicity but must also take account of the FSI recommendations that ADI capital levels be unquestionably strong while also narrowing the difference between standardised and IRB RWs for residential mortgages. APRA is undertaking a Quantitative Impact Study (QIS) to better understand the impact of its proposals but the DP flagged that APRA does not expect the changes to correlation factors to meet its objectives for increased capital for residential mortgage exposures.

APRA could just further ramp up the correlation factor to generate the target IRB RW (which I assume continues to be 25%) but the DP notes that this would create undesirable inconsistencies with the correlation factors applied to other asset classes. Consequently, the DP indicates that the target increase in IRB RWA will likely be pursued via:

  • A fixed multiplier (scalar) applied to total Credit RWA (i.e. although Basel III removes the 1.06 Credit RWA scalar, APRA is considering retaining a scalar with a value yet to be determined); and
  • If necessary, by applying additional specific RWA scalars for residential (and commercial) property.

These scalars will be subject to consultation with the industry and APRA has committed to review the 10.5% CET1 benchmark for unquestionably strong capital should the proposed revisions result in an overall increase in RWAs relative to current methodologies.