The “skin in the game” argument for more common equity

One of the traditional arguments for higher common equity requirements is that it increases the shareholders’ “skin in the game” and thereby creates an incentive to be more diligent and conservative in managing risk.

This principle is true up to a point, but I believe more common equity mostly generates this desirable risk management incentive when the extra skin in the game (i.e. capital) is addressing a problem of too little capital. It is much less obvious that more capital promotes a more conservative risk appetite in a bank that already has a strong capital position.

In the “too little capital” scenarios, shareholders confronted with a material risk of failure, but limited downside (because they have only a small amount of capital invested), have an incentive to take large risks with uncertain payoffs. That is clearly undesirable, but it is not a fair description of the risk-reward payoff confronting bank shareholders who have already committed substantial additional common equity in response to the new benchmarks of what it takes to be deemed a strong bank.
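For readers who like to see the arithmetic, the asymmetry is easy to illustrate with a stylised example (the numbers below are mine, purely for illustration). With limited liability, shareholders keep the upside of a gamble but their downside is capped at the capital they have invested, so a bet with negative expected value can still be attractive when the capital layer is thin:

```python
def equity_gain(capital, up=18.0, down=-20.0, p_up=0.5):
    """Expected change in equity value from a negative expected value
    gamble, given limited liability (equity cannot go below zero)."""
    expected = p_up * max(capital + up, 0.0) + (1 - p_up) * max(capital + down, 0.0)
    return expected - capital

# Thinly capitalised bank: the downside is truncated at zero, so the gamble pays.
print(equity_gain(5.0))   # 6.5 -> gamble is attractive
# Strongly capitalised bank: shareholders bear the full downside and decline.
print(equity_gain(30.0))  # -1.0 -> gamble is declined
```

The gamble only looks attractive while the truncation at zero is doing real work, which is the sense in which extra capital restores the incentive to manage risk conservatively, and why the effect fades once the capital position is already strong.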

The European Systemic Risk Board published some interesting research on this question in a paper titled “Has regulatory capital made banks safer? Skin in the game vs moral hazard”. I have copied the abstract below, which summarises the key conclusions.

Abstract: The paper evaluates the impact of macroprudential capital regulation on bank capital, risk taking behaviour, and solvency. The identification relies on the policy change in bank-level capital requirements across systemically important banks in Europe. A one percentage point hike in capital requirements leads to an average CET1 capital increase of 13 percent and no evidence of reduction in assets. The increase in capital comes at a cost. The paper documents robust evidence on the existence of substitution effects toward riskier assets. The risk taking behavior is predominantly driven by large and less profitable banks: large wholesale funded banks show less risk taking, and large banks relying on internal ratings based approach successfully disguise their risk taking. In terms of overall impact on solvency, the higher risk taking crowds-out the positive effect of increased capital.

I have only skimmed the paper thus far and have reservations regarding how they measure increased risk. As I understand it, the increased riskiness the analysis measures is based on increases in average risk weights. It was not clear how the analysis distinguished changes in portfolio riskiness from changes in the risk weight measure. That said, the overall conclusions seem intuitively right.

Tony

What is the value of information in the money market?

“Debt and institutions dealing with debt have two faces: a quiet one and a tumultuous one …. The shift from an information-insensitive state where liquidity and trust prevails because few questions need to be asked, to an information-sensitive state where there is a loss of confidence and a panic may break out is part of the overall system: the calamity is a consequence of the quiet. This does not mean that one should give up on improving the system. But in making changes, it is important not to let the recent crisis dominate the new designs. The quiet, liquid state is hugely valuable.”

Bengt Holmstrom (2015)

The quote above comes from an interesting paper by Bengt Holmstrom that explores the ways in which the role money markets play in the financial system is fundamentally different from the role played by stock markets. That may seem like a statement of the obvious, but Holmstrom argues that some reforms of credit markets that are based on the importance of transparency and detailed disclosure are misconceived because they do not reflect these fundamental differences in function and mode of operation.

Holmstrom argues that the focus and purpose of stock markets is price discovery for the purpose of allocating risk efficiently. Money markets, in contrast, are about obviating the need for price discovery in order to enhance the liquidity of the market. Over-collateralisation is one of the features of the money market that enables deep, liquid trading to occur without the need to understand the underlying risk of the assets that are being funded.

 “The design of money market policies and regulations should recognise that money markets are very different from stock markets. Lessons from the latter rarely apply to the former, because markets for risk-sharing and markets for funding have their own separate logic. The result is two coherent systems with practices that are in almost every respect polar opposites.”

From “Understanding the role of debt in the financial system” Bengt Holmstrom (BIS Working Papers No 479 – January 2015)

Holmstrom appears to have written the paper in response to what he believes are misconceived attempts to reform credit markets in the wake of the recent financial crisis. These reforms have often drawn on insights grounded in our understanding of stock markets where information and transparency are key requirements for efficient price discovery and risk management. His paper presents a perspective on the logic of credit markets and the structure of debt contracts that highlights the information insensitivity of debt. This perspective explains among other things why he believes that information insensitivity is the natural and desired state of the money markets.

Holmstrom notes that one of the puzzles of the GFC was how people traded so many opaque instruments in seeming ignorance of their real risk. There is a tendency to see this as a conspiracy by bankers to confuse and defraud customers, which in turn has prompted calls to make money market instruments more transparent. While transparency and disclosure are essential for risk pricing and allocation, Holmstrom argues that they are not the answer for money markets because these markets operate on different principles and serve a different function.

 “I will argue that a state of “no questions asked” is the hallmark of money market liquidity; that this is the way money markets are supposed to look when they are functioning well.”

“Among economists, the mistake is to apply to money markets the lessons and logic of stock markets.”

“The key point I want to communicate today is that these two markets are built on two entirely different, one could say diametrically opposite, logics. Ultimately, this is because they serve two very different purposes. Stock markets are in the first instance aimed at sharing and allocating aggregate risk. To do that effectively requires a market that is good at price discovery.”

 “But the logic behind transparency in stock markets does not apply to money markets. The purpose of money markets is to provide liquidity for individuals and firms. The cheapest way to do so is by using over-collateralised debt that obviates the need for price discovery. Without the need for price discovery the need for public transparency is much less. Opacity is a natural feature of money markets and can in some instances enhance liquidity, as I will argue later.”

“Why does this matter? It matters because a wrong diagnosis of a problem is a bad starting point for remedies. We have learned quite a bit from this crisis and we will learn more. There are things that need to be fixed. But to minimise the chance of new, perhaps worse mistakes, we need to analyse remedies based on the purpose of liquidity provision. In particular, the very old logic of collateralised debt and the natural, but sometimes surprising implications this has for how information and risk are handled in money markets, need to be properly appreciated.”

There is a section of the paper titled “purposeful opacity” which, if I understood him correctly, seemed to extend his thesis on the value of being able to trade on an “information insensitive” basis to argue that “opacity” in the debt market is something to be embraced rather than eliminated. I struggled with embracing opacity in this way but that in no way diminishes the validity of the distinction he draws between debt and equity markets.

The other useful insight was the way in which over-collateralisation (whether explicit or implicit) anchors the liquidity of the money market. His discussion of why a money market can shift so suddenly from a state in which the creditworthiness of a counter-party is taken for granted to one in which doubt emerges also rings true.

The remainder of this post mostly comprises extracts from the paper that offer more detail on the points I have summarised above. The paper is a technical one but worth the effort for anyone interested in the question of how banks should finance themselves and the role of debt in the financial system.

Money markets versus stock markets

Holmstrom argues that each system displays a coherent internal logic that reflects its purpose but these purposes are in many respects polar opposites.

Stock markets are primarily about risk sharing and price discovery. As a consequence, these markets are sensitive to information and value transparency. Traders are willing to make substantial investments to obtain this information. Liquidity is valuable, but equity investors tend to trade less often and in lower volumes than participants in debt markets.

Money markets, in contrast, Holmstrom argues, are primarily about liquidity provision and lending. The price discovery process is much simpler, but trading is much higher volume and more urgent.

“The purpose of money markets is to provide liquidity. Money markets trade in debt claims that are backed, explicitly or implicitly, by collateral.

 “People often assume that liquidity requires transparency, but this is a misunderstanding. What is required for liquidity is symmetric information about the payoff of the security that is being traded so that adverse selection does not impair the market. Without symmetric information adverse selection may prevent trade from taking place or in other ways impair the market (Akerlof (1970)).”

“Trading in debt that is sufficiently over-collateralised is a cheap way to avoid adverse selection. When both parties know that there is enough collateral, more precise private information about the collateral becomes irrelevant and will not impair liquidity.”
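Holmstrom’s point about over-collateralisation can be sketched numerically (illustrative figures, mine rather than the paper’s). A debt claim pays the lesser of its face value and the collateral value, so any private estimate of the collateral that stays above the face value leaves the payoff unchanged:

```python
def debt_payoff(face_value, collateral_value):
    """The holder of collateralised debt receives the face value,
    or seizes the collateral if it is worth less than that."""
    return min(face_value, collateral_value)

# 80 lent against collateral worth around 100. Any private estimate of
# the collateral value within the haircut buffer leaves the payoff at 80,
# so better information about the collateral has no trading value.
for estimate in (90, 100, 110):
    assert debt_payoff(80, estimate) == 80

# The debt only becomes information-sensitive once the collateral is
# impaired to below the face value.
print(debt_payoff(80, 70))  # 70
```

This is the mechanical sense in which over-collateralised debt is “information insensitive”: within the buffer, there is simply nothing for private information to act on, so adverse selection cannot impair trade.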

“The main purpose of stock markets is to share and allocate risk … Over time, stock markets have come to serve other objectives too, most notably governance objectives, but the pricing of shares is still firmly based on the cost of systemic risk (or a larger number of factors that cannot be diversified). Discovering the price of systemic risk requires markets to be transparent so that they can aggregate information efficiently.”

Purposeful opacity

“Because debt is information-insensitive … traders have muted incentives to invest in information about debt. This helps to explain why few questions were asked about highly rated debt: the likelihood of default was perceived to be low and the value of private information correspondingly small.”

Panics: The ill consequences of debt and opacity

“Over-collateralised debt, short debt maturities, reference pricing, coarse ratings, opacity and “symmetric ignorance” all make sense in good times and contribute to the liquidity of money markets. But there is a downside. Everything that adds to liquidity in good times pushes risk into the tail. If the underlying collateral gets impaired and the prevailing trust is broken, the consequences may be cataclysmic”

“The occurrence of panics supports the informational thesis that is being put forward here. Panics always involve debt. Panics happen when information-insensitive debt (or banks) turns into information-sensitive debt … A regime shift occurs from a state where no one feels the need to ask detailed questions, to a state where there is enough uncertainty that some of the investors begin to ask questions about the underlying collateral and others get concerned about the possibility”

These events are cataclysmic precisely because the liquidity of debt rested on over-collateralisation and trust rather than a precise evaluation of values. Investors are suddenly in the position of equity holders looking for information, but without a market for price discovery. Private information becomes relevant, shattering the shared understanding and beliefs on which liquidity rested (see Morris and Shin (2012) for the general mechanism and Goldstein and Pauzner (2005) for an application to bank runs).

Would transparency have helped contain the contagion?

“A strong believer in the informational efficiency of markets would argue that, once trading in credit default swaps (CDS) and then the ABX index began, there was a liquid market in which bets could be made both ways. The market would find the price of systemic risk based on the best available evidence and that would serve as a warning of an imminent crisis. Pricing of specific default swaps might even impose market discipline on the issuers of the underlying debt instruments”

Shadow banking

 “The rapid growth of shadow banking and the use of complex structured products have been seen as one of the main causes of the financial crisis. It is true that the problems started in the shadow banking system. But before we jump to the conclusion that shadow banking was based on unsound, even shady business practices, it is important to try to understand its remarkable expansion. Wall Street has a hard time surviving on products that provide little economic value. So what drove the demand for the new products?”

 “It is widely believed that the global savings glut played a key role. Money from less developed countries, especially Asia, flowed into the United States, because the US financial system was perceived to be safe … More importantly, the United States had a sophisticated securitisation technology that could activate and make better use of collateral … Unlike the traditional banking system, which kept mortgages on the banks’ books until maturity, funding them with deposits that grew slowly, the shadow banking system was highly scalable. It was designed to manufacture, aggregate and move large amounts of high-quality collateral long distances to reach distant, sizable pools of funds, including funds from abroad.”

“Looking at it in reverse, the shadow banking system had the means to create a lot of “parking space” for foreign money. Securitisation can manufacture large amounts of AAA-rated securities provided there is readily available raw material, that is, assets that one can pool and tranche”

“I am suggesting that it was an efficient transportation network for collateral that was instrumental in meeting the global demand for safe parking space.”

 “The distribution of debt tranches throughout the system, sliced and diced along the way, allowing contingent use of collateral”

“Collateral has been called the cash of shadow banking (European repo council (2014)). It is used to secure large deposits as well as a host of derivative transactions such as credit and interest rate swaps.”  

There is a relatively recent, but rapidly growing, body of theoretical research on financial markets where the role of collateral is explicitly modelled and where the distinction between local and global collateral is important.

“Viewed through this theoretical lens, the rise of shadow banking makes perfectly good sense. It expanded in response to the global demand for safe assets. It improved on traditional banking by making collateral contingent on need and allowing it to circulate faster and attract more distant capital. In addition, securitisation created collateral of higher quality (until the crisis, that is) making it more widely acceptable. When the crisis hit, bailouts by the government, which many decry, were inevitable. But as just discussed, the theory supports the view that bailouts were efficient even as an ex ante policy (if one ignores potential moral hazard problems). Exchanging impaired collateral for high-quality government collateral, as has happened in the current crisis (as well as historically with clearing houses), can be rationalised on these grounds.”

Some policy implications

 A crisis ends only when confidence returns. This requires getting back to the no-questions-asked state ….

Transparency would likely have made the situation worse

“By now, the methods out of a crisis appear relatively well understood. Government funds need to be committed in force (Geithner (2014)). Recapitalisation is the only sensible way out of a crisis. But it is much less clear how the banking system, and especially shadow banking, should be regulated to reduce the chance of crisis in the first place.  The evidence from the past panic suggests that greater transparency may not be that helpful.”

“The logic of over-capitalisation in money markets leads me to believe that higher capital requirements and regular stress tests is the best road for now.”

“Transparency can provide some market discipline and give early warning of trouble for individual banks. But it may also lead to strategic behaviour by management. The question of market discipline is thorny. In good times market discipline is likely to work well. The chance that a bank that is deemed risky would trigger a panic is non-existent and so the bank should pay the price for its imprudence. In bad times the situation is different. The failure of a bank could trigger a panic. In bad times it would seem prudent to be less transparent with the stress tests (for some evidence in support of this dichotomy, see Machiavelli (1532)).”

The Bankers’ New Clothes: Arguments for simpler capital and much reduced leverage

It always pays to make sure you expose yourself to the opposite view. This post looks at some of the arguments for simpler and higher bank capital requirements put forward by Professors Admati and Hellwig. They have published a number of papers and a book on the topic, but this post refers chiefly to their book “The Bankers’ New Clothes” and to a paper “The Parade of the Bankers’ New Clothes Continues: 31 Flawed Claims Debunked”. As I understand it, the key elements of their argument are that:

  • Banks are inherently risky businesses,
  • Excessive borrowing by banks increases their inherent riskiness, but
  • Banks are only able to maintain this excessive level of borrowing because
    • Flawed risk based capital models underestimate the true capital requirements of the business
    • Market discipline also allows excessive borrowing because it is assumed that the government will bail out banks if the situation turns out badly

They identify a variety of ways of dealing with the problem of excessive leverage (controls on bank lending, liquidity requirements and capital requirements) but argue that substantially more common equity is the best solution because:

  • It directly reduces the probability that a bank will fail (i.e. all other things being equal, more common equity reduces the risk of insolvency),
  • A higher level of solvency protection has the added benefit of also reducing the risk of illiquidity, and
  • Contrary to claims by the banking industry, there is no net cost to society in holding more common equity because the dilution in ROE will be offset by a decline in the required return on equity

They concede that there will be some cost associated with unwinding the Too Big To Fail (TBTF) benefit that large banks currently enjoy on both the amount banks can borrow and on the cost of that funding but argue there is still no net cost to society in unwinding this undeserved subsidy. The book, in particular, gets glowing reviews for offering a compelling case for requiring banks to operate with much lower levels of leverage and for pointing out the folly of risk based capital requirements.

There are a number of areas where I find myself in agreement with the points they argue, but I can’t make the leap to accept their conclusion that a much higher capital requirement based on a simple leverage ratio calculation is the best solution. I have written this post to help me think through the challenges they offer to my beliefs about how banks should be capitalised.

It is useful, I think, to first set out the areas where we (well, me at least) might agree in principle with what they say; i.e.

  • Financial crises clearly do impose significant costs on society and excessive borrowing does tend to make a financial system fragile (the trick is to agree what is “excessive”)
  • Better regulation and supervision have a role to play in minimising the risk of bank failure (i.e. market discipline alone is probably not enough)
  • Public policy should consider all costs, not just those of the banking industry
  • All balance sheets embody a trade-off between enterprise risk, return and leverage (i.e. increasing leverage does increase risk)

It is less clear however that:

  • The economics of bank financing are subject to exactly the same rules as that which apply to non-financial companies (i.e. rather than asserting that banks should be compared with non-financial companies, it is important to understand how banks are different)
  • A policy of zero failure for banks is necessarily the right one, or indeed even achievable (i.e. would it be better to engineer ways in which banks can fail without dragging the economy down with them)
  • Fail safe mechanisms, such as the bail in of pre-positioned liabilities, have no prospect of working as intended
  • The assertion that “most” of the new regulation intended to make banks safer and easier to resolve has been “rejected, diluted or delayed” is a valid assessment of what has actually happened under Basel III
  • Liquidity events requiring lender of last resort support from the central bank are always a solvency problem

Drawing on some previous posts dealing with these issues (see here, here and here), I propose to focus on the following questions:

  • How does the cost of bank financing respond to changes in leverage?
  • Are the risk based capital requirements as fundamentally flawed as the authors claim?
  • Are risk management incentives for bankers always better when they are required to hold increasing levels of common equity?
  • Do the increased loss absorption features of Basel III compliant hybrids (in particular, the power to trigger conversion or bail-in of the instruments) offer a way to impose losses on failed banks without disrupting the economy or requiring public support?

How does leverage affect the cost of bank financing?

Increasing the proportion of equity funding, the authors argue, reduces the risk that shareholders are exposed to because each dollar of equity they have invested

“will be affected less intensely by the uncertainty associated with the investments”

“when shareholders bear less risk per dollar invested, the rate of return they require is lower”

“Therefore, taking the costs of equity as fixed and independent of the mix of equity and debt involves a fundamental fallacy”.

The Bankers’ New Clothes (p101)

The basic facts they set out are not really contentious; the mix of debt and equity does impact required returns. The authors focus on what happens to common equity, but changing leverage impacts both debt and equity. This is very clear in the way that rating agencies consider all of the points nominated by the authors when assigning a debt rating. Reduced equity funding will likely lead to a decline in the senior and subordinated debt ratings and higher costs (plus reduced access to funding in absolute dollar terms), while higher equity will be a positive rating factor.
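The mechanics the authors rely on are those of Modigliani-Miller proposition II. A rough sketch with illustrative numbers (mine, not theirs) shows the required return on equity rising with leverage while the blended funding cost stays put, in a frictionless world with the cost of debt held constant:

```python
def required_return_on_equity(r_assets, r_debt, debt, equity):
    """Modigliani-Miller proposition II: r_E = r_A + (D/E) * (r_A - r_D).
    Leverage raises the required return on equity; in a frictionless
    world the overall funding cost is unchanged."""
    return r_assets + (debt / equity) * (r_assets - r_debt)

# 5% asset return, 3% cost of debt (illustrative figures only).
r_e_thin = required_return_on_equity(0.05, 0.03, debt=90, equity=10)   # 10% equity
r_e_thick = required_return_on_equity(0.05, 0.03, debt=80, equity=20)  # 20% equity

# The blended cost of funding is 5% in both cases.
wacc_thin = 0.10 * r_e_thin + 0.90 * 0.03
wacc_thick = 0.20 * r_e_thick + 0.80 * 0.03
print(round(r_e_thin, 4), round(r_e_thick, 4))    # 0.23 0.13
print(round(wacc_thin, 4), round(wacc_thick, 4))  # 0.05 0.05
```

The real argument is then about the frictions the proposition ignores (deposits, the implicit government guarantee, taxes), which is where the discussion below picks up.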

Banks are not immune to these fundamental laws, but it is still useful to understand how the outcomes are shaped by the special features of a bank balance sheet. My views here bear on two of the claims they “debunk” in their paper, specifically:

Flawed Claim #4: The key insights from corporate finance about the economics of funding, including those of Modigliani and Miller, are not relevant for banks because banks are different from other companies

Flawed Claim #5: Banks are special because they create money

One of the features that defines a bank is the ability to take deposits. The cost of deposits, however, tends to be insulated from the effects of leverage. This is a design feature. Bank deposits are a major component of the money supply, but they need to be insensitive to adverse information about the issuing bank in order to function as money.

Wanting bank deposits to be information insensitive does not make them so. That is a function of their super senior position in the liability loss hierarchy, supplemented in many, if not most, banking systems by some form of limited deposit insurance (1). I credit a paper by Gary Gorton and George Pennacchi titled “Financial Intermediaries and Liquidity Creation” for crystallising this insight (an earlier post offers a short summary of that paper). Another paper titled “Why Bail-In? And How?” by Joseph Sommer proposes a different rationale for deposits having a super senior position insulated from the risk of insolvency, but the implications for the impact of leverage on bank financing costs are much the same.

A large bank also relies on senior unsecured financing. This class of funding is more risky than deposits but still typically investment grade. This again is a design feature. Large banks target an investment grade rating in order to deliver not only competitive financing costs but also (and perhaps more importantly) access to a larger pool of potential funding over a wider range of tenors. The investment grade rating depends, of course, on there being sufficient loss absorbing capital underwriting that outcome. There is no escaping this law of corporate finance.

The debt rating of large banks is of course also tied up with the issue of banks being treated as Too Big To Fail (TBTF). That is a distortion in the market that needs to be addressed and the answer broadly is more capital though the rating agencies are reasonably agnostic on the form this capital should take in so far as the senior debt rating is concerned. Subject to having enough common equity anchoring the capital structure, more Tier 2 subordinated debt (or Tier 3 bail-in) will work just as well as more common equity for the purposes of reducing the value of implied government support currently embedded in the long term senior debt rating.

Admati and Hellwig are right – there is no free lunch in corporate finance

Ultimately, all of this risk has to go somewhere. On that point I completely agree with Admati and Hellwig. There is no free lunch; the rating and risk of the senior tranches of financing depend on having enough of the right kinds of loss absorbing capital standing before them in the loss hierarchy. Where I part company is on the questions of how much capital is enough and what form it should take.

How much capital is (more than) enough?

Admati and Hellwig’s argument for more bank capital has two legs. Firstly, they note that banks are typically much more leveraged than industrial companies and question how this can be, given the capital structure irrelevance principle defined by Modigliani and Miller. Secondly, they argue that risk based capital requirements are fundamentally flawed and systematically underestimate how much capital is required.

Why are banks different?

Admati and Hellwig note that banks have less capital than industrial companies and conclude that this must be a result of the market relying on the assumption that banks will be bailed out. The existence of a government support uplift in the senior debt ratings of large banks is, I think, beyond debate. There is also broad agreement (even amongst many bankers) that this is not sound public policy and should ideally be unwound.

It is not obvious however that this wholly explains the difference in observed leverage. Rating agency models are relatively transparent in this regard (S&P in particular) and the additional capital required to achieve a rating uplift equivalent to the existing government support factor would still see banks more leveraged than the typical industrial company. Bank balance sheets do seem to be different from those of industrial companies.

Flawed risk models

The other leg of their argument is that risk based capital fundamentally underestimates capital requirements. I am broadly sympathetic to the sceptical view on how to use the outputs of risk models and have been for some time. An article I wrote in 2008, for example, challenged the convention of using the probability of default associated with the target debt rating to precisely calibrate the amount of capital a bank required.

The same basic concept of highly precise, high confidence level capital requirements is embedded in the Internal Ratings Based formula and was part of the reason the model results were misinterpreted and misused. Too many people assigned a degree of precision to the models that was not warranted. That does not mean however that risk models are totally useless.

Professors Admati and Hellwig use simple examples (e.g. how does the risk of loss increase if a personal borrower increases leverage on a home loan) to argue that banks need to hold more capital. While the basic principle is correct (all other things equal, leverage does increase risk), the authors’ discussion does not draw much (or possibly any?) attention to the way that requiring a borrower to have equity to support their borrowing reduces a bank’s exposure to movements in the value of the loan collateral.

In the examples presented, any decline in the value of the assets being financed flows through directly to the value of equity, with the inference that this would be true of a bank also. In practice, low risk weights assigned by banks to certain (low default – well secured) pools of lending reflect the existence of borrower’s equity that will absorb the first loss before the value of the loan itself is called into question.

A capital requirement for residential mortgages (typically one of the lowest risk weights and also one of the most significant asset classes) that looks way too low when you note that house prices can easily decline by 10 or 20 per cent starts to make more sense when you recognise that there is (or should be) a substantial pool of borrower equity taking the brunt of the initial decline in the value of the collateral. The diversity of borrowers is also an important factor in reducing the credit risk of the exposures (though not necessarily the systemic risk of an overall meltdown in the economy). Where that is not the case (and hence the renewed focus on credit origination standards and macroprudential policy in general), low risk weights are not justified.
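A stylised sketch of the point (my numbers, purely for illustration): with an 80 per cent loan-to-valuation ratio, the borrower’s 20 per cent equity absorbs a price decline of that size in full before the bank is exposed to loss, even if the borrower then defaults:

```python
def bank_loss_on_default(loan, property_value, decline):
    """Loss to the bank if the borrower defaults after the collateral
    falls in value: the borrower's equity absorbs the first loss."""
    recovered = property_value * (1 - decline)
    return max(loan - recovered, 0.0)

# 80% LVR loan: a 20% fall in house prices wipes out the borrower's
# equity but leaves the bank whole, even on default.
assert bank_loss_on_default(80, 100, 0.20) == 0.0

# The bank only loses once the decline exceeds the borrower's equity.
print(bank_loss_on_default(80, 100, 0.25))  # 5.0
```

The sketch also shows why origination standards matter so much: the low risk weight is only defensible while the borrower equity buffer is actually there.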

I recognise that this argument (incorporating the value of the borrower’s equity) does not work for traded assets where the mark to market change in the value of the asset flows directly to the bank’s equity. It does however work for the kinds of assets on bank balance sheets that typically have very low risk weights (i.e. the primary concern of the leverage ratio advocates). It also does not preclude erring on the side of caution when calculating risk weights so long as the model respects the relative riskiness of the various assets impacting the value of equity.

How much also depends on the quality of risk management (and supervision)

The discussion of how much capital a bank requires should also recognise the distinction between how much a well managed bank needs and how much a poorly managed bank needs. In a sense, the authors are proposing that all banks, good and bad, should be made to hold the capital required by bad banks. Their focus on highlighting the risks of banking obscures the fact that prudent banking mitigates the downside and that well managed banks are not necessarily consigned to the extremes of risk the authors present as the norm of banking.

While not expressed in exactly that way, the distinction I am drawing is implicit in Basel III’s Total Loss Absorbing Capital (TLAC) requirements now being put in place. TLAC adds a substantial layer of additional loss absorption on top of already substantially strengthened common equity requirements. The base layer of capital can be thought of as what is required for a well managed, well supervised bank with a sound balance sheet and business model. APRA’s “Unquestionably Strong” benchmark for CET1 is a practical example of what this requirement looks like. The problem of course is that all banks argue they are good banks but the risk remains that they are in fact bad banks and we usually don’t find out the difference until it is too late. The higher TLAC requirement provides for this contingency.

What should count as capital?

I looked at this question in a recent post on the RBNZ’s proposal that virtually all of their TLAC requirement should be comprised of common equity. Admati and Hellwig side with the RBNZ but I believe that a mix of common equity and bail-in capital (along the lines proposed by APRA) is the better solution.

Read my earlier post for the long version, but the essence of my argument is that bail-in capital introduces a better discipline over bank management risk appetite than does holding more common equity. Calibrating common equity requirements to very high standards should always be the foundation of a bank capital structure. Capital buffers in particular should be calibrated to withstand very severe external shocks and to be resilient against some slippage in risk management.

The argument that shareholders need more “skin in the game” is very valid where the company is undercapitalised. Bail-in capital is not a substitute for getting the basics right. A bank that holds too little common equity, calibrated to an idealised view of both its own capabilities and of the capacity of the external environment to surprise the modellers, will likely find itself suppressing information that does not fit the model. Loss aversion then kicks in and management start taking more risk to win back what was lost, just as Admati and Hellwig argue.

However, once you have achieved a position that is unquestionably strong, holding more common equity does not necessarily enhance risk management discipline. My experience in banking is that it may in fact be more likely to breed an undesirable sense of complacency or even to create pressure to improve returns. I know that the latter is not a winning strategy in the long run but in the short run the market frequently does not care.

What is the minimum return an equity investor requires?

One of the problems I find with a simplistic application of Modigliani and Miller’s (M&M) capital structure irrelevance argument is that it does not consider whether there is a minimum threshold return for an equity investment, below which the investment is no longer sufficiently attractive to investors who are being asked to take a first loss position in a company; i.e. where is the line between debt and equity at which a return is simply not high enough to attract equity investors?

Reframing the question in this way suggests that the debate between the authors and the bankers may be more about whether risk based capital adequacy models (including stress testing) can be trusted than it is about the limitations of M&M in the real world.

Summary

The authors’ solution to prudential supervision of banks is a shock and awe approach to capital that seeks to make the risk of insolvency de minimis for good banks and bad. I have done my best to be open to their arguments and indeed do agree with a number of them. My primary concern with the path they advocate is that I do not believe the extra “skin in the game” generates the risk management benefits they claim.

I see more potential in pursuing a capital structure based on

  • a level of common equity that is robustly calibrated to the needs of a well managed (and well supervised) bank
  • incorporating a well designed counter cyclical capital buffer,
  • supplemented with another robust layer of bail-in capital that imposes real costs (and accountability) on the shareholders and management of banks for whom this level of common equity proves insufficient.

The authors argue that the authorities would never use these bail-in powers for fear of further destabilising funding markets. This is a valid area of debate but I believe they conflate the risks of imposing losses on bank depositors with the kinds of risks that professional bond investors have traditionally absorbed over many centuries of banking. The golden era in which the TBTF factor shielded bank bondholders from this risk is coming to an end but this broader class of bond investors has dealt with defaults by all kinds of borrowers. I am not sure why banks would be special in this regard if countries can default. The key issue is that the investors enter into the contract with the knowledge that they are at risk and are being paid a risk premium commensurate with the downside (which may not be that large if investors judge the banks to be well managed).

This is a complex topic so please let me know if I have missed something fundamental or have otherwise mis-represented Admati and Hellwig’s thesis. In the interim, I remain mostly unconvinced …

Tony

  1. It is worth noting that NZ has adopted a different path with respect to deposit protection, rejecting both deposit preference and deposit insurance. They also have a unique policy tool (Open Bank Resolution) that allows the RBNZ to impose losses on deposits as part of the resolution process. They are reviewing the case for deposit insurance and I believe they should also reconsider deposit preference.

What should count as bank capital?

My last post looked at a RBNZ consultation paper which addressed the question “How much capital is enough?”. The overall quantum of capital the RBNZ arrived at (16% of RWA plus) seemed reasonable but it was less obvious that relying almost entirely on CET1 was the right solution. That prompted me to revisit an earlier consultation paper in which the RBNZ set out its case for why it did not want contingent capital instruments to play a significant role in the capital structure of the banks it supervises. This post explores the arguments the RBNZ marshals to support its position as part of a broader exploration of the debate over what counts as capital.

The traditional approach to this question assumes that common equity is unquestionably the best form of capital from the perspective of loss absorption. Consequently, the extent to which alternative forms of funding count as capital is judged by common equity benchmarks; e.g. the extent to which the funding is a permanent commitment (i.e. no maturity date) and the returns paid to investors depend on the profitability or capacity of the company to pay (failure to pay is not an event of default).

There is no dispute that tangible common equity unquestionably absorbs loss and is the foundation of any company’s capital structure but I believe contingent convertible capital instruments do potentially add something useful to the bank capital management toolkit. I will attempt to make the case that a foundation of common equity, supplemented with some debt that converts to common equity if required, is better than a capital structure comprised solely or largely of common equity.

The essence of my argument is that there is a point in the capital structure where adding contingent convertible instruments enhances market discipline relative to just adding more common equity. The RBNZ discusses the potential value of these structures in their consultation paper:

49. The theoretical literature on contingent debt explores how these instruments might reduce risk (i.e. lower the probability of insolvency) for an individual bank.  

50. Two effects have been identified. Firstly, adding contingent debt to a bank’s balance sheet directly increases the loss absorbing potential of the bank, relative to issuing pure debt (but not relative to acquiring more common equity). This follows directly from the fact that removing the debt is an essential part of every contingent debt instrument. Secondly, depending on the terms, contingent capital may cause bank management to target a lower level of risk (incentive effects). In other words, in theory, a contingent debt instrument both reduces the probability a bank will incur losses and absorbs losses that do eventuate. Because of both these factors, contingent debt is expected, in theory, to reduce the risk of bank failure.  

51. Focusing on the second of these effects, management incentives, it matters whether, when the debt is written off, holders are compensated in the form of newly issued shares (“conversion”). If conversion is on such a scale as to threaten existing shareholders with a loss of control of the bank, it will be optimal for bank management to target a lower level of risk exposure for a given set of circumstances than would have been the case otherwise. For example, bank management may be less tolerant of asset volatility, and more likely to issue new equity to existing shareholders, when capital is low rather than risk triggering conversion.

RBNZ Capital Review Paper 2: What should qualify as bank capital? Issues and Options (para 49 – 51) – Emphasis added

So the RBNZ does recognise the potential value of contingent debt instruments which convert into common equity but chose to downplay the benefits while placing much greater weight on a series of concerns it identified.

What’s in a name – The RBNZ Taxonomy of Capital

Before digging into the detail of the RBNZ concerns, it will be helpful to first clarify terminology. I am using the term Contingent Convertible Instruments for my preferred form of supplementary capital whereas much of the RBNZ paper focuses on what it refers to as “Contingent debt instruments“, which it defines in part as “debt that absorbs loss via write-off, which may or may not be followed by conversion”.

I had not picked this up on my first read of the RBNZ paper but came to realise we are talking slightly at cross purposes. The key words to note are “contingent” and “convertible”.

  • The “contingent” part of these instruments is non-negotiable if they are to be accepted as bank regulatory capital. The contingency is either a “non-viability event” (e.g. the supervisor determines that the bank must increase common equity to remain viable) or a CET1 ratio of 5.125% or less (what APRA terms a “loss absorption trigger” and the RBNZ refers to as a “going-concern trigger”)
  • “Conversion” however is optional. Loss absorption is non-negotiable for bank regulatory capital but it can be achieved in two ways. I have argued that loss absorption is best achieved by converting these capital instruments into common equity but prudential regulation is satisfied so long as the instruments are written-off.

I had taken it as given that these instruments would be convertible but the RBNZ places more emphasis on the possibility that conversion “may or may not” follow write-off. Small point but worth noting when evaluating the arguments.

Why does conversion matter?

The RBNZ understandably focuses on the write-off part of the loss absorption process whereas I focus on conversion because it is essential to preserving a loss hierarchy that allocates losses to common equity in the first instance. If we ignore for a moment the impact of bail-in (either by conversion or write-off), the order in which losses are applied to the various sources of funding employed by a bank follows this loss hierarchy:

  • Going Concern:
    • Common Equity Tier 1 (CET1)
    • Additional Tier 1 (AT1)
  • Insolvency – Liquidation or restructuring:
    • Tier 2 (T2)
    • Senior unsecured
    • Super senior
      • Covered bonds
      • Deposits
      • Insured deposits
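
The hierarchy above can be sketched as a simple waterfall in which losses exhaust each tier in turn before touching the next. The tier names follow the list; the amounts are hypothetical:

```python
# A minimal sketch of the liquidation loss hierarchy (hypothetical amounts).
# Losses are applied to each tier in order, most junior first, until exhausted.
def allocate_losses(loss, tiers):
    """tiers: list of (name, amount) pairs, ordered most junior first."""
    allocation = {}
    for name, amount in tiers:
        absorbed = min(loss, amount)
        allocation[name] = absorbed
        loss -= absorbed
    return allocation, loss  # any residual loss falls on more senior claims

tiers = [("CET1", 50), ("AT1", 10), ("T2", 15), ("Senior unsecured", 100)]
allocation, residual = allocate_losses(70, tiers)
print(allocation)  # CET1 and AT1 wiped out; T2 absorbs 10 of its 15
```
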

Under bail-in, writing off a contingent capital instrument generates an increase in common equity that accrues to the existing ordinary shareholders thereby negating the traditional loss hierarchy that requires common equity to be exhausted before more senior instruments can be required to absorb loss.

Conversion is a far better way to effect loss absorption because ordinary shareholders still bear the brunt of any loss, albeit indirectly via the dilution of their shareholding (and associated share price losses). In theory, conversion shields the AT1 investors from loss absorption because they receive common equity equivalent in value to the book value of their claim on the issuer. In practice, it is less clear that the AT1 investors will be able to sell the shares received at the conversion price or better but they are still better off than if they had simply seen the value of their investment written-off. If you are interested in digging deeper, this post looks at how loss absorption works under bail-in.
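
A stylised example (all figures hypothetical) illustrates why conversion preserves the loss hierarchy while write-off alone hands existing shareholders a windfall:

```python
# Hypothetical numbers: a bank with 100 shares on issue and an AT1 note with
# a book value of 50. Removing the debt lifts the equity value either way;
# the question is who captures that uplift.
def post_bailin_stakes(equity_value, at1_face, shares, conversion_price=None):
    if conversion_price is None:        # write-off: no new shares are issued
        new_shares = 0
    else:                               # conversion: AT1 holders receive shares
        new_shares = at1_face / conversion_price
    total = shares + new_shares
    value = equity_value + at1_face     # equity uplift from extinguishing the debt
    return {
        "existing shareholders": shares / total * value,
        "AT1 holders": new_shares / total * value,
    }

# Write-off: the entire uplift accrues to existing shareholders.
print(post_bailin_stakes(20, 50, 100))
# Conversion: existing shareholders are diluted and AT1 holders share the value.
print(post_bailin_stakes(20, 50, 100, conversion_price=0.5))
```
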

The RBNZ does recognise this dynamic but still chose to reject these advantages so it is time to look at their concerns.

RBNZ concerns with contingent capital

The RBNZ identified six concerns to justify its in-principle decision to exclude the use of contingent capital instruments in the NZ capital adequacy framework.

  1. Possible under-estimation of the tax effects of contingent debt
  2. Reliance on parent entities as purchasers of AT1 contingent debt
  3. Not suitable for retail investors
  4. Banks structured as mutual societies cannot offer contingent debt that includes conversion into common equity
  5. Potential for regulatory arbitrage arising from the tension between tax and capital regulation
  6. Difficulties with exercising regulatory oversight of contingent debt

I don’t imagine the RBNZ is much concerned with my opinion but I don’t find the first three concerns compelling. I set out my reasons later in the post but will focus for the moment on three issues that I think do bear deeper consideration. You do not necessarily have to agree with the RBNZ assessment, or the weight it assigns to these concerns, but I believe they must be addressed if we are to make the case for contingent debt.

Stronger arguments against contingent debt

1) Contingent debt gives the larger, listed banks a competitive advantage over mutual societies that are unable to issue ordinary shares

The RBNZ notes that all New Zealand banks are able to issue a version of contingent debt that qualifies as capital, but that some types of banks may have access to a broader – and cheaper – range of capital opportunities than others. The current definition of capital is thus in part responsible for a somewhat uneven playing field.

The primary concern seems to be banks structured as mutual societies, which are unable to issue ordinary shares. They cannot offer contingent debt that includes conversion and must rely on the relatively more expensive option of writing off the debt to effect loss absorption.

I think this is a reasonable concern but I also believe there may be ways to deal with it. One option is for these banks to issue Mutual Equity Interests as has been proposed in Australia. Another option (also based on an Australian proposal) is that the increased requirements for loss absorbing capital be confined to the banks which cannot credibly be allowed to fail or be resolved in any other way. I recognise that this option benefits from the existence of deposit insurance which NZ has thus far rejected.

I need to do a bit more research on this topic so I plan to revisit the way we deal with small banks, and mutuals in particular, in a future post.

2) Economic welfare losses due to regulatory arbitrage opportunities in the context of contingent debt

The tax treatment of payments to security holders is one of the basic tests for determining if the security is debt or equity but contingent debt instruments don’t fall neatly into either box. The conversion terms tied to PONV triggers make the instruments equity like when the issuer is under financial stress while the contractual nature of the payments to security holders makes them appear more debt like under normal operating conditions.

I can see a valid prudential concern but only to the extent the debt like features the tax authority relied on in making its determination regarding tax-deductibility somehow undermined the ability of the instrument to absorb loss when required.

There have been instances where securities have been mis-sold to unsophisticated investors (the Monte dei Paschi di Siena example cited by the RBNZ is a case in point) but it is less obvious that retail investment by itself is sufficient cause to rule out this form of capital.

The only real difference I see over conventional forms of debt is the line where their equity like features come into play. Conventional debt is only ever at risk of loss absorption in the event of bankruptcy where its seniority in the loss hierarchy will determine the extent to which the debt is repaid in full. These new forms of bank capital bring forward the point at which a bank balance sheet can be restructured to address the risk that the restructuring undermines confidence in the bank. The economics of the restructuring are analogous so long as losses are allocated by conversion rather than by write-off alone.

3) Difficulties experienced with the regulatory oversight of contingent debt

Possibly their core concern is that overseeing instrument compliance is a complex and resource-intensive process that the RBNZ believes does not fit well with its regulatory model that emphasises self-discipline and market discipline. The RBNZ highlights two concerns in particular.

  • Firstly the RBNZ has chosen to respond to the challenge of vetting these instruments by instituting a “non-objection process” that places the onus on issuers to confirm that their instruments comply with the capital adequacy requirements.
  • Secondly, notwithstanding the non-objection process, the added complexity of the instruments relative to common equity still requires a significant call on prudential resources.

This, I think, is the strongest objection the RBNZ raises against contingent debt. Contingent debt securities are clearly more complex than common equity so the RBNZ quite reasonably argues that they need to bring something extra to the table to justify the time, effort and risk associated with them. There is virtually no justification for them if they do, as the RBNZ asserts, work against the principles of self and market discipline that underpin its regulatory philosophy.

Three not so compelling reasons for restricting the use of contingent capital instruments (“in my humble opinion”)

1) Possible under-estimation of the tax effects of contingent debt

The first concern relates to the RBNZ requirement that banks must acknowledge any potential tax implications arising from contingent debt and reflect these potential “tax offsets” in the reported value of capital. Banks are required to obtain a binding ruling from the NZ tax authority (or voluntarily take a tax “haircut”). The RBNZ acknowledges that a binding ruling can provide comfort that tax is fully accounted for under prudential requirements, but quite reasonably argues that this will only be the case if the ruling that is sought is appropriately specified so as to capture all relevant circumstances.

The RBNZ’s specific concern seems to be what happens when no shares are issued in the event of the contingent loss absorption feature being triggered and hence no consideration is paid to investors in exchange for writing off their debt claim. The bank has made a gain that in principle would create a tax liability but it also seems reasonable to assume that the write off could only occur if the bank was incurring material losses. It follows then that the contingent tax liability created by the write off is highly likely to be set off against the tax losses such that there is no tax to pay.
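
A rough sketch of that set-off argument, using hypothetical figures and a flat tax rate (the function and the 28% rate are my illustrative assumptions, not the tax authority's method):

```python
# Illustrative only: hypothetical figures and a flat tax rate; not tax advice.
def tax_on_writeoff(operating_loss, writeoff_gain, tax_rate=0.28):
    """Tax payable when a debt write-off gain coincides with trading losses.

    The gain on extinguishing the debt is taxable in principle, but it is
    set off against the operating losses that triggered the write-off.
    """
    taxable_income = writeoff_gain - operating_loss
    return max(taxable_income, 0.0) * tax_rate

# The write-off could only occur alongside material losses, so the gain
# is highly likely to be fully sheltered by those losses.
print(tax_on_writeoff(operating_loss=400, writeoff_gain=150))  # 0.0
```
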

I am not a tax expert so I may well be missing something but I can’t see a practical risk here. Even in the seemingly unlikely event that there is a tax payment, the money represents a windfall gain for the public purse. That said, I recognise that the reader must still accept my argument regarding the value of having the conversion option to consider it worth dealing with the added complexity.

2) A reliance on parent entities as purchasers of AT1 contingent debt

The RBNZ and I agree that one of the key planks in the case for accepting contingent debt as bank capital is the beneficial impact on bank risk taking generated by the risk of dilution, but the RBNZ argues this beneficial impact is less than it could be when the instrument is issued by a NZ subsidiary to its publicly listed parent.

I may be missing something here but the parent is exposed to dilution if the Non-Viability or Going Concern triggers are hit so I can’t see how that reduces the incentive to control risk unless the suggestion is that NZ management will somehow have the freedom to pursue risky business strategies with no input from their ultimate owners.

3) Retail investors have acquired contingent debt

The RBNZ cites some statistical evidence that suggests that, in contrast to the experience overseas, there appears to be limited uptake by wholesale investors of contingent debt issued by the big four banks. This prompts them to question whether the terms being offered on instruments issued outside the parent group are sufficiently attractive for sophisticated investors. This concern seems to be predicated on the view that retail will always be the least sophisticated investors so banks will seek to take advantage of their relative lack of knowledge.

It is arguably true that retail investors will tend to be less sophisticated than wholesale investors, but that should not in itself lead to the conclusion that any issue targeted at retail is a cynical attempt at exploitation; retail investors might also legitimately value something differently to the way other investors do. The extent to which the structures issued by the Australian parents have thus far concentrated on retail, for example, might equally be explained by the payment of franking credits that are more highly valued by the retail segment. Offshore institutions might also have been negative on the Australian market, pushing Australian banks to focus their efforts in the domestic market.

I retain an open mind on this question and need to dig a bit deeper but I don’t see how the fact that retail investment dominates the demand for these structures at a point in time can be construed to be proof that they are being mis-sold.

The RBNZ’s answer ultimately lies in their regulatory philosophy

The reason that the RBNZ rejects the use of these forms of supplementary capital ultimately appears to lie in its regulatory philosophy, which is based on the following principles:

  • Self discipline on the part of the financial institutions they supervise
  • Market discipline
  • Deliberately conservative
  • Simplicity

The RBNZ also acknowledges the value of adopting BCBS-consistent standards but this is not a guiding principle. It reserves the right to adapt them to local needs and, in particular, to be more conservative. It should also be noted that the RBNZ has quite deliberately rejected adopting deposit insurance on the grounds (as I understand it) that this encourages moral hazard. They take this a step further by foregoing any depositor preference in the loss hierarchy and by a unique policy of Open Bank Resolution (OBR) under which deposits are explicitly included in the liabilities that can be written down if needed to assist in the recapitalisation of an insolvent bank.

In theory, the RBNZ might have embraced contingent convertible instruments on the basis of their consistency with the principles of self and market discipline. The threat of dilution via conversion of the instrument into common equity creates powerful incentives not just for management to limit excessive risk taking but also for the investors to exert market discipline where they perceive that management is not exercising self-discipline.

In practice, the RBNZ seems to have discounted this benefit on the grounds that there is too much risk, either by design or by some operational failure, that these instruments might not convert to common equity. They also seem quite concerned with structures that eschew conversion (i.e. loss absorption effected by write-off alone) but they could have excluded just these instruments rather than imposing a blanket ban. Having largely discounted or disregarded the potential benefit, the principles of deliberate conservatism and simplicity dictate their proposed policy position: common equity rules.

Summing up

This post only scratches the surface of this topic. My key point is that contingent convertible capital instruments potentially add something useful to the bank capital management toolkit compared to relying entirely on common equity. The RBNZ acknowledge the potential upside but ultimately argue that the concerns they identify outweigh the potential benefits. I have reviewed their six concerns in this post but need to do a bit more work to gain comfort that I am not missing something and that my belief in the value of bail-in based capital instruments is justified.

Tony

How much capital is enough? – The NZ perspective

The RBNZ has delivered the 4th instalment in a Capital Review process that was initiated in March 2017 and has a way to run yet. The latest consultation paper addresses the question “How much capital is enough?”.  The banking industry has until 29 March 2019 to respond with their views but the RBNZ proposed answer is:

  • A Tier 1 capital requirement of 16% of RWA for systemically important banks and 15% of RWA for all other banks
  • The Tier 1 minimum requirement to remain unchanged at 6% (with AT1 capital continuing to be eligible to contribute a maximum of 1.5 percentage points)
  • The proposed increased capital requirement to be implemented via an overall prudential capital buffer of 9-10% of RWA comprised entirely of CET1 capital;
    • Capital Conservation Buffer 7.5% (currently 2.5%)
    • D-SIB Buffer 1.0% (no change)
    • Counter-cyclical buffer 1.5% (currently 0%)
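
The proposed calibration reconciles to the 16% headline figure as follows (a simple restatement of the numbers above):

```python
# Components of the proposed Tier 1 requirement for a systemically important
# NZ bank, per the consultation paper (all figures are percentages of RWA).
tier1_minimum = 6.0           # unchanged; AT1 may contribute up to 1.5
conservation_buffer = 7.5     # up from 2.5
dsib_buffer = 1.0             # unchanged
countercyclical_buffer = 1.5  # up from 0

total_buffer = conservation_buffer + dsib_buffer + countercyclical_buffer
print(total_buffer)                  # 10.0 - the top of the 9-10% buffer range
print(tier1_minimum + total_buffer)  # 16.0 - the proposed Tier 1 requirement
```

For a non-systemic bank the D-SIB buffer drops away, giving the 15% requirement.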

The increase in the capital ratio requirement is proposed to be supplemented with a series of initiatives that will increase the RWA of IRB banks:

  • The RBNZ proposes to 1) remove the option to apply IRB RW to sovereign and bank exposures,  2) increase the IRB scalar (from 1.06 to 1.20) and 3) to introduce an output floor set at 85% of the Standardised RWA on an aggregate portfolio basis
  • As at March 2018, RWA’s produced by the IRB approach averaged 76% of the Standardised Approach and the RBNZ estimate that the overall impact will be to increase the aggregate RWA to 90% of the outcome generated by the Standardised approach (i.e. the IRB changes, not the output floor, drive the increase in RWA)
  • Aggregate RWA across the four IRB banks therefore increases by approximately 16%, or $39bn, compared to March 2018 but the exact impact will depend on how IRB banks respond to the higher capital requirements
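
As a rough cross-check of the stated RWA impact (a simplification of mine that assumes the before/after ratios apply uniformly across the IRB banks' portfolios):

```python
# Rough reconstruction of the stated impact, hypothetical simplification:
# treat the aggregate IRB-to-Standardised ratios as if they applied uniformly.
irb_to_std_before = 0.76  # IRB RWA as a share of Standardised RWA, Mar 2018
irb_to_std_after = 0.90   # RBNZ estimate after the proposed changes

uplift = irb_to_std_after / irb_to_std_before - 1
print(f"{uplift:.1%}")  # ~18% on this crude basis; the RBNZ's own estimate
                        # is ~16% ($39bn), reflecting portfolio-level detail
```

The gap between the crude ratio arithmetic and the RBNZ's ~16% figure is a reminder that the final impact depends on each bank's mix of exposures (and how the banks respond).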

The RBNZ has also posed the question whether a Tier 2 capital requirement continues to be relevant given the substantial increase in Tier 1 capital.

Some preliminary thoughts …

There is a lot to unpack in this paper so this post will only scratch the surface of the issues it raises …

  • The overall number that the RBNZ proposes (16%) is not surprising. It looks to be at the lower end of what other prudential regulators are proposing in nominal terms
  • But is in the same ball park once you allow for the substantial increase in IRB RWA and the fact that it is pretty much entirely CET1 capital
  • What is really interesting is the fundamentally different approach that the RBNZ has adopted to Tier 2 capital and bail-in versus what APRA (and arguably the rest of the world) has adopted
    • The RBNZ proposal that the increased capital requirement take the form of CET1 capital reflects its belief that “contingent convertible instruments” should be excluded from what counts as capital
    • Exactly why the RBNZ has adopted this position is a complex post in itself (their paper on the topic can be found here) but the short version (as I understand it) is that they think bail-in capital instruments triggered by non-viability are too complex and probably won’t work anyway.
    • Their suggestion that Tier 2 probably does not have a role in the capital structure they have proposed is logical if you accept their premise that Point of Non-Viability (PONV) triggers and bail-in do not work.
  • The RBNZ highlight a significantly enhanced role for prudential capital buffers. I am generally in favour of bigger, more dynamic capital buffers rather than higher fixed minimum requirements and I have argued previously in favour of the base rate for the counter-cyclical buffer being a positive value (the RBNZ propose 1.5%)
    • But the overall size of the total CET1 capital buffer requirement requires some more considered thought about 1) the role of bail-in  structures and PONV triggers in the capital regulation toolkit (as noted above) and 2) whether the impacts of the higher common equity requirement will be as benign as the RBNZ analysis suggests
  • I am also not sure that the indicative capital conservation responses they have outlined (i.e. discretionary distributions limited to 60% of net earnings in the first 250bp of the buffer, falling to 30% in the next 250bp and no distributions thereafter) make sense in practice.
    • This is because I doubt there will be any net earnings to distribute if losses are sufficient to reduce CET1 capital by 250bp so the increasing capital conservation requirement is irrelevant.
  • Last, but possibly most importantly, we need to consider the impact on the Australian parents of the NZ D-SIB banks and how APRA responds. The increase in CET1 capital proposed for the NZ subsidiaries implies that, for any given amount of CET1 capital held by the Level 2 Banking Group, the increased strength of the NZ subsidiaries will be achieved at the expense of the Australian banking entities
    • Note however that the impact of the higher capital requirement in NZ will tend to be masked by the technicalities of how bank capital ratios are calculated.
      • It probably won’t impact the Level 2 capital ratios at all since these are a consolidated view of the combined banking group operations of the Group as a whole
      • The Level 1 capital ratios for the Australian banks also treat investments in bank subsidiaries relatively generously (capital invested in unlisted subsidiaries is treated as a 400% risk weighted asset rather than a capital deduction).
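
The indicative capital conservation schedule mentioned above (60% of net earnings in the first 250bp of the buffer, 30% in the next 250bp, nothing thereafter) can be sketched as follows; the function and the framing of buffer usage in basis points of CET1 are my illustrative reading of the paper:

```python
# One reading of the indicative capital conservation schedule: the deeper
# a bank sits in its prudential buffer, the less it may distribute.
def max_distribution_ratio(buffer_depletion_bp):
    """Share of net earnings distributable, given how far (in basis points
    of CET1) the bank has eaten into its prudential capital buffer."""
    if buffer_depletion_bp <= 250:
        return 0.60
    elif buffer_depletion_bp <= 500:
        return 0.30
    else:
        return 0.0

print(max_distribution_ratio(100))  # 0.6 - top of the buffer
print(max_distribution_ratio(400))  # 0.3 - second tier
print(max_distribution_ratio(600))  # 0.0 - no distributions
```

The sketch also makes my concern visible: a bank 250bp or more into its buffer has very likely made losses, so the "net earnings" being rationed are probably zero anyway.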

Conclusion

Overall, I believe that the RBNZ is well within its rights to expect the banks it supervises to maintain a total level of loss absorbing capital of 16% or more. The enhanced role for capital buffers is also a welcome move.

The issue is whether relying almost entirely on CET1 capital is the right way to achieve this objective. This is however an issue that has been debated for many decades with no clear resolution. It will take some time to fully unpack the RBNZ argument and figure out how best to articulate why I disagree. In the interim, any feedback on the issues I have outlined above would be most welcome.

Tony

Loss absorption under bail-in

I recently did a post on a Discussion Paper setting out how APRA proposes to increase the Loss Absorption Capital (LAC) of Australian authorised deposit-taking institutions (ADIs). I came down on the side of this being a desirable (arguably necessary) enhancement of the Australian financial system but noted that the devil was in the detail. One of the issues discussed was the potential impact of the proposal on the statutory and contractual loss hierarchy that defines the sequence in which losses are absorbed by the capital of the bank in the first instance, and by more senior sources of funding in need.  

This post attempts to dig a bit deeper into this question to better understand how losses would be assigned under a bail-in scenario. It is a pretty technical point and possibly of limited interest but I wanted to make sure I had a good handle on how loss absorption plays out in the future. Read on or stop here.

Key points

  • The bail-in of selected, pre-positioned liabilities modifies the traditional loss hierarchy that applies in a liquidation scenario 
    • As a general rule, the absorption of losses is accelerated across all tiers of LAC
    • CET1 investors bear the loss via the dilution of their shareholdings as AT1 and Tier 2 are converted to common equity
    • AT1 investors risk not receiving distributions but otherwise the loss hierarchy between them and T2 investors seems to collapse once their holdings are converted into CET1
    • The only potential advantage to Tier 2 in these scenarios is that these instruments may only face partial conversion, but how beneficial that is depends on whether conversion to common equity offers a better chance to liquidate the holding than selling the Tier 2 instrument itself into what is likely to be a very illiquid market
  • This has been increasingly true since APRA introduced Point of Non-Viability (PONV) conversion triggers in 2013, and the instruments without this contractual feature progressively matured, but the proposed expansion of the pool of LAC takes us further down this path:
    • partly by virtue of making it easier for APRA to restructure bank capital structures without recourse to taxpayer support (i.e. the odds of bail-in being used in a future crisis are increased if the tool itself is more effective); and
    • partly by increasing the quantum of CET1 dilution that is the mechanism by which losses are allocated to the various tiers of LAC
  • Investors in the various capital tiers will obviously adjust the return they require for the risks they are asked to bear but we should ensure we all have a clear and consistent understanding of how the loss hierarchy is modified, and whether the resulting loss hierarchy is desirable (or indeed equitable)
  • The answer to this question turns in part on whether the outcomes for AT1 and T2 investors are better or worse than the market value they could achieve if they sold their investments prior to bail-in 

Loss Hierarchy – the simple version

Prudential Standard APS 111 (Capital Adequacy: Measurement of Capital) defines the order of seniority amongst the three tiers of prudential capital:

  • CET1 Capital “… rank behind the claims of depositors and other more senior creditors in the event of a winding up of the issuer ” (Para 19 (d))
  • AT1 Capital “… rank behind the claims of depositors and other more senior creditors in the event of a winding up of the issuer” (Para 28 (c))
  • Tier 2 Capital “represents, prior to any conversion to Common Equity Tier 1 … the most subordinated claim in liquidation of the issuer after Common Equity Tier 1 Capital instruments and Additional Tier 1 Capital instruments” (Attachment H, Para 1 (b))

APS 111 (Attachment F, Para 10) also explicitly allows AT1 instruments to 1) differentiate as to whether the instrument is required to convert or be written-off in the first instance, and 2) provide for a ranking under which individual AT1 instruments will be converted or written-off. The guidance on Tier 2 is less explicit on this point but there does not seem to be any fundamental reason why a bank could not introduce a similar ranking within the overall level of subordination. I am not aware of any issuer using this feature for either AT1 or T2.

If we ignore for a moment the impact of bail-in (either by conversion or write-off), the order in which losses are applied to the various sources of funding employed by a company follows this loss hierarchy:

  • Going Concern:
    • Common Equity Tier 1 (CET1)
    • Additional Tier 1 (AT1)
  • Insolvency – Liquidation or restructuring:
    • Tier 2 (T2)
    • Senior unsecured
    • Super senior
      • Covered bonds
      • Deposits
      • Insured deposits

CET1 is clearly on the front line of loss absorption (a perpetual commitment of funding with any returns subject to the issuer having profits to distribute and the Capital Conservation Ratio (CCR) not being a constraint). AT1 is subject to similar restrictions, though its relative seniority does offer some protection regarding the payment of regular distributions.

Traditionally, the claims the other forms of funding have on the issuer are only at risk in the event of the liquidation or restructuring of the company but bail-in modifies this traditional loss hierarchy.

What happens to the loss hierarchy under bail in?

First up, let’s define bail-in …

“A bail-in is the rescue of a financial institution that is on the brink of failure whereby creditors and depositors take a loss on their holdings. A bail-in is the opposite of a bailout, which involves the rescue of a financial institution by external parties, typically governments that use taxpayers’ money.” (Investopedia)

Investopedia’s definition above is useful, albeit somewhat generic. Never say never, but the loss hierarchy employed in Australia, combined with the fact that there are substantial layers of more junior creditors for big banks in particular, means that most Australian depositors (even the ones that do not have the benefit of deposit insurance) are pretty well insulated from bail-in risk. Not everyone would share my sanguine view on this question (i.e. the limited extent to which deposits might be bailed in) and some countries (NZ for example) quite explicitly choose to forego deposit insurance and move deposits up the loss hierarchy by ranking them equally with senior unsecured creditors.

The main point of bail-in is that existing funding is used to recapitalise the bank, as opposed to relying on an injection of new capital from outside which may or may not be forthcoming. It follows that pre-positioning sufficient layers of loss absorption, and making sure that investors understand what they have signed up for, is critical.

AT1 has always been exposed to the risk of its distributions being cut. This sounds good in theory for loss absorption but the size of these potential capital outflows is relatively immaterial in any real stress scenario. It could be argued that every dollar helps but my view is that the complexity and uncertainty introduced by making these distributions subject to the Capital Conservation Ratio (CCR) outweigh any contribution they might make to recapitalising the bank. The people who best understand this point are those who have had to calculate the CCR in a stress scenario (you have to get into the detail to understand it). The CCR issue could be addressed by simplifying the way it is calculated and I would argue that simplicity is always a desirable feature of any calculation that has to be employed under conditions of stress and uncertainty. The main point however is that it does very little to help recapitalise the bank because the heavy lifting in any really severe stress scenario depends on the capacity to convert a pool of pre-positioned, contingent capital into CET1.
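To illustrate why the CCR mechanics are more complicated than they look, here is a simplified sketch of the Basel III capital conservation schedule in Python. The 4.5% minimum and 2.5% buffer are the standard Basel parameters and the quartile payout percentages follow the Basel text, but this deliberately ignores D-SIB add-ons, the countercyclical buffer and APRA-specific detail:

```python
def max_payout_ratio(cet1_ratio, minimum=0.045, buffer=0.025):
    """Illustrative Basel III capital conservation schedule.

    The share of earnings a bank may distribute falls as its CET1
    ratio descends through the quartiles of the capital conservation
    buffer (CCB). Simplified: single buffer, no add-ons.
    """
    surplus = cet1_ratio - minimum
    if surplus <= 0:
        return 0.0      # below the minimum: no distributions at all
    if surplus >= buffer:
        return 1.0      # clear of the buffer: no constraint
    quartile = surplus / (buffer / 4)  # position within the CCB
    if quartile <= 1:
        return 0.0      # first (lowest) quartile: retain 100% of earnings
    elif quartile <= 2:
        return 0.2      # second quartile: retain 80%
    elif quartile <= 3:
        return 0.4      # third quartile: retain 60%
    else:
        return 0.6      # fourth quartile: retain 40%

# A bank with CET1 of 5.5% sits in the second quartile of the buffer
print(max_payout_ratio(0.055))  # 0.2
```

Even this toy version shows how the permitted payout jumps discontinuously as the ratio moves through the buffer quartiles; the real calculation layers additional buffers on top, which is where the complexity bites under stress.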

APRA has had explicit power to bail-in AT1 and T2 since the January 2013 version of APS 111 introduced Point of Non-Viability (PONV) conversion triggers – these enhanced powers do a few things:

  • The impact of losses is brought forward relative to what would apply in a conventional liquidation or restructuring process
  • For CET1 investors, this accelerated impact is delivered via the dilution of their shareholdings (and associated share price losses)
  • In theory, conversion shields the AT1 investors from loss absorption because they receive common equity equivalent in value to the book value of their claim on the issuer
  • In practice, it is less clear that the AT1 investors will be able to sell the shares at the conversion price or better, especially if market liquidity is adversely impacted by the events that called the viability of the issuer into question
  • The conversion challenge will be even greater to the extent that T2 investors are also bailed-in and seek to sell the shares they receive

Tier 2 will only be bailed-in after AT1 bail-in has been exhausted, as would be expected given its seniority in the loss hierarchy, but it is hard to see a bail-in scenario playing out where the conversion of AT1 alone is sufficient to restore the viability of the bank. AT1 is likely to represent not much more than the 1.5 percentage points of RWA required to meet minimum requirements but any crisis sufficient to threaten the viability of a bank is likely to require a much larger recapitalisation so full or partial conversion of T2 should be expected.
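To make the dilution mechanics concrete, here is a stylised example (all figures hypothetical) of how converting AT1 and Tier 2 at a distressed share price transfers losses to existing shareholders:

```python
# Illustrative bail-in conversion arithmetic (all figures hypothetical).
# AT1/T2 face value is converted into ordinary shares at a conversion
# price, diluting the existing CET1 holders.

shares_outstanding = 1_000          # millions of shares on issue
share_price_at_conversion = 2.00    # assumed distressed share price, $
at1_face = 1_500                    # $m of AT1 converted
t2_face = 3_000                     # $m of T2 converted

new_shares = (at1_face + t2_face) / share_price_at_conversion
dilution = new_shares / (shares_outstanding + new_shares)

print(f"new shares issued: {new_shares:.0f}m")            # 2250m
print(f"existing holders retain {1 - dilution:.0%} of the company")  # 31%
```

The lower the conversion price, the more shares are issued and the greater the dilution of existing CET1 holders, which is precisely the channel through which they absorb loss first.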

Partial conversion 

Attachment J – Para 6 provides that “Conversion or write-off need only occur to the extent necessary to enable APRA to conclude that the ADI is viable without further conversion or write-off”. Para 8 of the same attachment also specifies that “An ADI may provide for Additional Tier 1 Capital instruments to be converted or written off prior to any conversion or write-off of Tier 2 Capital instruments”.

This makes it reasonably clear that APRA will not automatically require all AT1 and Tier 2 to be converted or written-off but the basis on which partial conversion would be applied is not covered in the discussion paper. A pro rata approach (i.e. work out how much of the aggregate Tier 2 is required to be converted and then apply this ratio to each individual instrument) seems the simplest option and least open to legal challenge but it may be worth considering alternatives.

Converting the Tier 2 instruments closest to maturity in particular seems to offer some advantages over the pro rata approach:

  • It generates more CET1 capital than the Tier 2 foregone (because the Tier 2 capital value of an instrument is amortised in its final 5 years to maturity whereas the CET1 capital created by bail-in is the full face value of the instrument)
  • It defers the need to replace maturing Tier 2 capital and maximises the residual pool of LAC post bail-in.
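The amortisation point can be sketched as follows (assuming the standard straight-line 20% per annum recognition of Tier 2 in its final five years to maturity):

```python
def tier2_capital_value(face, years_to_maturity):
    """Tier 2 recognition amortises straight-line (20% per annum)
    over the final five years to maturity (stylised sketch of the
    Basel III / APS 111 amortisation rule)."""
    recognised_fraction = min(years_to_maturity, 5) / 5
    return face * recognised_fraction

face = 100
for yrs in (6, 4, 2, 1):
    t2 = tier2_capital_value(face, yrs)
    print(f"{yrs}y to maturity: T2 capital value {t2:.0f}, "
          f"CET1 created on conversion {face}")
```

An instrument two years from maturity counts as only 40 of Tier 2 capital per 100 of face value, yet converting it creates the full 100 of CET1, hence the attraction of converting the shortest-dated instruments first.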

What is the reason for the 20% floor that APS 111 imposes on the conversion price?

The transition to a bail-in regime may be an opportune time to revisit the rationale for placing a floor on the conversion price used to convert AT1 and Tier 2 into common equity. Attachments E and F contain an identically worded paragraph 8 that requires that the share price used to calculate the shares received on conversion cannot be less than 20% of the ordinary share price at the time the LAC instrument was issued. This floor arguably requires the share price to fall a long way before it has any effect but it is not clear what purpose is served by placing any limit on the extent to which common equity shareholders might see their holdings diluted in a non-viability scenario.
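A worked example of how the floor operates (all prices hypothetical, and simplified in that the actual conversion terms are typically based on a volume-weighted average price rather than a simple spot price):

```python
# Hypothetical numbers illustrating the 20 per cent conversion price floor.
issue_price = 30.00               # ordinary share price when the instrument was issued
floor_price = 0.20 * issue_price  # conversion price cannot fall below $6.00

def shares_on_conversion(face_value, current_price):
    """Shares received when a LAC instrument converts, subject to the floor."""
    conversion_price = max(current_price, floor_price)
    return face_value / conversion_price

# With the share price at $4.00, conversion occurs at the $6.00 floor:
shares = shares_on_conversion(1_000, 4.00)
value_received = shares * 4.00
print(f"{shares:.1f} shares worth ${value_received:.0f}")  # 166.7 shares worth $667
```

Without the floor the holder would receive 250 shares worth the full $1,000; with it, the shortfall is borne by the AT1/T2 investor rather than by further dilution of common shareholders, which is exactly the trade-off this section questions.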

Bail-in via write-off of AT1 or T2

I am concentrating on bail-in via conversion because that seems to be the default loss absorption contemplated by APS 111 and the one that is most consistent with the traditional loss hierarchy. LAC instruments can be designed with write-off as the primary loss absorption mechanism but it is not clear that any issuer would ever choose to go down that path as it would likely be more expensive versus bail-in via conversion. The write-off option seems to have been included as a failsafe in the event that conversion is not possible for whatever reason.

Conclusion

The loss absorption hierarchy under a bail-in based capital regime is a bit more complicated than the simple, progressive three tier hierarchy that would apply in a traditional liquidation scenario. I believe however that this added complexity is justified both by the enhanced level of financial safety and by the extent to which it addresses the advantage big banks have previously enjoyed by virtue of being Too Big To Fail.

The main concern is that AT1 and Tier 2 investors who underwrite the pre-positioning of this contingent source of new CET1 capital properly understand the risks. I must confess that I had to think it through and remain open to the possibility that I have missed something … if so tell me what I am missing.

Tony

 

Mortgage risk weights – fact check

It is frequently asserted that the major Australian banks have been “gifted” a substantially lower mortgage risk weight than the smaller banks. To be precise, the assertion is that the major banks are only required to hold capital based on a 25% risk weight versus 39% for smaller banks.

If you are not familiar with the arcane detail of bank capital adequacy, then you could be forgiven for concluding that this differential (small banks apparently required to hold 56% more capital for the same risk) is outrageous and unfair. While the risk weights for big banks are certainly lower on average than those required of small banks, I believe the difference in capital requirements is not as large as the simple comparison of risk weights suggests.

Bank capital requirements involve more than risk weights

To understand why this comparison of risk weights is misleading, it will be helpful to start with a quick primer on bank capital requirements. The topic can be hugely complex but, reduced to its essence, there are three elements that drive the amount of capital a bank holds:

  1. The risk weights applied to its assets
  2. The target capital ratio applied to those risk weighted assets
  3. Any capital deductions required when calculating the capital ratio

Problem 1 – Capital adequacy ratios differ

The comparison of capital requirements based on risk weights implicitly assumes that the regulator applies the same capital ratio requirement to all banks, but this is not the case. Big banks are targeting CET1 ratios based on the 10.5% Unquestionably Strong benchmark set by APRA while there is a greater range of practice amongst the smaller banks. Bendigo and Suncorp appear to be targeting a CET1 ratio in the range of 8.5 to 9.0% while the smaller of the small banks appear to be targeting CET1 ratios materially higher (say 15% or more).

If we confine the comparison to the alleged disadvantage suffered by Bendigo and Suncorp, then the higher risk weights they are required to apply to residential mortgages are substantially offset by the lower CET1 ratios they target (the 56% difference in capital required shrinks to something in the order of 30% once you adjust for the difference in target CET1 ratios).
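The arithmetic behind that adjustment is simple enough to check; the target ratios below are my assumptions, per the discussion above:

```python
# Capital required per $100 of mortgage lending (illustrative):
# required capital = exposure x risk weight x target CET1 ratio

def capital_per_100(risk_weight, target_ratio):
    return 100 * risk_weight * target_ratio

big   = capital_per_100(0.25, 0.105)   # major bank: 25% RW, 10.5% target ratio
small = capital_per_100(0.39, 0.0875)  # e.g. Bendigo/Suncorp: 39% RW, ~8.75% target

print(big, small)       # 2.625 vs 3.4125 dollars of CET1 per $100 lent
print(small / big - 1)  # ~0.30, not the headline 0.56 (= 39/25 - 1)
```

So the capital gap per dollar of mortgage lending is roughly 30 per cent once target ratios are factored in, not the 56 per cent implied by comparing risk weights alone.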

Broadening the comparison to the smaller banks gets even more interesting. At face value, the much higher CET1 ratios they appear to target suggest that they are doubly penalised in the required capital comparison, but you have to ask why they are targeting such high CET1 ratios. One possible explanation is that their small, less diversified mortgage portfolios are in fact more risky than the more diversified exposures maintained by their larger competitors.

Problem 2 – You have to include capital deductions

I recognise this is quite technical but, in addition to the capital tied to the risk weight, the big banks are also required to hold capital for a capital deduction linked to the difference between their loan loss provisions and a regulatory capital value called “Regulatory Expected Loss”. This capital deduction increases the effective risk weight. The exact amount varies from bank to bank but I believe it increases the effective capital requirement by 10-12% (i.e. an effective RW closer to 28%). My understanding is that small banks are not required to make the same capital deduction.
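A quick sketch of the effect (the 11 per cent uplift is my midpoint assumption for the 10-12 per cent range):

```python
# Effective risk weight once the Regulatory Expected Loss capital
# deduction is included (figures illustrative).
base_rw = 0.25   # nominal major-bank mortgage risk weight
uplift = 0.11    # assumed midpoint of the 10-12% increase in effective capital

effective_rw = base_rw * (1 + uplift)
print(f"{effective_rw:.1%}")  # 27.8%, i.e. closer to 28% than to the headline 25%
```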

Problem 3 – The Standardised risk weights for residential mortgages seem set to change

A complete discussion of the RW difference should also take account of the fact that APRA has proposed to introduce lower RW categories for the smaller banks, such that their average RW may be lower than 39% in the future. I don’t know what the average RW for small banks would be under these new risk weights but that is a question you could put to the banks who use the 39% figure without acknowledging this fact.

Problem 4 – The risk of a mortgage depends on the portfolio not the individual loan

The statement that a loan is the same risk irrespective of whether it is written by a big bank or small bank sounds intuitively logical but is not correct. The risk of a loan can only be understood when it is considered as part of the portfolio the bank holds. Small banks will typically be less diversified than a big bank.

Problem 5 – What about the capital required for Interest Rate Risk in the Banking Book (IRRBB)?

I don’t have sufficient data to assess how significant this is, but intuitively I would expect that the capital that the major banks are required to hold for IRRBB will further narrow the effective difference between the risk weights applied to residential mortgages.

Summing up

My aim in this post was not to defend the big banks but rather to try to contribute some of the knowledge I have acquired working in this area to what I think is an important but misunderstood question. In the interests of full disclosure, I have worked for one of the large Australian banks and may continue to do work for them in the future.

On a pure risk basis, it seems to me that the loan portfolio of a large bank will tend to be more diversified, and hence lower risk, than that of a smaller bank. It is not a “gift” for risk weights to reflect this.

There is a legitimate debate to be had regarding whether small banks should be given (gifted?) an advantage that helps them compete against the big banks. That debate however should start with a proper understanding of the facts about how much advantage the large banks really have and the extent to which their lower risk weights reflect lower risk.

If you disagree tell me what I am missing …

Capital adequacy – an option to add transparency and flexibility into the “Unquestionably Strong” mix

Two of my earlier posts (here and here) discussed the potential to improve the capital adequacy framework by revisiting the calibration and operation of regulatory capital buffers. Some of the issues discussed in those posts are touched on in a discussion paper APRA has released titled “Improving the transparency, comparability and flexibility of the ADI capital framework”.

APRA is open to alternatives but sets out two options for discussion

In APRA’s words, the DP outlines

“… options to modify the ADI capital framework to improve transparency and comparability of reported capital ratios. The main conceptual approaches APRA is considering and seeking feedback on are:

  • developing more consistent disclosures without modifying the underlying capital framework; and

  • modifying the capital framework by adjusting the methodology for calculating capital ratios.”

The first approach – “Consistent disclosure” – seems to be a beefed-up version of the status quo in which APRA gets more directly involved in the comparability process by adding its imprimatur to the internationally harmonised ratios some Australian banks currently choose to disclose as an additional informal measure of capital strength.

“Under this approach, ADIs would continue to determine regulatory capital ratios using APRA’s definitions of capital and RWA. However, APRA would also specify a methodology for ADIs to determine certain adjustments to capital and RWA that could be used for disclosure (Pillar 3) purposes. As noted above, the methodology would focus on aspects of relative conservatism that are material in size and able to be calculated simply and objectively.”

APRA argues that “The supplementary disclosure would allow all stakeholders to better assess the capital strength of an ADI on a more comparable basis. However, it would result in two APRA-endorsed capital ratios: an APRA regulatory capital ratio to be compared against minimum requirements, and an additional disclosure-only capital ratio for, in particular, international comparison.”

The second approach – “Capital ratio adjustments” – would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA.

The DP explains that this “… alternative approach would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA. This would involve removing certain aspects of relative conservatism from ADIs’ capital ratio calculations and lifting minimum regulatory capital ratio requirements in tandem. This increase in regulatory capital ratio requirements could be in the form of a transparent adjustment to minimum capital ratio requirements—for the purposes of this paper, such an adjustment is termed the ‘APRA Overlay Adjustment’.”

“To maintain overall capital adequacy, the APRA Overlay Adjustment would need to be calculated such that the total dollar amount of Prudential Capital Requirement (PCR) and Capital Conservation Buffer (CCB) would be the same as that required if these measures were not adopted. In other words, the risk-based capital requirements of ADIs would be unchanged in absolute dollar terms, maintaining financial safety, but adjustments to the numerator and the denominator of the capital ratio to be more internationally comparable would increase reported capital ratios.”
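The overlay arithmetic can be sketched with hypothetical numbers:

```python
# Sketch of the 'APRA Overlay Adjustment' arithmetic (numbers hypothetical).
# The dollar capital requirement is held constant; only the ratio
# presentation changes as the RWA denominator shrinks.

rwa_apra = 400.0          # $bn, RWA under APRA's conservative definitions
rwa_harmonised = 360.0    # $bn, after removing relative conservatism
req_ratio_apra = 0.105    # PCR + CCB under current definitions

dollar_requirement = rwa_apra * req_ratio_apra           # $42bn, unchanged
req_ratio_harmonised = dollar_requirement / rwa_harmonised
overlay = req_ratio_harmonised - req_ratio_apra

print(f"harmonised requirement: {req_ratio_harmonised:.2%}")  # ~11.67%
print(f"overlay adjustment: {overlay:.2%}")                   # ~1.17%
```

The reported ratio rises because the denominator shrinks, while the dollar amount of required capital is held constant by the overlay.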

APRA clarify that

“These options are not mutually exclusive, and there is potential for both approaches to be adopted and applied in different areas.”

Approach 2 has some clear advantages …

It would make the “unquestionably strong” capital ratios more directly comparable with international peers, thereby reducing the potential for the perception of this strength to be obscured or misunderstood.

“Perception” is the key word here. It matters that the strength of Australian banks is simple, transparent and evident, rather than something observers can only appreciate after working through a sequence of relatively obscure and complex adjustments. More importantly, perception matters most when the system is under stress and people do not have the time, or the inclination, to look beyond the reported numbers.

The adjusted capital ratio approach also provides opportunity to increase the flexibility of the ADI capital framework in times of stress but only to the extent to which the Overlay Adjustment is applied to the capital buffer, rather than increasing the minimum capital requirements. Higher minimum requirements would do nothing to enhance flexibility and may even be a backward step.

I believe a non-zero baseline for the CCyB would also enhance the flexibility of the capital framework by virtue of the fact that it improves the odds that the banks (and APRA) have a flexible buffer in place before it is needed. This opportunity for enhanced flexibility is an option under both approaches so long as the Unquestionably Strong benchmark maintains a material surplus over the Capital Conservation Buffer, as it currently does.

But also some challenges …

APRA notes that the Adjusted Capital Ratio approach:

  • May significantly increase operational complexity for ADIs by virtue of the fact that the application of the APRA Overlay Adjustment would result in variable capital ratio requirements,
  • Potentially results in variable minimum capital requirements which introduces complexity in analysing capital buffers and may undermine the desired transparency, and
  • Reduces the dollar value of the 5.125 per cent (of RWA) loss absorption trigger point.

Do the advantages of the Adjusted Capital Ratio approach outweigh the challenges?

The short answer, I think, is yes … albeit with some qualifications.

So far as I can see, the added complexity only enters the discussion to the extent that some of the APRA Overlay Adjustment is applied to increase the minimum capital requirement. Most, if not all, of the operational complexity is avoided if the “Overlay Adjustment” is confined to increasing the size of the capital buffer.

Conversely, the benefits of increased responsiveness (or risk sensitivity) and flexibility lie in an increased capital buffer.

It follows then that the best way to pursue this approach is for any harmonised adjustments to the reported capital ratio to be confined to a higher CCB. This raises the question of whether all of the Overlay Adjustment should be applied to the capital buffer. I address that question in my responses below to some of the questions APRA has posed to solicit industry feedback.

One issue not covered in the Discussion Paper in any detail is that the capital ratios under Approach 2 will be more sensitive to any changes in the numerator. This is a simple mathematical consequence of RWA being lower if more harmonised measures are adopted. I do not see this as a problem but the heightened risk sensitivity of the framework needs to be clearly understood beforehand to minimise the potential for larger changes in capital ratios to be misunderstood. A more risk sensitive capital ratio may even be an advantage. This may not be obvious but there is a body of research which suggests a more responsive, more volatile, measure of capital adequacy can be beneficial to the extent that it prompts greater risk awareness on the part of bank management and external stakeholders. Greg Ip’s book “Foolproof” offers an introduction to some of this research; a simple example illustrating the point is the way that the benefits of improved braking in modern cars are offset to some extent by people driving faster.
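The heightened sensitivity is easy to demonstrate: the same dollar loss of capital moves the reported ratio further when the RWA denominator is smaller (figures hypothetical):

```python
# Sensitivity of the reported capital ratio to a $1bn capital loss
# under a larger (APRA) vs smaller (harmonised) RWA denominator.
capital = 42.0   # $bn of CET1
loss = 1.0       # $bn loss hitting the numerator

for rwa in (400.0, 360.0):   # APRA-conservative vs harmonised RWA, $bn
    before = capital / rwa
    after = (capital - loss) / rwa
    print(f"RWA {rwa:.0f}: {before:.2%} -> {after:.2%} "
          f"(fall of {(before - after) * 10_000:.0f}bps)")
```

The $1bn loss costs 25bps of ratio on the larger denominator but roughly 28bps on the smaller one; the same underlying event simply reads as a bigger move.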

APRA concludes its discussion paper with some direct questions.

There are 8 questions in all but in the context of this post I will have a go at addressing 3 of them, questions 2, 7 and 8.

Question 2: If APRA were to apply a combination of Approach 1 and Approach 2, which aspects of relative conservatism are best suited to be treated under Approach 2?

If you accept the argument that the minimum capital requirement should continue to be a set value (i.e. not subject to periodic adjustment), then the aspects of relative conservatism best suited to Approach 2 are those which can reasonably be assigned to an increase in, and regular adjustment of, one or more of the capital buffers.

Running through the list of adjustments currently applied to generate the internationally harmonised capital ratios, we can distinguish three broad categories of APRA conservatism:

  1. The extra credit risk related losses a bank might expect to experience under a very severe recession or financial crisis style scenario, but not necessarily a gone-concern scenario where losses extend into the tail of the loss distribution
  2. Assets whose value depends on the ADI being a going concern and consequently are less certain to be realisable if the bank is in liquidation or has otherwise reached a point of non-viability
  3. Capital deductions intended to avoid “double counting” capital invested outside the ADI

There are very few areas of black and white in the response to this question, but the first group are the items of APRA conservatism that I think have the clearest claim to be included in the capital buffer. These reflect potential loss scenarios that are conservative but still within the domain of plausibly severe downturns in the business cycle; this would encompass the following capital ratio adjustments:

  • the 20 per cent loss given default (LGD) portfolio constraint required for residential mortgage exposures;
  • the LGD parameter for unsecured non-retail exposures;
  • credit conversion factors (CCFs) for undrawn non-retail commitments;
  • use of supervisory slotting and the scaling factor for specialised lending;
  • risk weights for other retail exposures covered by the standardised approach to credit risk; and
  • the exchange rate used to convert Euro-denominated thresholds in the Basel capital framework into Australian dollars.

The second category are assets which have a value if the bank is a going concern but cannot necessarily be relied upon in non-viability scenarios; i.e.

  • deferred tax assets arising from timing differences;
  • capitalised expenses and transaction costs; and
  • the capital requirement applied by APRA for IRRBB (I am open to arguments that I am being too harsh on IRRBB).

The third category captures capital that is committed to risks where the bank is taking a first loss exposure, including:

  • investments in other financial institutions;
  • holdings of subordinated tranches of securitisations; and
  • investments in commercial entities.

Another way to explore this question is to map these three categories to the traditional graphic expression of a bank loss distribution and establish whether they are expected to lie:

  • closer to the middle of the loss distribution (sometimes framed as a 1 in 25 year downturn or the kinds of losses we expect in a severe downturn)
  • or closer to the “tail” of the loss distribution (typically expressed as a 1 in 1000 year loss in regulatory capital terms).

To be clear, I am not seeking to ascribe any level of precision to these statistical probabilities; simply to distinguish between the relative likelihood of the items of conservatism that APRA has embedded in its current measure of capital adequacy. These three categories tend to be treated as equivalent under the current approach and enhanced disclosure per Approach 1 will do nothing to address this conflation of risks.

Question 7: Would increasing the size of capital buffers (either by increasing the CCB or by setting a non-zero baseline CCyB) relative to PCR appropriately balance capital strength with financial stability through the cycle?

I have advocated the benefits of a non-zero baseline CCyB in previous posts. One of these posts focused on the approach adopted by the Bank of England, where I identified two advantages.

Firstly, it directly addresses the problem of failing to detect/predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system however retains its capacity to surprise even its most expert and informed observers so I believe it is better to have the foundations of a usable counter cyclical capital buffer in place as soon as possible after the post crisis repair phase is concluded rather than trying to predict when it might be required.

The BOE approach still monitors a range of core indicators for the CCyB grouped into three categories.

  • The first category includes measures of ‘non-bank balance sheet stretch’, capturing leverage in the broader economy and in the private non-financial (ie household and corporate) sector specifically.
  • The second category includes measures of ‘conditions and terms in markets’, which capture borrowing terms on new lending and investor risk appetite more broadly.
  • The third category includes measures of ‘bank balance sheet stretch’, which capture leverage and maturity/liquidity transformation in the banking system.

However the BOE implicitly accepts that it can’t predict the future so it substitutes a simple, pragmatic and error resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, is simpler to administer and less prone to error than the standard BCBS approach. The BOE might still miss the turning point but it has a head start on the problem if it does.

The BOE also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the “annual cyclical scenario”

The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

I have discussed the BOE approach at length but the Canadian supervisor has also introduced some interesting innovations in the way that it uses a capital buffer to address the systemic risk of large banks that are worth considering as part of this review.

The second reason I favour a non-zero baseline is that I believe it is likely to result in a more “usable” buffer once risk (not just systemic risk) crystallises and losses start to escalate.

In theory, the standard capital conservation buffer (CCB) introduced under Basel III can absorb any unexpected increase in losses and allow banks the time to progressively rebuild the buffer when economic conditions improve.

In practice, the upper boundary of the CCB acts as a de facto minimum requirement, such that banks face strong market pressure to rebuild the buffer immediately, potentially at material cost to shareholders.

There are no guarantees for what happens to banking systems under stress, but a flexible buffer that is sensitive to the state of the credit cycle is, I think, far more fit for purpose.

It is important to note that a non-zero CCyB is an option under both approaches. There is potentially enough surplus capital in the Unquestionably Strong calibration for a non-zero CCyB to be introduced without requiring banks to raise any more capital. This would be so under either of the approaches that APRA has outlined.

So a larger buffer would be desirable from the perspective of increased comparability and transparency, but the advantages of a non-zero CCyB could also be pursued under the Unquestionably Strong status quo or Approach 1.

Question 8: What may be some of the potential impacts if APRA increases the prescribed loss absorption trigger point above 5.125 per cent of RWA?

The rationale for increasing the PONV trigger is that APRA believes it is important to preserve the value of the trigger in dollar terms.

I can see that it is important to have a PONV trigger well before a bank reaches the point of insolvency (i.e. where liabilities exceed assets).

It is less clear that the reduction in the dollar value of the trigger point is sufficiently material to matter.

What really matters is the amount of contingent capital available to be converted into common equity if the PONV conversion trigger is pulled.

In the absence of this source of new capital, the fact that a bank has X billion dollars more or less of book equity (according to the financial accounts) at the point of deemed non-viability is arguably irrelevant to whether it remains a going concern.

I am also pretty sure that we do not want the operational complexity associated with a PONV trigger that moves around over time as a result of seeking to compensate for the impact of the Overlay Adjustment on capital deductions and RWA.

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic, the normal distribution, is obviously a bit technical, but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
Haldane and Nelson highlight a semantic shift in the 1870s when the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe.
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed.
Having become an article of faith, it was deemed inappropriate to question the faith.
As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”

Notwithstanding its widespread use today, in Haldane and Nelson’s account economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised, and Haldane and Nelson cite the rise of probability-based quantum physics over old-school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”
“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken.”
This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does however raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle, but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn are hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can for example talk about moderate downturns say every 7-10 years, more severe recessions every 25-30 years and a 75 year cycle for financial crises. However the data is obviously sparse so it does not allow the kind of precision that is part and parcel of normally distributed events.
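The instability of sample means under a power law is easy to demonstrate by simulation. The sketch below (my own illustration, not from the paper) draws from a Pareto distribution with tail index 1.5, for which the mean exists but the variance is infinite, and compares it with a thin-tailed sample of the same size and the same true mean. The telling statistic is how much of the total is contributed by the single largest observation:

```python
import random
import statistics

def max_share(values):
    """Share of the sample total contributed by the single largest draw."""
    return max(values) / sum(values)

random.seed(42)
n = 50_000

# Pareto(alpha = 1.5, x_min = 1) via inverse transform sampling:
# the mean exists (= 3) but the variance is infinite.
alpha = 1.5
pareto = [1.0 / (1.0 - random.random()) ** (1.0 / alpha) for _ in range(n)]

# A thin-tailed comparison sample with the same true mean of 3.
normal = [abs(random.gauss(3.0, 1.0)) for _ in range(n)]

print(f"Pareto sample mean {statistics.mean(pareto):6.2f}, "
      f"largest draw is {max_share(pareto):.2%} of the total")
print(f"Normal sample mean {statistics.mean(normal):6.2f}, "
      f"largest draw is {max_share(normal):.4%} of the total")
```

In the normal sample no single observation matters; in the Pareto sample one loan-loss-style outlier can dominate the average, which is why a mean estimated in the benign part of the cycle tells us so little.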

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new, but I found reading it useful for reflecting on the practical implications. One of the items they called out is competition, which is typically assumed by economists to be a wholly benign force. This is generally true, but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains, but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk), thereby setting the system up to be prone to a self-organised critical state. Risk based capital requirements can also contribute to self-organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.
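Self-organised criticality is classically illustrated with the Bak-Tang-Wiesenfeld sandpile model (my own sketch below, not from the paper): grains are dropped one at a time, any cell holding four or more grains topples and passes one grain to each neighbour, and the resulting avalanche sizes have no “typical” scale, following a power law rather than a bell curve. Grid size and grain count are illustrative only:

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile on a size x size grid.

    Drops `grains` grains at random cells; a cell with 4+ grains topples,
    sending one grain to each neighbour (grains fall off the edges).
    Returns the avalanche size (number of topples) for each grain.
    """
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(size), random.randrange(size)
        grid[i][j] += 1
        topples = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue  # already relaxed by an earlier topple
            grid[x][y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(topples)
    return sizes

sizes = sandpile_avalanches()
print(f"avalanches: median {sorted(sizes)[len(sizes)//2]}, max {max(sizes)}")
```

The analogy to the banking point above is that each individually benign addition (one more grain, one more increment of leverage) quietly moves the system toward a state where a small shock can trigger an arbitrarily large cascade.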

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and note the distinguished company of economists who have also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming that the uncertainty paradigm was embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts
Haldane and Nelson note that these types of models are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with “agent based modelling”. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling but a few things impede their wider use. One is that the models don’t offer the kinds of precision that make the DSGE and VaR models so attractive. The other is that they require a large investment of time to build and most practitioners are fully committed just to keeping the existing models going. Finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
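To make the idea concrete, the sketch below implements one of the simplest agent-based models, Kirman’s herding model (the Kirman reference above builds on this line of work; the formulation and parameter values here are my own illustrative choices). Each agent holds one of two views and switches either spontaneously, with a small probability, or by recruitment when it meets an agent of the opposite view. With weak spontaneous switching the population spends most of its time herded near one extreme or the other rather than at the “equilibrium” 50/50 mix:

```python
import random

def kirman_herding(n_agents=100, steps=100_000, epsilon=0.002,
                   recruit=0.7, seed=0):
    """Kirman-style herding model with two opinions.

    k counts how many of n_agents hold view A. Each step one agent is
    picked at random; it flips spontaneously with probability epsilon,
    or is recruited by a randomly met agent of the opposite view with
    probability recruit. Returns the trajectory of the fraction in A.
    """
    random.seed(seed)
    k = n_agents // 2          # start at the 50/50 mix
    path = []
    for _ in range(steps):
        agent_is_a = random.random() < k / n_agents
        opposite = (n_agents - k) if agent_is_a else k
        p_switch = epsilon + recruit * opposite / (n_agents - 1)
        if random.random() < p_switch:
            k += -1 if agent_is_a else 1
        path.append(k / n_agents)
    return path

path = kirman_herding()
extreme = sum(1 for f in path if f < 0.2 or f > 0.8) / len(path)
middle = sum(1 for f in path if 0.4 <= f <= 0.6) / len(path)
print(f"time near the extremes: {extreme:.0%}, time near 50/50: {middle:.0%}")
```

Even this toy version shows the hallmark ABM features listed above: no representative agent, no single stationary equilibrium, and regime shifts driven by interaction rather than external shocks.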

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
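Markowitz’s preference for the 1/N rule has a simple mechanical explanation: mean-variance weights inherit every error in the estimated inputs, while equal weights cannot. The sketch below (entirely my own illustration; all numbers are invented) assumes three independent assets with identical true means and variances, so the genuinely optimal portfolio is exactly equal-weighted, and shows how far weights fitted to a short noisy sample drift from that optimum:

```python
import random
import statistics

random.seed(7)
n_assets, n_obs = 3, 24          # three assets, two years of monthly data
true_mean, true_sd = 0.01, 0.05

# Every asset has the same true mean and variance and returns are
# independent, so the truly optimal portfolio is exactly equal-weighted.
returns = [[random.gauss(true_mean, true_sd) for _ in range(n_obs)]
           for _ in range(n_assets)]
est_means = [statistics.mean(r) for r in returns]

# Mean-variance weights for independent assets: w_i = mu_i / (gamma * var).
# gamma is chosen so that weights based on the TRUE means are 1/3 each.
gamma = true_mean / (true_sd ** 2 * (1.0 / n_assets))
mv_weights = [m / (gamma * true_sd ** 2) for m in est_means]
eq_weights = [1.0 / n_assets] * n_assets

for name, w in (("mean-variance", mv_weights), ("equal-weight", eq_weights)):
    drift = max(abs(x - 1.0 / n_assets) for x in w)
    print(f"{name:13s} weights {[round(x, 3) for x in w]} "
          f"max drift from optimum {drift:.3f}")
```

The equal-weight rule is immune to estimation error by construction; the mean-variance weights tilt toward whichever asset happened to look best in a sample too short to distinguish skill from noise.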
Structural separation is another simple approach to the problem of making the system more resilient:
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over-engineering:
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system, and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio is under stress.
In theory the calculation should be simple and intuitive but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
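The simple, intuitive part is the conservation schedule itself. A sketch based on the Basel III quartile table (my own rendering; it assumes a 4.5 per cent CET1 minimum and a 2.5 per cent CCB, and deliberately ignores the CCyB and D-SIB add-ons that widen the buffer, and hence shift these bands, in practice):

```python
def min_earnings_retention(cet1_ratio: float) -> float:
    """Minimum share of earnings that must be retained (not distributed)
    under the Basel III capital conservation schedule.

    Assumes a 4.5% CET1 minimum plus a 2.5% capital conservation buffer,
    so the buffer spans 4.5%-7.0% in four equal quartiles. CCyB and
    D-SIB add-ons, which widen the buffer in practice, are ignored.
    """
    minimum, buffer = 0.045, 0.025
    if cet1_ratio <= minimum:
        return 1.0            # below the minimum: no distributions at all
    quartile = buffer / 4     # 0.625 percentage points per quartile
    excess = cet1_ratio - minimum
    if excess <= 1 * quartile:
        return 1.0            # bottom quartile of the buffer: retain 100%
    if excess <= 2 * quartile:
        return 0.8
    if excess <= 3 * quartile:
        return 0.6
    if excess <= 4 * quartile:
        return 0.4
    return 0.0                # buffer fully intact: no constraint
```

The complexity referred to above comes from everything around this schedule in a live stress scenario (moving add-ons, the definition of distributable earnings, and the feedback between retained earnings and the ratio itself), not from the table.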

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile” as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

Looking under the hood – The IRB formula

This post is irredeemably technical so stop here if that is not your interest. If you need to understand some of the mechanics of the formula used to calculate credit risk weighted assets under the advanced Internal Ratings Based (IRB) approach, the BCBS published a paper in 2005 which explains:

  • the economic foundations of the approach
  • the underlying mathematical model and its input parameters

While a lot has changed as a result of Basel III, the models underlying the calculation of IRB capital requirements are still based on the core principles agreed under Basel II that are explained in this BCBS paper.

The notes in the linked page below mostly summarise the July 2005 paper with some emphasis (bolded text) and comments (in italics) that I have added. The paper is a bit technical but worth reading if you want to understand the original thinking behind the Basel II risk weights for credit risk.

I initially found the paper useful for revisiting the foundation assumptions of the IRB framework as background to considering the regulatory treatment of Expected Loss as banks transition to IFRS9. The background on how the risk weight was initially intended to cover both Expected and Unexpected Loss, but was revised so that capital was only required to cover Unexpected Loss, is especially useful when considering the interaction of loan loss provisioning with capital requirements.
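The EL/UL point is visible directly in the formula. The sketch below implements the Basel II IRB risk-weight function for corporate exposures as set out in the 2005 paper (without the SME firm-size adjustment; the stdlib `statistics.NormalDist` stands in for the normal CDF and its inverse). Note the `pd * lgd` subtraction, which is exactly where expected loss was carved out so that capital covers unexpected loss only:

```python
from math import exp, log, sqrt
from statistics import NormalDist

_N = NormalDist()  # standard normal: _N.cdf is Phi, _N.inv_cdf its inverse

def irb_corporate_rw(pd: float, lgd: float, maturity: float = 2.5) -> float:
    """Basel II IRB risk weight for corporate exposures (no SME adjustment).

    pd       one-year probability of default (0.01 means 1 per cent)
    lgd      loss given default, as a fraction of exposure
    maturity effective maturity in years

    Returns the risk weight as a fraction of exposure (RWA = RW x EAD).
    """
    # Supervisory asset correlation: slides from 0.24 down to 0.12 as PD rises.
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)

    # Conditional default rate at the 99.9th percentile of the single
    # systematic risk factor (the Vasicek/ASRF model).
    cond_pd = _N.cdf((_N.inv_cdf(pd) + sqrt(r) * _N.inv_cdf(0.999))
                     / sqrt(1 - r))

    # Capital covers UNEXPECTED loss only: expected loss (pd * lgd) is
    # subtracted on the basis that provisions are meant to cover it.
    b = (0.11852 - 0.05478 * log(pd)) ** 2  # maturity adjustment factor
    k = (lgd * cond_pd - pd * lgd) * (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return 12.5 * k  # capital requirement scaled to a risk weight

# PD = 1%, LGD = 45%, M = 2.5 reproduces the 92.32% risk weight shown in
# the BCBS illustrative examples.
print(f"{irb_corporate_rw(0.01, 0.45):.2%}")
```

Deleting the `- pd * lgd` term recovers the original Basel II proposal in which the risk weight covered EL and UL together, which is why the provisioning interaction discussed above matters so much under IFRS9.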

Reading the BCBS paper has also been useful for thinking through a range of related issues including:

  • The rationale for, and impact of, prudential conservatism in setting the risk parameters used in the IRB formula
  • The cyclicality of a risk sensitive capital requirement (and the potential for pro-cyclicality) and what might be done to mitigate the risk of pro-cyclical impacts on the economy

If you have read this far, then my summary of the BCBS paper and my comments/observations can be found here (and thank you).

I am not a credit risk model expert, so the summary of the paper and my comments must be read with that in mind. I did this to help me think through some of the issues with bank capital adequacy. Hopefully others will find the notes useful. If you see something wrong or something you disagree with then let me know.