Why the real economy needs a prudential authority too

Isabella Kaminska (FT Alphaville) offers an interesting perspective on the ways in which prudential initiatives in the areas of capital, liquidity and bail-in that have strengthened the banking sector post-GFC might be applied to the “real economy”.

The global financial crisis taught us that laissez-faire finance, when left to its own devices, tends to encourage extreme fragility by under capitalising the system for efficiency’s sake and making it far more systemically interdependent.

Pre-2008, banks operated on the thinnest of capital layers while taking extreme liquidity risk due to the presumption that wholesale liquidity markets would always be open and available to them. It was in this way that they saved on capital and liquidity costs and increased their return on equity.  

Regulatory responses to the crisis understandably focused on boosting resilience by hiking capital buffers, liquidity ratios and also by introducing new types of loss absorbing structures. While it’s still too early to claim regulatory efforts were a definitive success, it does seem by and large the measures have worked to stymie a greater financial crisis this time around.

But what the 2008 crisis response may have overlooked is that bolstering banks to protect the economy means very little if the underlying real economy remains as thinly spread and interconnected as the financial sector always used to be.

The assessment that these banking initiatives “means very little” is possibly overstating the case. The problems we are facing today would be an order of magnitude greater if the banking system were not able to play its part in the solution.

The core point, however, is I think absolutely on the money: the focus on efficiency comes at the expense of resilience. More importantly, a free market system, populated by economic agents pursuing their own interests over relatively short time horizons, does not seem to be well adapted for dealing with this problem on its own. The lessons prudential regulators learned about the limits of efficient markets and market discipline also apply in the real world.

Isabella looks at the way prudential capital and liquidity requirements operate in banking and draws analogies in the real economy. With respect to liquidity, she notes for example,

“… the just-in-time supply chain system can be viewed as the real economy’s version of a fractional reserve system, with reserves substitutable for inventories.  

Meanwhile, the real economy’s presumption that additional inventories can be sourced from third party wholesale suppliers at a price as and when demand dictates, is equivalent to the banking sector’s presumption that liquidity can always be sourced from wholesale markets.

Though there is obviously one important difference.

Unlike the banking sector, the real economy has no lender of last resort that can magically conjure up more intensive care beds or toilet paper at the stroke of a keyboard when runs on such resources manifest unexpectedly.  

So what are our options? Companies could increase their inventories (analogous to holding more liquid assets) or build excess capacity (analogous to building up a capital buffer) but it is very difficult for companies acting independently to do this if their competitors choose the short-term, cost-efficient play and undercut them on price. The Prisoner’s Dilemma trumps market discipline and playing the long game.
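A stylised payoff matrix (all numbers invented for illustration) makes the dilemma concrete: whatever the rival does, the lean, low-inventory strategy pays more, so both firms end up fragile even though mutual resilience would leave them jointly better off.

```python
# Stylised two-firm resilience game; payoffs are invented annual profits.
# "lean" = minimal inventories/capacity, "resilient" = carry buffers.
payoffs = {
    ("lean", "lean"):           (6, 6),   # both cut costs; both fragile in a crisis
    ("lean", "resilient"):      (10, 2),  # the lean firm undercuts the resilient one
    ("resilient", "lean"):      (2, 10),
    ("resilient", "resilient"): (8, 8),   # jointly better, but not an equilibrium
}

for mine in ("lean", "resilient"):
    vs_lean = payoffs[(mine, "lean")][0]
    vs_res  = payoffs[(mine, "resilient")][0]
    # "lean" dominates: it pays more against either rival choice.
    print(f"{mine:9s}: {vs_lean} if rival plays lean, {vs_res} if rival plays resilient")
```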

Isabella frames the problem as follows:

Short-term supply shortages can only be responded to with real world manufacturing capability, which itself is constrained by physical availability. To that extent crisis responses can only really take two forms: 1) immediate investment in the build-up of new manufacturing capacity that can address the specific system shortages or, 2) the temporary reallocation of existing resources (with some adaptation cost) to new production purposes.

The problem with the first option is that it is not necessarily time efficient. Not every country has the capability to build two new hospitals from scratch in just 10 days. Nor the capacity to create unexpected supply just-in-time to deal with the problem.

New investment may not be economically optimal either. What happens to those hospitals when the crisis abates? Do they stand empty and idle? Do they get repurposed? Who will fund their maintenance and upkeep if they go unused? And at what cost to other vital services and goods?

Isabella’s proposal …

That leaves the reallocation of existing assets as the only sensible and economically efficient mitigatory response to surge-demand related crises like pandemic flu. But it’s clear that on that front we can be smarter about how we anticipate and prepare for such reallocation shocks. An obvious thing to do is to take a leaf out of banking regulators’ books, especially with regards to bail-inable capital, capital ratios and liquidity profiles.

Isabella offers two examples to illustrate her argument; one is power companies and the other is the health system.

She notes that power utilities manage demand-surge or supply-shock risk with interruptible contracts to industrial clients. She argues that these contracts equate to a type of bail-inable capital buffer, since the contracts allow utilities to temporarily suspend services to clients (at their cost) if and when critical needs are triggered elsewhere and supplies must be diverted.

I think she has a good point about the value of real options but I am less sure that bail-in is the right analogy. Bail-in is a permanent adjustment to the capital structure in which debt is converted to equity or written off. Preferably the former in order to maintain the loss hierarchy that would otherwise apply in liquidation. A contract that enables a temporary adjustment to expenses is a valuable option but not really a bail-in style option.

What she is identifying in this power utility example is closer to a company buying real options from its customers: options that reduce operating leverage by allowing the company to cut back the supply of service when it becomes expensive to provide. Companies that have high operating leverage have high fixed costs relative to revenue and will, all other things being equal, tend to need to run more conservative financial leverage than companies with low operating leverage. So reduced operating leverage is a substitute for holding more capital.
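A minimal sketch of that point, using invented numbers: the interruptible contract acts like a real option that caps the utility’s downside when the marginal cost of supply spikes, which is exactly what lower operating leverage looks like.

```python
# Invented numbers: price received is 100 per unit; the option lets the
# utility curtail supply (paying a small hypothetical fee to the client)
# rather than supplying at a loss during a shock.
def margin(price, marginal_cost, can_interrupt, interruption_fee=5.0):
    if can_interrupt and marginal_cost > price:
        return -interruption_fee   # option exercised: stop supplying at a loss
    return price - marginal_cost

for cost, label in [(60.0, "normal conditions"), (180.0, "supply shock")]:
    print(f"{label:17s}: no option {margin(100.0, cost, False):+6.1f}, "
          f"with option {margin(100.0, cost, True):+6.1f}")
```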

Isabella then explores the ways in which the liquidity, capital and bail-in analogies might be applied in healthcare. I can quibble with some of the analogies she draws to prudential capital and liquidity requirements. As an example of a capital requirement being applied to health care she proposes that …

“… governments could mandate makers of non-perishable emergency goods (such as medicines, toilet paper, face masks, hand sanitiser) to always keep two-weeks’ worth of additional supply on hand. And companies could also be mandated to maintain some share of total supply chain production capability entirely domestically, making them more resilient to globalised shocks”

Two weeks’ supply looks more like a liquidity buffer than a capital buffer but that does not make the ideas any less worth considering as a way of making the real economy more resilient. The banking system had its crisis during the GFC and the real economy is being tested this time around. There are arguments about whether the changes to banking went far enough but it is clearly a lot better placed to play its part in this crisis than it was in the last. The question Isabella poses is what kinds of structural change will be required to make the real economy more resilient in the face of the next crisis.

Another example of FT Alphaville being a reliable source of ideas and information to help you think more deeply about the world.

Tony (From the Outside)

Thinking aloud about Australian bank ROE

I have been wanting to put something down on the question of Australian major bank ROE for a while. The issue generates a lot of heat but the public discussion I have observed has been truncated, in my opinion, by misconceptions.

I think we can agree that banks need to be profitable to be healthy and a healthy banking system underpins the health of the economy as a whole. Excessive profitability however is clearly bad for consumers, business and for the economy as a whole. The problem is determining what level of profitability is excessive. This post is unlikely to be the final word on this topic but hopefully it introduces a couple of considerations that seem to me to be largely missing from the public debate.

Most of what I read on this topic seems to treat the ROE of the Australian majors as self-evidently excessive and focuses on what to do about it. Exhibit A is the reported ROE, which in the 2019 half-year updates varied from 10.05% to 14.10%. This is much less than it was but still substantially better than what is being achieved by most banks outside Australia and by the smaller local banks. Exhibit B is the fact that the Australian banking system is an oligopoly, which almost by definition earns excess profits.

Reported ROE exceeds COE – case closed

Any discussion of ROE must be anchored by the estimated Cost of Equity (COE), the minimum return that investors require to hold equity risk. There are a variety of ways of calculating this but all of them generate a number that is much less than the ROE the majors currently earn. So case closed.

There is no question that the Australian majors cover their cost of equity, but it is less clear to me that the margin of excess profitability is as excessive as claimed.

Corporate finance 101 teaches us that we can derive a company’s cost of equity using the Capital Asset Pricing Model (CAPM), which holds that the required return is equal to the Risk Free Return plus the Equity Risk Premium (ERP) multiplied by the extent to which the return on the individual stock is correlated with the market as a whole. The general idea of being paid a premium for taking on equity risk makes sense but there are a bunch of issues with the CAPM once you get into the detail, one of the more topical being what to do when the risk free rate approaches zero.

I don’t want to get into the detail of those issues here but will assume for the purposes of this post that a rate of return in the order of 8-10% can be defended as a minimum acceptable return. I recognise that some of the more mechanical applications of the CAPM might generate a figure lower than this if they simply apply a fixed ERP to the current risk free rate.
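For what it is worth, the CAPM arithmetic is simple enough to sketch (the beta and ERP below are illustrative assumptions, not estimates for any particular bank), and the sketch also shows the risk-free rate problem: hold the ERP fixed and the mechanically derived COE falls one-for-one with the risk-free rate.

```python
def capm_cost_of_equity(risk_free, beta, equity_risk_premium):
    # CAPM: required return = risk-free rate + beta * equity risk premium
    return risk_free + beta * equity_risk_premium

# Illustrative inputs only; note how a mechanical application with a fixed
# ERP drags the answer down as the risk-free rate approaches zero.
for rf in (0.04, 0.01):
    coe = capm_cost_of_equity(rf, beta=1.2, equity_risk_premium=0.05)
    print(f"risk-free {rf:.0%}: cost of equity {coe:.1%}")
```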

Two reasons why a simple comparison of ROE and COE may be misleading

  1. Banking is an inherently cyclical business and long term investors require a return that compensates them for accepting this volatility in returns.
  2. Book value does not define market value

Banking is a highly cyclical business – who knew?

It is often asserted that banking is a low risk, “utility” style business and hence that shareholders should expect commensurately low returns. The commentators making these assertions tend to focus on the fact that the GFC demonstrated that it is difficult (arguably impossible) to allow large banks to fail without imposing significant collateral damage on the rest of the economy. Banks receive public sector support to varying degrees that reduces their risk of failure and hence the risk to shareholders. A variation of this argument is that higher bank capital requirements post the GFC have reduced the risk of investing in a bank by reducing the risk of insolvency.

There is no question that banks do occupy a privileged space in the economy due to the central bank liquidity support that is not available to other companies. This privilege (sometimes referred to as a “social licence”) is I think an argument for tempering the kinds of ROE targeted by the banks but it does not necessarily make them a true utility style investment whose earnings are largely unaffected by cyclical downturns.

The reality is that bank ROE will vary materially depending on the state of the credit cycle and this inherent cyclicality is probably accentuated by accounting for loan losses and prudential capital requirements. Loan losses for Australian banks are currently (October 2019) close to their cyclical low points and can be expected to increase markedly when the economy eventually moves into a downturn or outright recession. Exactly how much downside in ROE we can expect is open to debate but history suggests that loan losses could easily be 5 times higher than what we observe under normal economic conditions.

There is also the issue of how often this can be expected to happen. Again using history as a guide for the base rate, it seems that downturns might be expected every 7-10 years on average and long periods without a downturn seem to be associated with increased risk of more severe and prolonged periods of reduced economic activity.

What kind of risk premium does an investor require for this cyclicality? The question may be academic for shareholders who seek to trade in and out of bank stocks based on their view of the state of the cycle but I will assume that banks seek to cater to the concerns and interests of long term shareholders. The answer for these shareholders obviously depends on how frequent and how severe you expect the downturns to be, but back of the envelope calculations suggest to me that you would want ROE during the benign part of the credit cycle to be at least 200bp over the COE and maybe 300bp to compensate for the cyclical risk.
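For the record, my back-of-envelope logic runs roughly as follows (every input below is an assumption for illustration only): with a downturn year arriving about one year in eight, and downturn loan losses at five times the benign-year level, a benign-year ROE of COE plus 100bp only breaks even on average through the cycle, before any compensation for bearing the volatility itself. Hence the 200-300bp.

```python
# Back-of-envelope; every number is an assumption for illustration.
coe           = 0.09    # assumed cost of equity
benign_losses = 0.02    # loan losses as a share of equity in a benign year
downturn_mult = 5       # downturn losses as a multiple of benign losses
downturn_freq = 1 / 8   # one downturn year in eight, on average

for benign_roe in (0.10, 0.11, 0.12):
    # Extra losses in a downturn year come straight out of that year's ROE.
    downturn_roe  = benign_roe - (downturn_mult - 1) * benign_losses
    through_cycle = (1 - downturn_freq) * benign_roe + downturn_freq * downturn_roe
    margin_bp     = (through_cycle - coe) * 10_000
    print(f"benign ROE {benign_roe:.0%}: through-the-cycle {through_cycle:.2%}, "
          f"margin over COE {margin_bp:+.0f}bp")
```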

Good risk management capabilities can mitigate this inherent volatility but not eliminate it; banks are inherently cyclical investments on the front line of the business cycle. Conversely, poor risk management or an aggressive growth strategy can have a disproportionately negative impact. It follows that investors will be inclined to pay a premium to book value for banks they believe have good risk management credentials. I will explore this point further in the discussion of book value versus market value.

Book Value versus Market Value

Apart from the cyclical factors discussed above, the simple fact that ROE is higher than COE is frequently cited as “proof” that ROE is excessive. It is important however to examine the unstated assumption that the market value of a bank should be determined by the book value of its equity. To the best of my knowledge, there is no empirical or conceptual basis for this assumption. There are a number of reasons why a company’s share price might trade at a premium or a discount to its book value as prescribed by the relevant accounting standards.

The market may be ascribing value to assets that are not recognised by the accounting standards. The money spent on financial control and risk management, for example, is largely expensed and hence not reflected in the book value of equity. That value however becomes apparent when the bank is under stress. These “investments” cannot eliminate the inherent cyclicality discussed above but they do mitigate those risks.

A culture built on sound risk management and financial control capabilities is difficult to value and won’t be reflected in book value except to the extent it results in conservative valuation and provisioning outcomes. It is however worth something. Investors will pay a premium for the banks they believe have these intangible strengths while discounting or avoiding altogether the shares of banks they believe do not.

Summing up …

This post is in no way an exhaustive treatment of the topic. Its more modest objective was simply to offer a couple of issues to consider before jumping to the conclusion that the ROE earned by the large Australian banks is excessive based on simplistic comparisons of point-in-time ROE with mechanical derivations of the theoretical COE.

As always, it is entirely possible that I am missing something – if so let me know what it is ….

Tony

Bank funding costs and capital structure – what I missed

A recent post looked at a Bank of England paper that offered evidence that the cost of higher capital requirements will be mitigated by a reduction in leverage risk, which translates into lower borrowing costs and a decline in the required return on equity. My post set out some reasons why I struggled with this finding.

My argument was that,

  • in banking systems where the senior debt rating of banks assumed to be Too Big To Fail is supported by an implied assumption of government support (such as Australia),
  • increasing the level of subordinated debt could reduce the value of that implied support,
  • however, senior debt itself does not seem to be any less risky (the senior debt rating does not improve), and
  • the subordinated debt should in theory be more risky if it reduces the value of the assumption of government support.

Fortunately, I also qualified my observations with the caveat that it was possible that I was missing something. Recent issuance of Tier 2 debt by some Australian banks offers some more empirical evidence that does seem to suggest that the cost of senior debt can decline in response to the issuance of more junior securities and that the cost of subordinated debt does not seem to be responding in the way that the theory suggests.

My original argument was I think partly correct. The prospect of the large Australian banks substantially increasing the relative share of Tier 2 debt in their liability structure has not resulted in any improvement in the AA- senior debt rating of the banks subject to this Total Loss Absorbing Capital requirement. So senior debt does not seem to be any less risky.

What I missed was the impact of the supply-demand dynamic in a low interest rate environment where safe assets are in very short supply.

The senior debt in my thesis is no less risky but the debt market appears to be factoring in the fact that the pool of AA- senior debt is likely to shrink relative to what was previously expected. Investors who have been struggling for some time to find relatively safe assets with a decent yield weigh up the options. A decent yield on safe assets like they used to get in the old days would obviously be preferable but that is not on offer so they pay up to get a share of what is on offer.

The subordinated debt issued by these banks might be more risky in theory to the extent that bail-in is now more credible but if you do the analysis and conclude that the bank is well managed and low risk then you discount the risk of being bailed-in and take the yield. Again the ultra low yield on very safe assets and the shortage of better options means that you probably bid strongly to get a share of the yield on offer.

Summing up. The impacts on borrowing costs described here may look the same as what would be expected if the Modigliani-Miller effect were in play but the underlying driver appears to be something else.

It remains possible that I am still missing something but hopefully this post moves me a bit closer to a correct understanding of how capital structure impacts bank funding costs …

Tony

The “skin in the game” argument for more common equity

One of the traditional arguments for higher common equity requirements is that it increases the shareholders’ “skin in the game” and thereby creates an incentive to be more diligent and conservative in managing risk.

This principle is true up to a point but I believe more common equity mostly generates this desirable risk management incentive when the extra skin in the game (aka capital) is addressing a problem of too little capital. It is much less obvious that more capital promotes more conservative risk appetite for a bank that already has a strong capital position.

In the “too little” capital scenarios, shareholders confronted with a material risk of failure, but limited downside (because they have only a small amount of capital invested), have an incentive to take large risks with uncertain payoffs. That is clearly undesirable but it is not a fair description of the risk reward payoff confronting bank shareholders who have already committed substantial increased common equity in response to the new benchmarks of what it takes to be deemed a strong bank.

The European Systemic Risk Board published some interesting research on this question in a paper titled “Has regulatory capital made banks safer? Skin in the game vs moral hazard”. I have copied the abstract below which summarises the key conclusions.

Abstract: The paper evaluates the impact of macroprudential capital regulation on bank capital, risk taking behaviour, and solvency. The identification relies on the policy change in bank-level capital requirements across systemically important banks in Europe. A one percentage point hike in capital requirements leads to an average CET1 capital increase of 13 percent and no evidence of reduction in assets. The increase in capital comes at a cost. The paper documents robust evidence on the existence of substitution effects toward riskier assets. The risk taking behavior is predominantly driven by large and less profitable banks: large wholesale funded banks show less risk taking, and large banks relying on internal ratings based approach successfully disguise their risk taking. In terms of overall impact on solvency, the higher risk taking crowds-out the positive effect of increased capital.

I have only skimmed the paper thus far and have reservations regarding how they measure increased risk. As I understand it, the increased riskiness the analysis measures is based on increases in average risk weights. It was not clear how the analysis distinguished changes in portfolio riskiness from changes in the risk weight measure. That said, the overall conclusions seem intuitively right.

Tony

What is the value of information in the money market?

“Debt and institutions dealing with debt have two faces: a quiet one and a tumultuous one …. The shift from an information-insensitive state where liquidity and trust prevails because few questions need to be asked, to an information-sensitive state where there is a loss of confidence and a panic may break out is part of the overall system: the calamity is a consequence of the quiet. This does not mean that one should give up on improving the system. But in making changes, it is important not to let the recent crisis dominate the new designs. The quiet, liquid state is hugely valuable.”

Bengt Holmstrom (2015)

The quote above comes from an interesting paper by Bengt Holmstrom that explores the ways in which the role money markets play in the financial system is fundamentally different from that played by stock markets. That may seem like a statement of the obvious but Holmstrom argues that some reforms of credit markets based on the importance of transparency and detailed disclosure are misconceived because they do not reflect these fundamental differences in function and mode of operation.

Holmstrom argues that the focus and purpose of stock markets is price discovery for the purpose of allocating risk efficiently. Money markets, in contrast, are about obviating the need for price discovery in order to enhance the liquidity of the market. Over-collateralisation is one of the features of the money market that enable deep, liquid trading to occur without the need to understand the underlying risk of the assets that are being funded.

 “The design of money market policies and regulations should recognise that money markets are very different from stock markets. Lessons from the latter rarely apply to the former, because markets for risk-sharing and markets for funding have their own separate logic. The result is two coherent systems with practices that are in almost every respect polar opposites.”

From “Understanding the role of debt in the financial system” Bengt Holmstrom (BIS Working Papers No 479 – January 2015)

Holmstrom appears to have written the paper in response to what he believes are misconceived attempts to reform credit markets in the wake of the recent financial crisis. These reforms have often drawn on insights grounded in our understanding of stock markets where information and transparency are key requirements for efficient price discovery and risk management. His paper presents a perspective on the logic of credit markets and the structure of debt contracts that highlights the information insensitivity of debt. This perspective explains among other things why he believes that information insensitivity is the natural and desired state of the money markets.

Holmstrom notes that one of the puzzles of the GFC was how people traded so many opaque instruments with a seeming ignorance of their real risk. There is a tendency to see this as a conspiracy by bankers to confuse and defraud customers, which in turn has prompted calls to make money market instruments more transparent. While transparency and disclosure are essential for risk pricing and allocation, Holmstrom argues that this is not the answer for money markets because they operate on different principles and serve a different function.

 “I will argue that a state of “no questions asked” is the hallmark of money market liquidity; that this is the way money markets are supposed to look when they are functioning well.”

“Among economists, the mistake is to apply to money markets the lessons and logic of stock markets.”

“The key point I want to communicate today is that these two markets are built on two entirely different, one could say diametrically opposite, logics. Ultimately, this is because they serve two very different purposes. Stock markets are in the first instance aimed at sharing and allocating aggregate risk. To do that effectively requires a market that is good at price discovery.

 “But the logic behind transparency in stock markets does not apply to money markets. The purpose of money markets is to provide liquidity for individuals and firms. The cheapest way to do so is by using over-collateralised debt that obviates the need for price discovery. Without the need for price discovery the need for public transparency is much less. Opacity is a natural feature of money markets and can in some instances enhance liquidity, as I will argue later.”

“Why does this matter? It matters because a wrong diagnosis of a problem is a bad starting point for remedies. We have learned quite a bit from this crisis and we will learn more. There are things that need to be fixed. But to minimise the chance of new, perhaps worse mistakes, we need to analyse remedies based on the purpose of liquidity provision. In particular, the very old logic of collateralised debt and the natural, but sometimes surprising implications this has for how information and risk are handled in money markets, need to be properly appreciated.”

There is a section of the paper titled “purposeful opacity” which, if I understood him correctly, seemed to extend his thesis on the value of being able to trade on an “information insensitive” basis to argue that “opacity” in the debt market is something to be embraced rather than eliminated. I struggled with embracing opacity in this way but that in no way diminishes the validity of the distinction he draws between debt and equity markets.

The other useful insight was the way in which over-collateralisation (whether explicit or implicit) anchors the liquidity of the money market. His discussion of why the money market can transition so suddenly from a state in which the creditworthiness of a counter-party is taken for granted to one in which doubt emerges also rings true.

The remainder of this post mostly comprises extracts from the paper that offer more detail on the point I have summarised above. The paper is a technical one but worth the effort for anyone interested in the question of how banks should finance themselves and the role of debt in the financial system.

Money markets versus stock markets

Holmstrom argues that each system displays a coherent internal logic that reflects its purpose but these purposes are in many respects polar opposites.

Stock markets are primarily about risk sharing and price discovery. As a consequence, these markets are sensitive to information and value transparency. Traders are willing to make substantial investments to obtain this information. Liquidity is valuable but equity investors will tend to trade less often and in lower volumes than participants in debt markets.

Money markets, in contrast, Holmstrom argues are primarily about liquidity provision and lending. The price discovery process is much simpler but trading is much higher volume and more urgent.

“The purpose of money markets is to provide liquidity. Money markets trade in debt claims that are backed, explicitly or implicitly, by collateral.

 “People often assume that liquidity requires transparency, but this is a misunderstanding. What is required for liquidity is symmetric information about the payoff of the security that is being traded so that adverse selection does not impair the market. Without symmetric information adverse selection may prevent trade from taking place or in other ways impair the market (Akerlof (1970)).”

“Trading in debt that is sufficiently over-collateralised is a cheap way to avoid adverse selection. When both parties know that there is enough collateral, more precise private information about the collateral becomes irrelevant and will not impair liquidity.”

“The main purpose of stock markets is to share and allocate risk … Over time, stock markets have come to serve other objectives too, most notably governance objectives, but the pricing of shares is still firmly based on the cost of systemic risk (or a larger number of factors that cannot be diversified). Discovering the price of systemic risk requires markets to be transparent so that they can aggregate information efficiently.”
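Holmstrom’s over-collateralisation point can be made concrete with the payoff function of debt itself (a sketch with made-up numbers): the lender receives min(face value, collateral value), so when the collateral is comfortably worth more than the face value in almost all states, better information about the collateral barely moves the value of the debt.

```python
import random

random.seed(1)

def expected_debt_payoff(face, collateral_values):
    # Debt pays min(face value, collateral value) in each state of the world.
    return sum(min(face, v) for v in collateral_values) / len(collateral_values)

# Hypothetical collateral: worth 100 on average, with some dispersion.
states = [random.gauss(100, 15) for _ in range(100_000)]

for face in (60, 90, 100):
    payoff = expected_debt_payoff(face, states)
    # The shortfall below face value is what collateral information is worth:
    # negligible for deeply over-collateralised debt, material near the money.
    print(f"face {face}: expected payoff {payoff:.2f} (shortfall {face - payoff:.2f})")
```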

Purposeful opacity

“Because debt is information-insensitive … traders have muted incentives to invest in information about debt. This helps to explain why few questions were asked about highly rated debt: the likelihood of default was perceived to be low and the value of private information correspondingly small.”

Panics: The ill consequences of debt and opacity

“Over-collateralised debt, short debt maturities, reference pricing, coarse ratings, opacity and “symmetric ignorance” all make sense in good times and contribute to the liquidity of money markets. But there is a downside. Everything that adds to liquidity in good times pushes risk into the tail. If the underlying collateral gets impaired and the prevailing trust is broken, the consequences may be cataclysmic”

“The occurrence of panics supports the informational thesis that is being put forward here. Panics always involve debt. Panics happen when information-insensitive debt (or banks) turns into information-sensitive debt … A regime shift occurs from a state where no one feels the need to ask detailed questions, to a state where there is enough uncertainty that some of the investors begin to ask questions about the underlying collateral and others get concerned about the possibility”

These events are cataclysmic precisely because the liquidity of debt rested on over-collateralisation and trust rather than a precise evaluation of values. Investors are suddenly in the position of equity holders looking for information, but without a market for price discovery. Private information becomes relevant, shattering the shared understanding and beliefs on which liquidity rested (see Morris and Shin (2012) for the general mechanism and Goldstein and Pauzner (2005) for an application to bank runs).

Would transparency have helped contain the contagion?

“A strong believer in the informational efficiency of markets would argue that, once trading in credit default swaps (CDS) and then the ABX index began, there was a liquid market in which bets could be made both ways. The market would find the price of systemic risk based on the best available evidence and that would serve as a warning of an imminent crisis. Pricing of specific default swaps might even impose market discipline on the issuers of the underlying debt instruments”

Shadow banking

 “The rapid growth of shadow banking and the use of complex structured products have been seen as one of the main causes of the financial crisis. It is true that the problems started in the shadow banking system. But before we jump to the conclusion that shadow banking was based on unsound, even shady business practices, it is important to try to understand its remarkable expansion. Wall Street has a hard time surviving on products that provide little economic value. So what drove the demand for the new products?”

 “It is widely believed that the global savings glut played a key role. Money from less developed countries, especially Asia, flowed into the United States, because the US financial system was perceived to be safe … More importantly, the United States had a sophisticated securitisation technology that could activate and make better use of collateral … Unlike the traditional banking system, which kept mortgages on the banks’ books until maturity, funding them with deposits that grew slowly, the shadow banking system was highly scalable. It was designed to manufacture, aggregate and move large amounts of high-quality collateral long distances to reach distant, sizable pools of funds, including funds from abroad.”

“Looking at it in reverse, the shadow banking system had the means to create a lot of “parking space” for foreign money. Securitisation can manufacture large amounts of AAA-rated securities provided there is readily available raw material, that is, assets that one can pool and tranche”

“I am suggesting that it was an efficient transportation network for collateral that was instrumental in meeting the global demand for safe parking space.”

 “The distribution of debt tranches throughout the system, sliced and diced along the way, allowing contingent use of collateral”

“Collateral has been called the cash of shadow banking (European repo council (2014)). It is used to secure large deposits as well as a host of derivative transactions such as credit and interest rate swaps.”  

There is a relatively recent, but rapidly growing, body of theoretical research on financial markets where the role of collateral is explicitly modelled and where the distinction between local and global collateral is important.

“Viewed through this theoretical lens, the rise of shadow banking makes perfectly good sense. It expanded in response to the global demand for safe assets. It improved on traditional banking by making collateral contingent on need and allowing it to circulate faster and attract more distant capital. In addition, securitisation created collateral of higher quality (until the crisis, that is) making it more widely acceptable. When the crisis hit, bailouts by the government, which many decry, were inevitable. But as just discussed, the theory supports the view that bailouts were efficient even as an ex ante policy (if one ignores potential moral hazard problems). Exchanging impaired collateral for high-quality government collateral, as has happened in the current crisis (as well as historically with clearing houses), can be rationalised on these grounds.”

Some policy implications

A crisis ends only when confidence returns. This requires getting back to the no-questions-asked state …

Transparency would likely have made the situation worse

“By now, the methods out of a crisis appear relatively well understood. Government funds need to be committed in force (Geithner (2014)). Recapitalisation is the only sensible way out of a crisis. But it is much less clear how the banking system, and especially shadow banking, should be regulated to reduce the chance of crisis in the first place.  The evidence from the past panic suggests that greater transparency may not be that helpful.”

“The logic of over-capitalisation in money markets leads me to believe that higher capital requirements and regular stress tests is the best road for now.”

“Transparency can provide some market discipline and give early warning of trouble for individual banks. But it may also lead to strategic behaviour by management. The question of market discipline is thorny. In good times market discipline is likely to work well. The chance that a bank that is deemed risky would trigger a panic is non-existent and so the bank should pay the price for its imprudence. In bad times the situation is different. The failure of a bank could trigger a panic. In bad times it would seem prudent to be less transparent with the stress tests (for some evidence in support of this dichotomy, see Machiavelli (1532)).”

Does more loss absorption and “orderly resolution” eliminate the TBTF subsidy?

The Australian Government’s 2014 Financial System Inquiry (FSI) recommended that APRA implement a framework for minimum loss-absorbing and recapitalisation capacity in line with emerging international practice, sufficient to facilitate the orderly resolution of Australian authorised deposit-taking institutions (ADIs) and minimise taxpayer support (Recommendation 3).

In early November, APRA released a discussion paper titled “Increasing the loss absorption capacity of ADIs to support orderly resolution” setting out its response to this recommendation. The paper proposes that selected Australian banks be required to hold more loss absorbing capital. Domestic Systemically Important Banks (DSIBs) are the primary target but, depending partially on how their Recovery and Resolution Planning addresses the concerns APRA has flagged, some other banks will be captured as well.

The primary objectives are to improve financial safety and stability but APRA’s assessment is that competition would also be “Marginally improved” on the basis that “requiring larger ADIs to maintain additional loss absorbency may help mitigate potential funding advantages that flow to larger ADIs“. This assessment may be shaped by the relatively modest impact (5bp) on aggregate funding costs that APRA has estimated, or by simple regulatory conservatism. I suspect however that APRA is underselling the extent to which the TBTF advantage would be mitigated, if not completely eliminated, by the added layer of loss absorption proposed. If I am correct, then this proposal would not only minimise the risk to taxpayers of future banking crises, but also represent an important step towards placing Australian ADIs on a more level playing field.

Why does the banking system need more loss absorption capacity?

APRA offers two reasons:

  1. The critical role financial institutions play in the economy means that they cannot be allowed to fail in a disorderly manner that would have adverse systemic consequences for the economy as a whole.
  2. The government should not be placed in a position where it believes it has no option but to bail out one or more banks.

The need for extra capital might seem counter-intuitive, given that ADIs are already “unquestionably strong”, but being unquestionably strong is not just about capital; the unstated assumption is that the balance sheet and business model are also sound. The examples that APRA has used to calibrate the degree of total loss absorption capacity could be argued to reflect scenarios in which failures of management and/or regulation have resulted in losses much higher than would be expected in a well-managed banking system dealing with the normal ups and downs of the business cycle.

At the risk of over simplifying, we might think of the first layers of the capital stack (primarily CET1 capital but also Additional Tier 1) being calibrated to the needs of a “good bank” (i.e. well-managed, well-regulated) while the more senior components (Tier 2 capital) represent a reserve to absorb the risk that the good bank turns out to be a “bad bank”.

What form will this extra capital take?

APRA concludes that ADIs should be required to hold “private resources” to cope with this contingency. I doubt that conclusion would be contentious but the issue is the form this self-insurance should take. APRA proposes that the additional loss absorption requirement be implemented via an increase in the minimum Prudential Capital Requirement (PCR) applied to the Total Capital Ratio (TCR) that ADIs are required to maintain under Para 23 of APS 110.

“The minimum PCRs that an ADI must maintain at all times are:
(a) a Common Equity Tier 1 Capital ratio of 4.5 per cent;
(b) a Tier 1 Capital ratio of 6.0 per cent; and
(c) a Total Capital ratio of 8.0 per cent.
APRA may determine higher PCRs for an ADI and may change an ADI’s PCRs at any time.”

APS 110 Paragraph 23

This means that banks have discretion over what form of capital they use, but APRA expects that banks will use Tier 2 capital that counts towards the Total Capital Ratio as the lowest cost way to meet the requirement. Advocates of the capital structure irrelevance thesis would likely take issue with this part of the proposal. I believe APRA is making the right call (broadly speaking) in supporting more Tier 2 rather than more CET1 capital, but the pros and cons of this debate are a whole post in themselves. The views of both sides are also pretty entrenched so I doubt I will contribute much to that 50-year-old debate in this post.

How much extra loss absorbing capital is required?

APRA looked at three things when calibrating the size of the additional capital requirement:

  • Losses experienced in past failures of systemically important banks
  • What formal requirements other jurisdictions have applied to their banks
  • The levels of total loss absorption observed being held in an international peer group (i.e. what banks choose to hold independent of prudential minimums)

Based on these inputs, APRA concluded that requiring DSIBs to maintain additional loss absorbing capital of between 4-5 percentage points of RWA would be an appropriate baseline setting to support orderly resolution outcomes. The calibration will be finalised following the conclusion of the consultation on the discussion paper but this baseline requirement looks sufficient to me based on what I learned from being involved in stress testing (for a large Australian bank).

Is more loss absorption a good idea?

The short answer, I think, is yes. The government needs a robust way to recapitalise banks which does not involve risk to the taxpayer and the only real alternative is to require banks to hold more common equity.

The devil, however, is in the detail. There are a number of practical hurdles to consider in making it operational and these really need to be figured out (to the best of our ability) before the fact rather than being made up on the fly under crisis conditions. The proposal also indirectly raises some conceptual issues with capital structure that are worth understanding.

How would it work in practice?

The discussion paper sets out “A hypothetical outcome from resolution action” to explain how an orderly resolution could play out.

“The approximate capital levels the D-SIBs would be expected to maintain following an increase to Total Capital requirements, and a potential outcome following the use of the additional loss absorbency in resolution, are presented in Figure 6. Ultimately, the outcome would depend on the extent of losses.

If the stress event involved losses consistent with the largest of the FSB study (see Figure 2), AT1 and Tier 2 capital instruments would be converted to ordinary shares or written off. After losses have been considered, the remaining capital position would be wholly comprised of CET1 capital. This conversion mechanism is designed to allow for the ADI to be stabilised in resolution and provide scope to continue to operate, and particularly to continue to provide critical functions.”

[Figure 6: approximate D-SIB capital levels following an increase to Total Capital requirements, and a potential outcome following the use of the additional loss absorbency in resolution]

Source – APRA Discussion Paper (page 24)

What I have set out below draws from APRA’s example while adding detail that hopefully clarifies what should be expected if these scenarios ever play out.

  • In a stress event, losses first impact any surplus CET1 held in excess of the Capital Conservation Buffer (CCB) requirement, and then the CCB itself (the first two layers of loss absorption in Figure 6 above)
  • As the CCB is used up, the ADI is subject to progressive constraints on discretionary distributions on CET1 and AT1 capital instruments
  • In the normal course of events, the CCB should be sufficient to cope with most stresses and the buffer is progressively rebuilt through profit retention and through new issuance, if the ADI wants to accelerate the pace of the recapitalisation process
  • The Unquestionably Strong capital established to date is designed to be sufficient to allow ADIs to withstand quite severe expected cyclical losses (as evidenced by the kinds of severe recession stress scenarios typically used to calibrate capital buffers)
  • In more extreme scenarios, however, the CCB is overwhelmed by the scale of losses and APRA starts to think about whether the ADI has reached a Point of Non-Viability (PONV), where ADIs find themselves unable to fund themselves or to raise new equity; this is where the proposals in the Discussion Paper come into play
  • The discussion paper does not consider why such extreme events might occur but I have suggested above that one reason is that the scale of losses reflects endogenous weakness in the ADI (i.e. failures of risk management, financial control, business strategy) which compound the losses that would be a normal consequence of downturns in the business cycle
  • APRA requires that AT1 capital instruments, classified as liabilities under Australian Accounting Standards, must include a provision for conversion into ordinary shares or write-off when the CET1 capital ratio falls to, or below, 5.125 per cent
  • In addition, AT1 and Tier 2 capital instruments must contain a provision, triggered on the occurrence of a non-viability trigger event, to immediately convert to ordinary shares or be written off
  • APRA’s simple example shows both AT1 and Tier 2 being converted to CET1 (or written off) such that the post-resolution capital structure is composed entirely of CET1 capital

Note that conversion of the AT1 and Tier 2 instruments does not in itself allocate losses to these instruments. The holders receive common equity equivalent to the book value of their instrument, which they can sell or hold. The ordinary shareholders effectively bear the loss via the forced dilution of their shareholdings. The main risk to the AT1 and Tier 2 holders is that, when they sell the ordinary shares received on conversion, they may not get the same price as that used to convert their instrument. APRA also imposes a floor on the share price used for conversion, which may mean that the value of ordinary shares received is less than the face value of the instrument being converted. The reason why ordinary shareholders should be protected in this way under a resolution scenario is not clear.
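A worked example with hypothetical numbers may help. Converting $100m of Tier 2 at a distressed share price simply dilutes the ordinary shareholders; it is the conversion price floor that shifts part of the loss onto the Tier 2 holders.

```python
# Hypothetical conversion of $100m of Tier 2 at a point of non-viability.
face_value  = 100.0    # $m of Tier 2 being converted
shares_out  = 1_000.0  # million ordinary shares before conversion
share_price = 0.40     # assumed distressed market price, $ per share

for floor, label in [(None, "no price floor"), (1.00, "$1.00 price floor")]:
    conv_price = share_price if floor is None else max(share_price, floor)
    new_shares = face_value / conv_price     # shares issued to the T2 holders
    value_rec  = new_shares * share_price    # market value of what they receive
    dilution   = new_shares / (shares_out + new_shares)
    print(f"{label}: {new_shares:.0f}m new shares ({dilution:.1%} dilution), "
          f"T2 holders receive ${value_rec:.0f}m against ${face_value:.0f}m face value")
```

With no floor the T2 holders get full value and the whole loss lands on the diluted ordinary shareholders; with the floor they receive $40m of equity against $100m of face value, which is the protection of ordinary shareholders the paragraph above questions.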

The devil is in the detail – A short (probably incomplete) list of issues I see with the proposal:

  1. Market capacity to supply the required quantum of additional Tier 2 capital required
  2. Conversion versus write-off
  3. The impact of conversion on the “loss hierarchy”
  4. Why not just issue more common equity?
  5. To what extent would the public sector continue to stand behind the banking system once the proposed level of self insurance is in place?

Market capacity to supply the required level of additional loss absorption

APRA has requested industry feedback on whether market appetite for Tier 2 capital will be a problem but its preliminary assessment is that:

” … individual ADIs and the industry will have the capacity to implement the changes necessary to comply with the proposals without resulting in unnecessary cost for ADIs or the broader financial system.

Preliminary estimates suggest the total funding cost impact from increasing the D-SIBs’ Total Capital requirements would not be greater than five basis points in aggregate based on current spreads. Assuming the D-SIBs meet the increased requirement by increasing the issuance of Tier 2 capital instruments and reducing the issuance of senior unsecured debt, the impact is estimated by observing the relative pricing of the different instruments. The spread difference between senior unsecured debt and Tier 2 capital instruments issued by D-SIBs is around 90 to 140 basis points.”
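The 5bp figure is easy enough to reproduce with rough arithmetic (the balance sheet inputs below are my assumptions, not APRA’s workings): Tier 2 replacing senior unsecured funding equal to 4-5 per cent of RWA is only a small slice of the total balance sheet, so even a full 90-140bp spread adds just a few basis points overall. Note the arithmetic takes current spreads as given, which is precisely the assumption questioned below.

```python
# Rough reproduction of the aggregate cost estimate; every input is an assumption.
extra_t2_to_rwa = 0.05    # extra Tier 2, share of RWA (top of the 4-5% range)
rwa_density     = 0.45    # assumed RWA as a share of total assets for a D-SIB
t2_spread       = 0.0120  # assumed 120bp Tier 2 vs senior unsecured spread

slice_of_assets = extra_t2_to_rwa * rwa_density     # ~2.3% of total assets
impact_bp = slice_of_assets * t2_spread * 10_000    # extra spread paid on that slice
print(f"aggregate funding cost impact: ~{impact_bp:.1f}bp")   # comfortably under 5bp
```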

I have no expert insights on this question beyond a gut feel that the required level of Tier 2 capital cannot be raised without widening the current spread between Tier 2 capital and senior debt, if it can be raised at all. The best (only?) commentary I have seen to date is by Chris Joye writing in the AFR (see here and here). The key points I took from his opinion pieces are:

  • The extra capital requirement translates to $60-$80 billion of extra bonds over the next four years (on top of rolling over existing maturities)
  • There is no way the major banks can achieve this volume
  • Issuing a new class of higher ranking (Tier 3) bonds is one option, though APRA also retains the option of scaling back the additional Tier 2 requirement and relying on its existing ability to bail-in senior debt

Chris Joye knows a lot more about the debt markets than I do, but I don’t think relying on the ability to bail-in senior debt really works. The Discussion Paper refers to APRA’s intention that the “… proposed approach is … designed with the distinctive features of the Australian financial system in mind, recognising the role of the banking system in channelling foreign savings into the economy “ (Page 4). I may be reading too much into the tea leaves, but this could be interpreted as a reference to the desirability of designing a loss absorbing solution which does not adversely impact the senior debt rating that helps anchor the ability of the large banks to borrow foreign savings. My rationale is that the senior debt rating impacts not only the cost of borrowing but also the volume of money that foreign savers are willing to entrust with the Australian banking system, and APRA specifically cites this factor as shaping their thinking. Although not explicitly stated, it seems to me that APRA is trying to engineer a solution in which the D-SIBs retain the capacity to raise senior funding with a “double A” rating.

Equally importantly, the creation of a new class of Tier 3 instruments seems like a very workable alternative to senior bail-in that would allow the increased loss absorption target to be achieved without impacting the senior debt rating. This will be a key issue to monitor when ADIs lodge their responses to the discussion paper. It also seems likely that the incremental cost of the proposal on overall ADI borrowing costs will be higher than the 5bp that APRA included in the discussion paper. That is not a problem in itself to the extent this reflects the true cost of self insurance against the risk of failure, just something to note when considering the proposal.

Conversion versus write-off

APRA has the power to effect increased loss absorption in two ways. One is to convert the more senior elements of the capital stack into common equity, but APRA also has the power to write these instruments off. Writing off AT1 and/or Tier 2 capital effectively represents a transfer of value from the holders of these instruments to ordinary shareholders. That is hard to reconcile with the traditional loss hierarchy that sees common equity take all first losses, with each of the more senior tranches progressively stepping up as the capacity of more junior tranches is exhausted.

Consequently I assume that the default option would always favour conversion over write-off. The only place that I can find any guidance on this question is Attachment J to APS 111 (Capital Adequacy) which states

Para 11. “Where, following a trigger event, conversion of a capital instrument:

(a)  is not capable of being undertaken;

(b)  is not irrevocable; or

(c) will not result in an immediate and unequivocal increase in Common Equity Tier 1 Capital of the ADI,

the amount of the instrument must immediately and irrevocably be written off in the accounts of the ADI and result in an unequivocal addition to Common Equity Tier 1 Capital.”

That seems to offer AT1 and Tier 2 holders comfort that they won’t be asked to take losses ahead of common shareholders, but the drafting of the prudential standard could be clearer if there are other reasons why APRA believes a write-off might be the better resolution strategy. The holders need to understand the risks they are underwriting, and ambiguity and uncertainty are not helpful when the banking system is in, or at risk of, a crisis.

The impact of conversion on the “loss hierarchy”

The concept of a loss hierarchy describes the sequence under which losses are first absorbed by common equity and then by Additional Tier 1 and Tier 2 capital, if the more junior elements prove insufficient. Understanding the loss hierarchy is I think fundamental to understanding capital structure in general and this proposal in particular:

  • In a traditional liquidation process, the more senior elements should only absorb loss when the junior components of the capital stack are exhausted
  • In practice, post Basel III, the more senior elements will be required to participate in recapitalising the bank even though there is still some book equity and the ADI is technically solvent (though not necessarily liquid)
  • This is partly because the distributions on AT1 instruments are subject to progressively higher capital conservation restrictions as the CCB shrinks but mostly because of the potential for conversion to common equity (I will ignore the write-off option to keep things simple)

I recognise that APRA probably tried to simplify this explanation but the graphic example they used (see Figure 6 above) to explain the process shows the Capital Surplus and the CCB (both CET1 capital) sitting on top of the capital stack followed by Tier 2, Additional Tier 1 and finally the minimum CET1 capital. The figure below sets out what I think is a more logical illustration of the capital stack and loss hierarchy.

[Figure: the capital stack and loss hierarchy]

Losses initially impact CET1 directly by reducing net tangible assets per share. At the point of a non-viability based conversion event, the losses impact ordinary shareholders via the dilution of their shareholding. AT1 and Tier 2 holders only share in these losses to the extent that they sell the ordinary shares they receive for less than the conversion price (or if the conversion price floor results in them receiving less than the book value of their holding).

Why not just issue more common equity?

Capital irrelevancy M&M purists will no doubt roll their eyes and say surely APRA knows that the overall cost of capital is not impacted by capital structure tricks. The theory being that any saving from using lower cost instruments will be offset by increases in the costs (or required returns) of the more subordinated capital instruments (including equity).

So this school argues you should just hold more CET1 and the cost of the more senior instruments will decline. The practical problem, I think, is that the cost of senior debt already reflects the value of the implied support of being too big, or otherwise systemically important, to be allowed to fail. The risk that deposits might be exposed to loss is even more remote, partly due to deposit insurance but, possibly more importantly, because deposits are deeply insulated from risk by the substantial layers of equity and junior ranking liabilities that must be exhausted before assets are insufficient to cover deposit liabilities.

To what extent would the public sector continue to stand behind the banking system once the proposed level of self insurance is in place?

Assuming the market capacity constraint question could be addressed (which I think it can), the solution that APRA has proposed seems to me to give the official family much greater options for dealing with future banking crises without having to call on the taxpayer to underwrite the risk of recapitalising failed or otherwise non-viable banks.

It does not, however, eliminate the need for liquidity support. I know some people argue that this is a distinction without a difference but I disagree. The reality is that banking systems built on mostly illiquid assets will likely face future crises of confidence where the support of the central bank will be necessary to keep the financial wheels of the economy turning.

There are alternative ways to construct a banking system. Mervyn King, for example, has advocated a version of the Chicago Plan under which all bank deposits must be 100% backed by liquid reserves that would be limited to safe assets such as government securities or reserves held with the central bank. Until we decide to go down that path, or something similar, the current system requires the central bank to be the lender of last resort. That support is extremely valuable and is another design feature that sets banks apart from other companies. It is not the same however, as bailing out a bank via a recapitalisation.

Conclusion

I have been sitting on this post for a few weeks while trying to weigh the pros and cons. As always, the risk remains that I am missing something. That said, this looks to me like a necessary (and I would argue desirable) enhancement to the Australian financial system, one that not only underpins its safety and stability but also takes us much closer to a level playing field. Big banks will always have the advantages of sophistication, scale and efficiency that come with size, but any funding cost advantage associated with being too big to fail now looks to be priced into the cost of the additional layers of loss absorption this proposal would require them to put in place.

Tony

Will Expected Loss loan provisioning reduce pro cyclicality?

I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading: Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interests the die-hard bank capital and credit tragics, but I believe it has the potential to create some problems in the real world some way down the track.

Borio’s position is that:

  • Relative to the “incurred loss” approach to credit risk that precedes it, the new standard is likely to mitigate pro cyclicality to some extent;
  • But it will not be sufficient on its own to eliminate the risk of adverse pro cyclical impacts on the real economy;
  • So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of pro cyclicality) that work in conjunction with, and complement, the operation of loan loss provisions in managing credit risk.

There are two ways to respond to Claudio Borio’s observations on this topic:

  1. One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate pro cyclicality;
  2. The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan provisions.

Will ECL reduce the risk of pro cyclicality?

It is true that, relative to the incurred loss model, the ECL approach will allow loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it does give them more freedom to increase provisions without this being seen as a cynical device to “smooth” profits.

The problem I see with this assessment is that the real issues with the adequacy of loan provisioning arise when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.

This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of the loans the banks have originated (or other risk management failures), because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from the domain of expected loss to the domain of uncertainty.
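
A stylised example of the point, with all figures invented: an ECL provision is only as forward-looking as the probability of default (PD) forecast behind it, so a downturn that surprises the models still delivers an unprovisioned hit to capital.

```python
# Invented figures: one-year ECL provisions versus realised losses when
# the downturn turns out worse than the models forecast.

exposure, lgd = 100.0, 0.4   # exposure at default and loss given default

forecast_pd = [0.01, 0.01, 0.02, 0.03, 0.03]   # what the models expect
realised_pd = [0.01, 0.01, 0.08, 0.12, 0.06]   # what actually happens

for year, (f, r) in enumerate(zip(forecast_pd, realised_pd), start=1):
    provision = f * lgd * exposure
    loss = r * lgd * exposure
    print(f"year {year}: provision {provision:5.2f}, "
          f"realised loss {loss:5.2f}, shortfall {loss - provision:+6.2f}")
```

The provisions are higher than an incurred loss model would deliver, but the shortfall in the surprise years still has to be absorbed by capital, which is the procyclicality problem restated.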

The solution to pro cyclicality lies in capital not expected loss

So the real issue is what to do about that. Borio argues that ECL helps, but that you really need to address the problem via what he refers to as “capital filters” (what we might label counter cyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:

  • let accounting standards do what they do, don’t try to make them solve prudential problems;
  • construct a capital adequacy solution that complements the accounting based measurement of capital and profits.

Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.

In the interim, the main takeaway for me is that ECL is not enough on its own to address the problem of pro cyclicality and, more importantly, that it is dangerous to think it can.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck, but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the roles that skill and luck play in predicting or managing future outcomes, then this book has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to make you an expert, but Mauboussin argues that, in activities that depend on skill, real expertise comes only via deliberate practice – improving performance in response to feedback on the ways in which inputs generate outcomes.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule (a sketch in code follows this list):
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
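
Here is that minimal sketch. The blending below is my own gloss on Mauboussin’s rule; “skill_weight” is an invented parameter standing in for where the activity sits on the continuum, not a term he uses.

```python
# Shrink the specific evidence toward the base rate; the weight stands
# in for where the activity sits on the skill-luck continuum.

def blended_estimate(base_rate, specific, skill_weight):
    """skill_weight near 1: skill dominates, trust the specific case.
    skill_weight near 0: luck dominates, stay close to the base rate."""
    return base_rate + skill_weight * (specific - base_rate)

base_rate = 0.10   # long-run average outcome for this kind of activity
specific = 0.30    # what the case in front of you suggests

for w in (0.1, 0.5, 0.9):
    print(f"skill weight {w:.1f}: estimate "
          f"{blended_estimate(base_rate, specific, w):.2f}")
```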

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. Ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both types of activity but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean. Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Ask whether there is evidence that expert prediction is useful. When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is figuring out how large a sample size you need to determine if there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected, but it may also be that a reliable base rate simply does not exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
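
The quotes above describe a mechanism that is easy to simulate. Below is a toy sketch of cumulative advantage: twenty “songs” of identical quality, with each new sale allocated in proportion to the sales already banked. The parameters are arbitrary.

```python
# Toy cumulative-advantage simulation: early luck compounds because
# each sale is allocated in proportion to sales already banked.
import random

random.seed(42)
sales = [1] * 20                           # twenty songs of equal "skill"

for _ in range(10_000):
    winner = random.choices(range(len(sales)), weights=sales)[0]
    sales[winner] += 1

sales.sort(reverse=True)
print("top three:", sales[:3])             # a handful of big hits...
print("median:", sales[len(sales) // 2])   # ...and lots of small values
print("mean:", sum(sales) / len(sales))    # dominated by the few hits
```

Run it and a few songs capture a large share of total sales while the median song languishes, pulling the mean far above the typical outcome – the signature of the skewed distributions Mauboussin describes.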

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that proves my base belief is wrong
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation (a formula follows this list)
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can get better at untangling skill and luck but we must also recognise how much we don’t know. The realm may change so that the old rules no longer apply, and there are places where statistics offer no guide at all
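
Point 8 can be written as a one-line formula, with r the correlation between successive outcomes:

```latex
\hat{x}_{t+1} = \bar{x} + r\,(x_t - \bar{x})
```

When luck dominates, r is low and the best forecast sits close to the mean; where skill dominates, r is high and outcomes persist.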

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Canada innovates in the capital buffer space

The Canadian prudential regulator (OSFI) has made an interesting contribution to the capital buffer space via its introduction of a Domestic Stability Buffer (DSB).

Key features of the Domestic Stability Buffer:

  • Applies only to Domestic Systemically Important Banks (D-SIBs) and is intended to cover a range of systemic vulnerabilities not captured by the Pillar 1 requirement
  • Vulnerabilities currently captured by the buffer include (i) Canadian consumer indebtedness; (ii) asset imbalances in the Canadian market and (iii) Canadian institutional indebtedness
  • Replaces a previously undisclosed Pillar 2 loading associated with this class of risks (individual banks may still be required to hold a Pillar 2 buffer for idiosyncratic risks)
  • Initially set at 1.5% of Total RWA and will be in the range of 0 to 2.5%
  • Reviewed semi-annually (June and December), with the option to change more frequently in exceptional circumstances
  • Increases phased in while decreases take effect immediately

Implications for capital planning:

  • DSB supplements the Pillar 1 buffers (Capital Conservation Buffer, D-SIB surcharge and the Countercyclical Buffer)
  • Consequently, the DSB will not result in banks being subject to the automatic constraints on capital distributions that are applied by the Pillar 1 buffers
  • Banks will be required to disclose that the buffer has been breached and OSFI will require a remediation plan to restore it (the difference between the two regimes is sketched below)
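
To illustrate that difference, here is a minimal sketch. Apart from the DSB's initial 1.5% setting, the calibrations are invented for illustration.

```python
# Stylised CET1 requirement stack for a Canadian D-SIB. Pillar 1 buffer
# breaches bring automatic distribution constraints; a DSB breach brings
# disclosure and a remediation plan instead. Calibrations are invented
# apart from the DSB's initial 1.5% setting.

cet1_min = 4.5          # % of RWA
ccb = 2.5               # capital conservation buffer
dsib_surcharge = 1.0    # D-SIB surcharge
ccyb = 0.0              # countercyclical buffer, assumed switched off
dsb = 1.5               # domestic stability buffer

pillar1_trigger = cet1_min + ccb + dsib_surcharge + ccyb
dsb_trigger = pillar1_trigger + dsb

for cet1_ratio in (11.0, 9.0, 7.5):
    if cet1_ratio < pillar1_trigger:
        status = "automatic constraints on distributions"
    elif cet1_ratio < dsb_trigger:
        status = "DSB breached: disclose and submit a remediation plan"
    else:
        status = "all buffers intact"
    print(f"CET1 {cet1_ratio:.1f}%: {status}")
```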

What is interesting:

  • OSFI argues that translating the existing Pillar 2 requirement into an explicit buffer offers greater transparency which in turn “… will support banks’ ability to use this capital buffer in times of stress by increasing the market’s understanding of the purpose of the buffer and how it should be used”
  • I buy OSFI’s rationale for why an explicit buffer with a clear narrative is a more usable capital tool than an undisclosed Pillar 2 requirement serving the same purpose
  • OSFI retains a separate Countercyclical Buffer, but the Domestic Stability Buffer seems (to me at least) similar, though not identical, in its over-riding purpose to the approach that the Bank of England (BoE) has adopted for managing the Countercyclical Buffer.
  • A distinguishing feature of both the BoE and OFSI approaches is linking the buffer to a simple, coherent narrative that makes the buffer more usable by virtue of creating clear expectations of the conditions under which the buffer can be used.

Bottom line is that I see useful features in both the BoE and OSFI approaches to dealing with the inherent cyclicality of banking. I don’t see either proposal doing much to mitigate the cyclicality of banking itself, but I do see them offering more potential for managing the consequences of that cyclicality. Both approaches seem to me to offer material improvements over the Countercyclical Buffer as originally conceived by the BCBS.

It will be interesting to see if APRA chooses to adapt elements of this counter cyclical approach to bank capital requirements.

If I am missing something, please let me know …

From the Outside

The answer is more loan loss provisions, what was the question?

I had been intending to write a post on the potential time bomb for bank capital embedded in IFRS 9, but Adrian Docherty has saved me the trouble. He recently released an update on IFRS 9 and CECL titled Much Ado About Nothing or Après Moi, Le Déluge?

This post is fairly technical, so feel free to stop here if you are not a bank capital nerd. However, if you happen to read someone saying that IFRS 9 solves one of the big problems banks encountered during the GFC, be very sceptical. Adrian (and I) believe that is very far from the truth. For those not discouraged by the technical warning, please read on.

The short version of Adrian’s note is:

  • The one-off transition impact of the new standard is immaterial and the market has largely ignored it
  • Market apathy will persist until stressed provisions are observed
  • The dangers of ECL provisioning (procyclical volatility, complexity and subjectivity) have been confirmed by the authorities …
  • … but criticism of IFRS 9 is politically incorrect since the “correct” narrative is that earlier loan loss provisioning fulfils the G20 mandate to address the problem encountered during the GFC
  • Regulatory adaptation has been limited to transition rules, which are not a solution. We need a fundamentally revised Basel regime – “Basel V” – in which lifetime ECL provisions somehow offset regulatory capital requirements.

Adrian quotes at length from Bank of England (BoE) commentary on IFRS 9. He notes that the BoE’s policy intention is that the loss absorbing capacity of the banking system should not be impacted by the change in accounting standards, but he takes issue with the way the BoE has chosen to implement that policy. He also calls out the problem with the BoE instruction that banks should assume “perfect foresight” in their stress test calculations.

Adrian also offers a very useful deconstruction of what the European Systemic Risk Board had to say in a report it published in July 2017. He has created a table setting out what the report says in one column and what it means in another (see page 8 of Adrian’s note).

This extract from Adrian’s note calls into question whether the solution developed is actually what the G20 asked for …

“In official documents, the authorities still cling to the assertion that ECL provisioning is good for financial stability “if soundly implemented” or “if properly applied”. They claim that the new standard “means that provisions for potential credit losses will be made in a timely way”. But what they want is contrarian, anti-cyclical ECL provisioning. This is simply not possible, in part because of human psychology but, more importantly, because the standard requires justifiable projections based on objective, consensual evidence.

Surely the authorities know they are wrong? Their arguments don’t stack up.

They hide behind repeated statements that the G20 instructed them to deliver ECL provisioning, whereas a re-read of the actual instructions clearly shows that a procyclical, subjective and complex regime was not what was asked for.

It just doesn’t add up.”

There is of course no going back at this point, so Adrian (rightly I think) argues that the solution lies in a change to banking regulation to make Basel compatible with ECL provisioning. I will quote Adrian at length here:

 “So the real target is to change banking regulation, to make Basel compatible with ECL provisioning. Doing this properly would constitute a genuine “Basel V”. Yes, the markets would still need to grapple with complex and misleading IFRS 9 numbers to assess performance. But if the solvency calculation could somehow adjust properly for ECL provisions, then solvency would be stronger and less volatile.

And, in an existential way, solvency is what really matters – it’s the sine qua non of a bank. Regulatory solvency drives the ability of a bank to grow the business and distribute capital. Accounting profit matters less than the generation of genuinely surplus solvency capital resources.

Basel V should remove or resolve the double count between lifetime ECL provisions and one-year unexpected loss (UL) capital resources. There are many different ways of doing this, for example:

A. Treat “excess provisions” (the difference between one-year ECL and lifetime ECL for Stage 2 loans) as CET1

B. Incorporate expected future margin as a positive asset, offsetting the impact of expected future credit losses

C. Reduce capital requirements by the amount of “excess provisions” (again, the difference between one-year ECL and lifetime ECL for Stage 2 loans) maybe with a floor at zero

D. Reduce minimum regulatory solvency ratios for banks with ECL provisioning (say, reducing the Basel 8% minimum capital ratio requirement to 4%)

All of these seem unpalatable at first sight! To get the right answer, there is a need to conduct a fundamental rethink. Sadly, there is no evidence that this process has started. The last time that there was good thinking on the nature of capital from Basel was some 17 years ago. It’s worth re-reading old papers to remind oneself of the interaction between expected loss, unexpected loss and income. The Basel capital construct needs to be rebuilt to take into account the drastically different meaning of the new, post-IFRS 9 accounting equity number.”
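
To make options A and C concrete, here is a small sketch with invented numbers; it shows only the direction of each adjustment, not a calibration.

```python
# Invented figures illustrating two of the "Basel V" options above.

cet1 = 50.0                 # CET1 after lifetime ECL provisions are booked
rwa = 500.0
requirement = 0.08 * rwa    # Basel 8% minimum, pre-adjustment

one_year_ecl_stage2 = 3.0   # one-year ECL on Stage 2 loans
lifetime_ecl_stage2 = 8.0   # lifetime ECL on the same loans
excess = lifetime_ecl_stage2 - one_year_ecl_stage2

# Option A: treat the "excess provisions" as CET1
print(f"option A: CET1 ratio {(cet1 + excess) / rwa:.2%} "
      f"vs baseline {cet1 / rwa:.2%}")

# Option C: reduce the capital requirement by the excess, floored at zero
print(f"option C: requirement {max(requirement - excess, 0.0):.1f} "
      f"vs baseline {requirement:.1f}")
```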

Hopefully this post will encourage you to read Adrian’s note and to recognise that IFRS 9 is not the cycle mitigating saviour of banking it is represented to be. The core problem is not so much with IFRS 9 itself (though its complexity and subjectivity are issues) but that bank capital requirements are not constructed in a way that compensates for the inherent cyclicality of the banking industry. The ideas Adrian lists above are potentially part of the solution, as is revisiting the way the Countercyclical Capital Buffer is intended to operate.

From the Outside