Responsible lending

The conventional wisdom is that banks are bastards and the discussion of responsible lending in the Royal Commission’s Interim Report seems to offer further evidence to confirm this self-evident truth.

This assessment troubles me. Not so much the "banks are bastards" bit; that has been a staple of Australians' perception of their banks for as long as I can remember (which is quite a long time). What troubles me is that the Responsible Lending obligation is open to the interpretation that borrowers may be relieved of any responsibility for their own decisions, and indeed for telling the truth when applying for a loan.

Christopher Joye wrote an interesting opinion piece on this topic. Joye argues that the Royal Commission has adopted an "inexplicably one-eyed" interpretation of the laws that "projects the impression they are devoid of doubt" when in fact these laws are yet to be clarified by the courts. I have no particular legal insight into where the truth lies; i.e. whether the law is as black and white as portrayed in the Commission's Interim Report or more open to interpretation and clarification, as Joye asserts. That, I think, is something to pay close attention to as the Commission plays out.

I do however share Joye’s concern with the extent to which the Royal Commission interpretation seems to create moral hazard which potentially increases risks and cost for lenders.

I think he possibly overstates the potential impact of the risk transfer on depositors. Australian bank depositors in my view are pretty well insulated from the risk transfer, partly by virtue of the Australian government guarantee of small deposits, but mostly because of the deeply senior position deposits hold in the Australian bank loss hierarchy. Joye’s comments on the extent to which this shifting of the onus of responsibility impacts the cost of bank funding are more concerning.

I am not suggesting that there is anything wrong with laws that help bank customers make more informed choices and otherwise tip the balance of power back in their favour in the interests of a fairer, more competitive market for financial services. I just don’t see that home loan borrowers are as badly done by as is often suggested. Borrowers clearly do struggle with very high house prices in the major cities but that is not the banks’ fault. The process of getting a loan and comparing offers however seems as easy, open and competitive as it has been in my memory.

I think we can probably agree that banks should not take advantage of customers where there is a power imbalance. It is less clear that banks should be held responsible for policing borrower risk appetites, including their willingness to make sacrifices to own a home, by imposing limits that are stricter than what would be required to ensure that the credit risk is sound.

It is entirely possible that I am missing something here, but I can only call it the way I see it. Banks may be bastards but that does not mean that everything they do is wrong.

Tony

“The Origin of Financial Crises” by George Cooper

There are a lot of books on the topic of financial crises but this one, written in 2008, stands the test of time. At the very least, it offers a useful introduction to Minsky's Financial Instability Hypothesis. There is also an interesting discussion of the alternative approaches adopted by central banks to the problem of financial stability.

George Cooper argues that our financial system is inherently unstable and that this tendency is accentuated by a combination of factors:

  • The belief that market forces will tend to produce optimal allocations of capital, and
  • Monetary policy that seeks to smooth (and ideally eliminate) business cycle fluctuations in economic activity

Cooper draws heavily on Hyman Minsky's Financial Instability Hypothesis (FIH) which he argues offers much better insight into the operation of the financial system than the Efficient Market Hypothesis (EMH), which tended to be the more influential driver of economic policy in the years preceding the Global Financial Crisis.

Cooper uses these competing theories to explore what makes prices within financial markets move. The EMH maintains that the forces of supply and demand will cause markets to move towards equilibrium and hence that we must look to external forces to understand unexpected shocks and crises. Minsky’s FIH, in contrast, argues that financial markets can be driven by internal forces into cycles of credit expansion and asset inflation followed by credit contraction and asset deflation.
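
To make the contrast concrete, here is a toy simulation (my own sketch, not anything from the book) of the two views of price formation. Under the EMH-style dynamic, deviations from fair value are damped and instability can only come from outside; under the FIH-style dynamic, rising prices loosen credit and attract more demand, so the same shocks are amplified from within.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120
shocks = rng.normal(0, 1, n)  # identical external shocks for both regimes

def simulate(feedback):
    """Price deviation from fair value with a simple internal feedback coefficient."""
    p = np.zeros(n)
    for t in range(1, n):
        p[t] = feedback * p[t - 1] + shocks[t]
    return p

mean_reverting = simulate(feedback=0.5)     # EMH-flavoured: shocks die out
self_reinforcing = simulate(feedback=1.04)  # FIH-flavoured: boom/bust emerges

print(f"price range, mean reverting:   {np.ptp(mean_reverting):.1f}")
print(f"price range, self-reinforcing: {np.ptp(self_reinforcing):.1f}")
```

The only difference between the two runs is the internal feedback; the external shocks are identical, yet one system is stable and the other cycles into extremes.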

Cooper identifies the following ways in which financial systems can become unstable:

  • Markets characterised by supply constraints tend to experience price inflation which for a period of time can drive further increases in demand
  • Monetary policy which is oriented towards mitigating (and in some cases pre-empting) economic downturns can also amplify market instability (i.e. the Greenspan put makes the market less resilient in the long run)
  • Credit creation by private sector banks contributes to money supply growth; this in turn can facilitate growth in demand but there is no mechanism that automatically makes this growth consistent with the economy’s sustainable growth path

The point about some asset markets being prone to instability is particularly pertinent for banks that focus on residential property lending. Classical economic theory holds that increased prices should lead to increased supply and reduced demand, but this simple equilibrium model does not necessarily work for property markets. Property buyers more often reason that they need to meet the market because it will only get more expensive if they wait. Many of them will have already seen this happen and, as they contemplate paying more for a property that is not as nice as ones they underbid on, regret not meeting the market price earlier. The capacity of home builders to respond to the price signal is frequently constrained by a myriad of factors, and there is a long lead time when they do respond.

The argument Cooper makes rings very true for Australia and is very similar to the one that Adair Turner made in his book "Between Debt and the Devil". Cooper's (and Minsky's) argument that the pursuit of stability is not a desirable objective and that the system benefits from a modest amount of stress is similar to the argument made by Nassim Taleb in "Antifragile".

Cooper also discusses the different philosophies that central banks bring to the challenge of managing financial stability. The dominant view sees the management of inflation risk as the primary concern and places greater trust in the capacity of the market to self-correct any instability. The European Central Bank, in contrast, seems to have placed less faith in the market and perhaps been closer to Minsky.

Some quotes from the book will give a sense of the ideas being discussed:

“Through its role in asset price cycles and profit generation, credit formation (borrowing money for either consumption or investment) lies at the heart of the financial market’s fundamental instability”.

“Hyman Minsky said that “stability creates instability” referring to our tendency to build up an unsustainable stock of debt in times of plenty only for that debt to then destroy the times of plenty”

“For a system as inherently unstable as the financial markets, we should not seek to achieve perfect stability; arguably it is this objective that has led to today’s problems. A more sustainable strategy would involve permitting, and at times encouraging, greater short-term cyclicality, using smaller, more-frequent downturns to purge the system of excesses”

“Credit creation is the foundation of the wealth-generation process; it is also the cause of financial instability. We should not let the merits of the former blind us to the risks of the latter.”

I have made some more detailed notes on the book here.

Tony

Mortgage risk weights – fact check

It is frequently asserted that the major Australian banks have been “gifted” a substantially lower mortgage risk weight than the smaller banks. To be precise, the assertion is that the major banks are only required to hold capital based on a 25% risk weight versus 39% for smaller banks.

If you are not familiar with the arcane detail of bank capital adequacy, then you could be forgiven for concluding that this differential (small banks apparently required to hold 56% more capital for the same risk) is outrageous and unfair. While the risk weights for big banks are certainly lower on average than those required of small banks, I believe the difference in capital requirements is not as large as the simple comparison of risk weights suggests.

Bank capital requirements involve more than risk weights

To understand why this comparison of risk weights is misleading, it will be helpful to start with a quick primer on bank capital requirements. The topic can be hugely complex but, reduced to its essence, there are three elements that drive the amount of capital a bank holds:

  1. The risk weights applied to its assets
  2. The target capital ratio applied to those risk weighted assets
  3. Any capital deductions required when calculating the capital ratio

Problem 1 – Capital adequacy ratios differ

The comparison of capital requirements based on risk weights implicitly assumes that the regulator applies the same capital ratio requirement to all banks, but this is not the case. Big banks are targeting CET1 ratios based on the 10.5% Unquestionably Strong benchmark set by APRA while there is a greater range of practice amongst the smaller banks. Bendigo and Suncorp appear to be targeting a CET1 ratio in the range of 8.5 to 9.0% while the smaller of the small banks appear to be targeting CET1 ratios materially higher (say 15% or more).

If we confine the comparison to the alleged disadvantage suffered by Bendigo and Suncorp, then the higher risk weights they are required to apply to residential mortgages is substantially offset by the lower CET1 target ratios that they target (the 56% difference in capital required shrinks to something in the order of 30% if you adjust for the difference in target CET1 ratios).
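
A quick back-of-the-envelope check of that arithmetic, using only the figures quoted above (so treat it as illustrative rather than a precise comparison):

```python
# Capital held per $100 of mortgage lending = risk weight x target CET1 ratio
big_rw, big_cet1 = 0.25, 0.105        # IRB risk weight, Unquestionably Strong target
small_rw, small_cet1 = 0.39, 0.0875   # standardised risk weight, mid-point of 8.5-9.0%

big_capital = 100 * big_rw * big_cet1        # ~$2.63 per $100 lent
small_capital = 100 * small_rw * small_cet1  # ~$3.41 per $100 lent

print(f"raw risk weight comparison: {small_rw / big_rw - 1:.0%} more capital")
print(f"adjusted for CET1 targets:  {small_capital / big_capital - 1:.0%} more capital")
```

The raw comparison gives the headline 56%; adjusting for the different CET1 targets shrinks it to roughly 30%.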

Broadening the comparison to the smaller banks gets even more interesting. At face value, the much higher CET1 ratios they appear to target suggest that they are doubly penalised in the required capital comparison, but you have to ask why they are targeting such high CET1 ratios. One possible explanation is that their smaller, less diversified mortgage exposures are in fact more risky than the more diversified portfolios maintained by their larger competitors.

Problem 2 – You have to include capital deductions

I recognise this is quite technical but, in addition to the capital tied to the risk weight, the big banks are also required to hold capital for a capital deduction linked to the difference between their loan loss provisions and a regulatory capital value called "Regulatory Expected Loss". This capital deduction increases the effective risk weight. The exact amount varies from bank to bank but I believe it increases the effective capital requirement by 10-12% (i.e. an effective RW closer to 28%). My understanding is that small banks are not required to make the same capital deduction.
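
Extending the sketch above with this deduction (taking the mid-point of the 10-12% range I have assumed):

```python
big_rw, big_cet1 = 0.25, 0.105
small_rw, small_cet1 = 0.39, 0.0875

rel_uplift = 0.11                         # assumed mid-point of the 10-12% range
effective_rw = big_rw * (1 + rel_uplift)  # ~27.8%, i.e. "closer to 28%"

gap = (small_rw * small_cet1) / (effective_rw * big_cet1) - 1
print(f"effective IRB risk weight: {effective_rw:.1%}")
print(f"remaining capital gap:     {gap:.0%}")
```

On these illustrative numbers, the headline 56% gap shrinks to something under 20% before we even get to Problems 3 to 5.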

Problem 3 – The Standardised risk weights for residential mortgages seem set to change

A complete discussion of the RW difference should also take account of the fact that APRA has proposed to introduce lower RW categories for the smaller banks, such that their average RW may be lower than 39% in the future. I don't know what the average RW for small banks would be under these new risk weights, but that is a question you could put to the banks that use the 39% figure without acknowledging this fact.

Problem 4 – The risk of a mortgage depends on the portfolio not the individual loan

The statement that a loan is the same risk irrespective of whether it is written by a big bank or small bank sounds intuitively logical but is not correct. The risk of a loan can only be understood when it is considered as part of the portfolio the bank holds. Small banks will typically be less diversified than a big bank.

Problem 5 – What about the capital required for Interest Rate Risk in the Banking Book (IRRBB)?

I don’t have sufficient data to assess how significant this is, but intuitively I would expect that the capital that the major banks are required to hold for IRRBB will further narrow the effective difference between the risk weights applied to residential mortgages.

Summing up

My aim in this post was not to defend the big banks but rather to try to contribute some of the knowledge I have acquired working in this area to what I think is an important but misunderstood question. In the interests of full disclosure, I have worked for one of the large Australian banks and may continue to do work for them in the future.

On a pure risk basis, it seems to me that the loan portfolio of a large bank will tend to be more diversified, and hence lower risk, than that of a smaller bank. It is not a “gift” for risk weights to reflect this.

There is a legitimate debate to be had regarding whether small banks should be given (gifted?) an advantage that helps them compete against the big banks. That debate however should start with a proper understanding of the facts about how much advantage the large banks really have and the extent to which their lower risk weights reflect lower risk.

If you disagree tell me what I am missing …

The secret to growth may be how your economy deals with downside risk

Andrew Haldane’s work is always worth reading and a recent speech he made titled “Ideas and Institutions – A Growth Story” is no exception.

Haldane’s revised story of growth

Haldane starts with the observation that GDP growth matters and not just for narrow materialistic reasons.

“It is now pretty well-established that growth is a vital ingredient, indeed pre-requisite, for meeting many of the broader societal objectives many would view as important to our longer-term health, wealth and happiness. While not an end in itself, economic growth appears to be a vitally important means of achieving those societal ends.”

If you accept this premise then understanding how to foster growth matters. One of the clues has been to look at what drove the step change in the pattern of cumulative long term GDP growth that started around 1750. Haldane notes that the traditional explanation was that there had been a happy confluence of ideas and innovation starting with the Industrial Revolution that resulted in growth taking off.

Haldane recounts that he subscribed to this story of growth but two facts prompted him to reconsider.

“I thought I understood the story of economic growth, its drivers and determinants. But recently I have changed my mind. I have a new story of growth.”

Fact number one is that innovation did not just start in 1750 …

”… waves of innovation, big and small, have been lapping the shores of society for the entirety of human civilisation… while ideas and innovation may well be a necessary condition for economic growth, the historical record suggests they may not themselves have been sufficient. Other forces appear to have been at play, translating these ideas into sustained growth in living standards.”

Fact number two was analysis of long term growth provided by economic historians Steve Broadberry and John Wallis.

Indeed, Haldane states it was probably this new perspective, above all others, that led him to change his own story about growth. This “fact” must be treated with caution given the challenge of seeking to collate a picture of GDP growth over the past 1000 years. However, it does offer an intriguing and plausible (to me at least) new perspective on one of the keys to sustaining the kinds of growth that underpin modern standards of living.

Broadberry and Wallis found that, even prior to the Industrial Revolution, economies experienced notable periods of strongly positive growth. This has continued post the Industrial Revolution (albeit expansion-phase growth has been somewhat lower than it was prior to the Industrial Revolution). What changed post the Industrial Revolution was a dramatic fall in both the probability and the cost of GDP contractions.

“Since 1750, recessions have become far less frequent and less painful. It is the avoidance of deep recessions that differentiates the Golden Era from its Malthusian predecessor.”

The alternative story Haldane finds persuasive is that the real game changer altering the long term cumulative growth trajectory was just as much an “Institutional Revolution” as an Industrial Revolution. Innovation in the institutional structures of the economy reduced the incidence and severity of recessions but also helped to mitigate the social impact of the disruption associated with the technological change that was driving increases in productivity.

“… the story that better fits the facts appears to be one in which the conveyor belt of ideas and innovation has been continuous over the centuries, causing lengthy if lumpy ideas-fuelled expansions. But whereas prior to the Industrial Revolution this conveyor belt was regularly halted by recessions, more recently these interruptions have been far fewer and less costly.

Put differently, the real revolution in living standards after 1750 came about not exclusively, or perhaps even mainly, from the surge in ideas and technologies. Rather, it resulted from societies having found some means of avoiding the subsequent recessionary bullets. Prior to the Industrial Revolution, these killed expansions dead. After it, societies appear to have found some effective means of dodging them.”

What exactly is the Institutional Revolution that Haldane sees as playing a key role in facilitating sustainable growth?

Haldane offers a definition by Douglass North as a good starting point:

Institutions are “… humanly devised constraints that structure political, economic and social interactions”. So defined, institutions are social infrastructure. They include formal or legal institutions, like Parliaments, judiciaries, central banks, social safety nets and schools. But they also include less formal associations and groups, such as universities, trade unions, guilds and charities.

The Industrial Revolution resulted in new forms of physical capital but the Institutional Revolution is reflected in a broader set of “capitals” not just physical capital (plant and machines) but human (skills and expertise), intellectual (ideas and technologies), infrastructural (transport and legal systems), social (co-operation and trust) and institutional (national and civic, private and public) capital.

History suggests each of these capitals may have played an important supporting role in the story of growth. Ideas alone, without the support of one or more of these broader capitals, have historically run aground. For example, in the UK many of the foundations for growth after the Industrial Revolution were laid in the centuries preceding it. It was on this platform of “capitals”, plural, that ideas and innovation then built.

What flows from this revised story of growth

Haldane identifies a number of institutions but, as this speech is addressed to a group of university students, he naturally focuses on the role universities can play in managing the downside associated with the current wave of technological innovation.

“The story of growth is a story with two “i”s – ideas and institutions. The Fourth Industrial Revolution will expand the range of ideas, perhaps more than any of its predecessors. It may also expand the range of workers who suffer its side-effects, perhaps more so than any of its predecessors. In the past, new institutions have emerged to cushion this painful transition, limiting the recessionary hit to societies.

Historically, doing so appears to have held the key to sustainable growth. If this time’s technological transition is as great as any previously, securing sustainable growth will need new institutions to manage this transition and mitigate its societal side-effects. I have speculated on one area where that next institutional wave might usefully break – universities like this one, as new centres of lifelong learning and technological diffusion. In future, institutional innovation will be every bit as important as technological innovation if that gift of growth is to keep on giving.”

“What experience since the Industrial Revolution has taught us is that this risk can be mitigated by an appropriate institutional response. To be effective in curbing recession risk, that response should have as its objectives, first, speeding-up the process of reskilling by workers (“enabling”) and, second, cushioning the impact of new technologies on displaced companies and their workers (“insuring”).

If true, this alternative explanation for why the growth trajectory changed carries important implications for understanding the future challenges of technology and for devising the future policies and institutions necessary to meet them”

Loan loss provisioning gets more procyclical

I know we are expected to believe that Expected Loss loan provisioning (IFRS 9 / CECL) will make the banking system less procyclical. I very much doubt that is true and expect it will, if anything, make the system more rather than less procyclical.

I recently flagged a note by Adrian Docherty that set out why this is likely to be so. Now I offer Tony Hughes of Moody’s Analytics who wrote a good piece here explaining why this new approach to loan loss provisions will in fact add to procyclicality. It is a quick read but worth the effort.
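
For what it is worth, here is a stylised illustration of the mechanism (my own sketch of the staging effect, not Hughes' model): under IFRS 9, loans migrate from a 12-month expected loss (Stage 1) to a lifetime expected loss (Stage 2) when credit risk increases significantly, so provisions jump at exactly the point in the cycle when capital is hardest to raise.

```python
# Invented numbers throughout - the point is the shape, not the levels
portfolio = 100e9  # $100bn of loans

def provisions(stage2_share, pd_12m, lifetime_multiple=5, lgd=0.20):
    """Total ECL allowance with a given share of loans in Stage 2 (lifetime ECL)."""
    stage1 = (1 - stage2_share) * portfolio * pd_12m * lgd
    stage2 = stage2_share * portfolio * pd_12m * lifetime_multiple * lgd
    return stage1 + stage2

benign = provisions(stage2_share=0.05, pd_12m=0.005)
downturn = provisions(stage2_share=0.30, pd_12m=0.015)  # PDs re-estimated up too

print(f"benign: ${benign/1e9:.2f}bn, downturn: ${downturn/1e9:.2f}bn, "
      f"a {downturn/benign:.1f}x jump before any loss is actually written off")
```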

Tony

Capital adequacy – an option to add transparency and flexibility into the “Unquestionably Strong” mix

Two of my earlier posts (here and here) discussed the potential to improve the capital adequacy framework by revisiting the calibration and operation of regulatory capital buffers. Some of the issues discussed in those posts are touched on in a discussion paper APRA has released titled “Improving the transparency, comparability and flexibility of the ADI capital framework”.

APRA is open to alternatives but sets out two options for discussion

In APRA’s words, the DP outlines

“… options to modify the ADI capital framework to improve transparency and comparability of reported capital ratios. The main conceptual approaches APRA is considering and seeking feedback on are:

  • developing more consistent disclosures without modifying the underlying capital framework; and

  • modifying the capital framework by adjusting the methodology for calculating capital ratios.”

The First Approach – “Consistent disclosure” – seems to be a beefed-up version of the status quo in which APRA gets more directly involved in the comparability process by adding its imprimatur to the internationally harmonised ratios some Australian banks currently choose to disclose as an additional informal measure of capital strength.

“Under this approach, ADIs would continue to determine regulatory capital ratios using APRA’s definitions of capital and RWA. However, APRA would also specify a methodology for ADIs to determine certain adjustments to capital and RWA that could be used for disclosure (Pillar 3) purposes. As noted above, the methodology would focus on aspects of relative conservatism that are material in size and able to be calculated simply and objectively.”

APRA argues that “The supplementary disclosure would allow all stakeholders to better assess the capital strength of an ADI on a more comparable basis. However, it would result in two APRA-endorsed capital ratios: an APRA regulatory capital ratio to be compared against minimum requirements, and an additional disclosure-only capital ratio for, in particular, international comparison.”

A Second Approach – “Capital ratio adjustments” would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA.

The DP explains that this “… alternative approach would involve APRA modifying the calculation of regulatory capital ratios to utilise more internationally harmonised definitions of capital and RWA. This would involve removing certain aspects of relative conservatism from ADIs’ capital ratio calculations and lifting minimum regulatory capital ratio requirements in tandem. This increase in regulatory capital ratio requirements could be in the form of a transparent adjustment to minimum capital ratio requirements—for the purposes of this paper, such an adjustment is termed the ‘APRA Overlay Adjustment’.”

“To maintain overall capital adequacy, the APRA Overlay Adjustment would need to be calculated such that the total dollar amount of Prudential Capital Requirement (PCR) and Capital Conservation Buffer (CCB) would be the same as that required if these measures were not adopted. In other words, the risk-based capital requirements of ADIs would be unchanged in absolute dollar terms, maintaining financial safety, but adjustments to the numerator and the denominator of the capital ratio to be more internationally comparable would increase reported capital ratios.”
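
A simple numeric illustration of how the Overlay Adjustment would work (all figures invented for the sketch):

```python
capital = 40.0          # $bn of CET1, unchanged under either approach
rwa_apra = 400.0        # $bn RWA under APRA's conservative definitions
rwa_harmonised = 340.0  # $bn RWA under more harmonised definitions (assumed)

ratio_apra = capital / rwa_apra              # 10.0% as reported today
ratio_harmonised = capital / rwa_harmonised  # ~11.8% under Approach 2

# The overlay is the increase in the ratio requirement needed so that the
# same $40bn of capital is required against the lower RWA base
overlay = ratio_harmonised - ratio_apra
print(f"reported ratio: {ratio_apra:.1%} -> {ratio_harmonised:.1%} "
      f"(overlay adjustment: +{overlay:.1%} of RWA)")
```

The dollar amount of required capital is unchanged; only the ratio at which it is expressed moves.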

APRA clarify that

“These options are not mutually exclusive, and there is potential for both approaches to be adopted and applied in different areas.”

Approach 2 has some clear advantages …

It would make the “unquestionably strong” capital ratios more directly comparable with international peers, thereby reducing the potential for the perception of this strength to be obscured or misunderstood.

“Perception” is the key word here. It matters that the strength of Australian banks is simple, transparent and evident, rather than something the perceiver must reconstruct from a sequence of relatively obscure and complex adjustments. More importantly, perception matters most when the system is under stress and people do not have the time, or the inclination, to look beyond the reported numbers.

The adjusted capital ratio approach also provides an opportunity to increase the flexibility of the ADI capital framework in times of stress, but only to the extent that the Overlay Adjustment is applied to the capital buffer rather than to increasing the minimum capital requirements. Higher minimum requirements would do nothing to enhance flexibility and may even be a backward step.

I believe a non-zero baseline for the CCyB would also enhance the flexibility of the capital framework by virtue of the fact that it improves the odds that the banks (and APRA) have a flexible buffer in place before it is needed. This opportunity for enhanced flexibility is an option under both approaches so long as the Unquestionably Strong benchmark maintains a material surplus over the Capital Conservation Buffer, as it currently does.

But also some challenges …

APRA notes that the Adjusted Capital Ratio approach:

  • May significantly increase operational complexity for ADIs by virtue of the fact that the application of the APRA Overlay Adjustment would result in variable capital ratio requirements,

  • Potentially results in variable minimum capital requirements which introduces complexity in analysing capital buffers and may undermine the desired transparency, and

  • Reduces the dollar value of the 5.125 per cent (of RWA) loss absorption trigger point.

Do the advantages of the Adjusted Capital Ratio approach outweigh the challenges?

The short answer, I think, is yes … albeit with some qualifications.

So far as I can see, the added complexity only enters the discussion to the extent that some of the APRA Overlay Adjustment is applied to increase the minimum capital requirement. Most, if not all, of the operational complexity is avoided if the “Overlay Adjustment” is confined to increasing the size of the capital buffer.

Conversely, the benefits of increased responsiveness (or risk sensitivity) and flexibility lie in an increased capital buffer.

It follows then that the best way to pursue this approach is for any harmonised adjustments to the reported capital ratio to be confined to a higher CCB. This raises the question of whether all of the Overlay Adjustment should be applied to the capital buffer. I address that question in my responses below to some of the questions APRA has posed to solicit industry feedback.

One issue not covered in the Discussion Paper in any detail is that the capital ratios under Approach 2 will be more sensitive to any changes in the numerator. This is a simple mathematical consequence of RWA being lower if more harmonised measures are adopted. I do not see this as a problem, but the heightened risk sensitivity of the framework needs to be clearly understood beforehand to minimise the potential for larger changes in capital ratios to be misunderstood. A more risk sensitive capital ratio may even be an advantage. This may not be obvious, but there is a body of research which suggests a more responsive, more volatile, measure of capital adequacy can be beneficial to the extent that it prompts greater risk awareness on the part of bank management and external stakeholders. Greg Ip's book "Foolproof" offers an introduction to some of this research, but a simple example illustrating the point is the way that the benefits of improved braking in modern cars are offset to some extent by people driving faster.
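
Continuing the invented numbers from the sketch above, the heightened sensitivity is easy to demonstrate:

```python
shock = 1.0  # the same $1bn fall in CET1 under both definitions
for label, rwa in [("APRA RWA ($400bn)", 400.0), ("harmonised RWA ($340bn)", 340.0)]:
    print(f"{label}: ratio falls {shock / rwa:.2%}")
```

The same dollar loss moves the harmonised ratio by more because the denominator is smaller.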

APRA concludes its discussion paper with some direct questions.

There are 8 questions in all but in the context of this post I will have a go at addressing 3 of them, questions 2, 7 and 8.

Question 2: If APRA were to apply a combination of Approach 1 and Approach 2, which aspects of relative conservatism are best suited to be treated under Approach 2?

If you accept the argument that the minimum capital requirement should continue to be a set value (i.e. not subject to periodic adjustment), then the aspects of relative conservatism best suited to Approach 2 are those which can reasonably be assigned to an increase in, and regular adjustment of, one or more of the capital buffers.

Running through the list of adjustments currently applied to generate the internationally harmonised capital ratios, we can distinguish three broad categories of APRA conservatism:

  1. The extra credit-risk-related losses a bank might expect to experience under a very severe recession or financial-crisis-style scenario, but not necessarily a gone-concern scenario where losses extend into the tail of the loss distribution
  2. Assets whose value depends on the ADI being a going concern and which consequently are less certain to be realisable if the bank is in liquidation or has otherwise reached a point of non-viability
  3. Capital deductions intended to avoid "double counting" capital invested outside the ADI

There are very few areas of black and white in the response to this question, but the first group are the items of APRA conservatism that I think have the clearest claim to be included in the capital buffer. These reflect potential loss scenarios that are conservative but still within the domain of plausibly severe downturns in the business cycle; this would encompass the following capital ratio adjustments:

  • the 20 per cent loss given default (LGD) portfolio constraint required for residential mortgage exposures;
  • the LGD parameter for unsecured non-retail exposures;
  • credit conversion factors (CCFs) for undrawn non-retail commitments;
  • use of supervisory slotting and the scaling factor for specialised lending;
  • risk weights for other retail exposures covered by the standardised approach to credit risk; and
  • the exchange rate used to convert Euro-denominated thresholds in the Basel capital framework into Australian dollars.

The second category covers assets which have a value if the bank is a going concern but cannot necessarily be relied upon in non-viability scenarios; i.e.

  • deferred tax assets arising from timing differences;
  • capitalised expenses and transaction costs; and
  • the capital requirement applied by APRA for IRRBB (I am open to arguments that I am being too harsh on IRRBB).

The third category captures capital that is committed to risks where the bank is taking a first loss exposure, including:

  • investments in other financial institutions;
  • holdings of subordinated tranches of securitisations; and
  • investments in commercial entities.

Another way to explore this question is to map these three categories to the traditional graphic expression of a bank loss distribution and establish whether they are expected to lie:

  • Closer to the middle of the loss distribution (sometimes framed as a 1 in 25 year downturn or the kinds of losses we expect in a severe downturn), or
  • Closer to the “tail” of the loss distribution (typically expressed as a 1 in 1000 year loss in regulatory capital terms).

To be clear, I am not seeking to ascribe any level of precision to these statistical probabilities; simply to distinguish between the relative likelihood of the items of conservatism that APRA has embedded in its current measure of capital adequacy. These three categories tend to be treated as equivalent under the current approach, and enhanced disclosure per Approach 1 will do nothing to address this conflation of risks.

Question 7: Would increasing the size of capital buffers (either by increasing the CCB or by setting a non-zero baseline CCyB) relative to PCR appropriately balance capital strength with financial stability through the cycle?

I have advocated the benefits of a non-zero baseline CCyB in previous posts. One of those posts focused on the approach adopted by the Bank of England, where I identified two advantages.

Firstly, it directly addresses the problem of failing to detect/predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system however retains its capacity to surprise even its most expert and informed observers so I believe it is better to have the foundations of a usable counter cyclical capital buffer in place as soon as possible after the post crisis repair phase is concluded rather than trying to predict when it might be required.

The BOE approach still monitors a range of core indicators for the CCyB, grouped into three categories:

  • The first category includes measures of ‘non-bank balance sheet stretch’, capturing leverage in the broader economy and in the private non-financial (ie household and corporate) sector specifically.

  • The second category includes measures of ‘conditions and terms in markets’, which capture borrowing terms on new lending and investor risk appetite more broadly.

  • The third category includes measures of ‘bank balance sheet stretch’, which capture leverage and maturity/liquidity transformation in the banking system.

However the BOE implicitly accepts that it can’t predict the future so it substitutes a simple, pragmatic and error resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, is simpler to administer and less prone to error than the standard BCBS approach. The BOE might still miss the turning point but it has a head start on the problem if it does.

The BOE also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the “annual cyclical scenario”

The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

I have discussed the BOE approach at length, but the Canadian supervisor has also introduced some interesting innovations, worth considering as part of this review, in the way that it uses a capital buffer to address the systemic risk of large banks.

The second reason I favour a non-zero baseline is that I believe it is likely to result in a more "usable" buffer once risk crystallises (not just systemic risk) and losses start to escalate.

In theory, the standard capital conservation buffer (CCB) introduced under Basel III can absorb any unexpected increase in losses and allow banks the time to progressively rebuild the buffer when economic conditions improve.

In practice, the upper boundary of the CCB acts as a de facto minimum requirement, such that banks face strong market pressure to immediately rebuild the buffer, potentially at material cost to shareholders.

There are no guarantees for what happens to banking systems under stress, but a flexible buffer that is sensitive to the state of the credit cycle is I think far more fit for purpose.

It is important to note that a non-zero CCyB is an option under both approaches. There is potentially enough surplus capital in the Unquestionably Strong calibration for a non-zero CCyB to be introduced without requiring banks to raise any more capital. This would be so under either of the approaches that APRA has outlined.
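
The arithmetic of that claim, as I understand the current settings (worth checking against the prudential standards before relying on it):

```python
cet1_min = 4.5             # Basel III minimum CET1, % of RWA
ccb = 2.5 + 1.0            # capital conservation buffer plus the 1% D-SIB add-on
unquestionably_strong = 10.5

headroom = unquestionably_strong - (cet1_min + ccb)
print(f"surplus above the top of the CCB: {headroom:.1f}% of RWA")
```

That 2.5 percentage points of headroom is, potentially, room for a non-zero CCyB without any new capital raising.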

So a larger buffer would be desirable from the perspective of increased comparability and transparency, but the advantages of a non-zero CCyB could also be pursued under the Unquestionably Strong status quo or Approach 1.

Question 8: What may be some of the potential impacts if APRA increases the prescribed loss absorption trigger point above 5.125 per cent of RWA?

The rationale for increasing the PONV Trigger is that APRA believes it is important to preserve the value of the trigger in dollar terms.

I can see that it is important to have a PONV trigger well before a bank reaches the point of insolvency (i.e. where liabilities exceed assets).

It is less clear that the reduction in the dollar value of the trigger point is sufficiently material to matter.

What really matters is the amount of contingent capital available to be converted into common equity if the PONV conversion trigger is pulled.

In the absence of this source of new capital, the fact that a bank has X billion dollars more or less of book equity (according to the financial accounts) at the point of deemed non-viability is arguably irrelevant to whether it remains a going concern.

I am also pretty sure that we do not want the operational complexity associated with a PONV trigger that moves around over time as a result of seeking to compensate for the impact of the Overlay Adjustment on capital deductions and RWA.

Do Australian banks really earn a 40% ROE on mortgage lending?

This claim features prominently in the Productivity Commission's report on "Competition in the Australian Financial System" but the only source I can find is a newspaper article quoting a Morgan Stanley report. Mortgage lending is clearly amongst the more profitable areas of Australian banking but I would be surprised if the real return was as high as the Productivity Commission appears to believe. It is difficult to challenge the number given the absence of detail offered by the Productivity Commission, but here is my checklist of things which I think the calculation is missing (a rough sketch of the arithmetic follows the list):

  • I suspect the capital part of the ROE calculation is based on a 25% RW but the actual capital an IRB bank is required to hold is higher than this because they also face a capital (CET1) deduction for any shortfall between their loan loss provision and the Regulatory Expected Loss (REL) assigned to these loans. This REL deduction probably adds another 3 percentage points to the effective RW an IRB bank applies.
  • The 40% ROE is also, I suspect, based on the current point-in-time loan loss allowance (say circa 2-4bp per annum). Banks would dearly love to live in a world where the expected loan loss was always this low, but the real return on equity for these loans should take account of what will happen to loan losses when the economic cycle turns down. Exactly how bad this loss rate will be is anyone's guess but I would not be surprised if it were at least 5 times higher than current rates, and even higher if we encounter another financial crisis.
  • It is also not clear to me what kind of cost to income ratio is being factored into the calculation.
  • Nor is it clear what charges for the liquidity costs associated with funding these assets have been included.
  • Finally, the overall ROE for the major banks is nowhere near 40% even though residential mortgages dominate their balance sheets.
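
To illustrate how much these items matter, here is a rough sketch of the ROE arithmetic with invented inputs (I do not know what Morgan Stanley actually assumed, which is part of the problem):

```python
def mortgage_roe(rw, cet1, margin=0.015, cost_income=0.30, loss_rate=0.0003, tax=0.30):
    """ROE per $1 of loans: after-tax profit / allocated CET1."""
    capital = rw * cet1  # equity allocated per $1 lent
    profit = (margin * (1 - cost_income) - loss_rate) * (1 - tax)
    return profit / capital

# Headline-style inputs: 25% RW, point-in-time losses of ~3bp
print(f"headline-style: {mortgage_roe(rw=0.25, cet1=0.105):.0%}")
# Adjusted: ~28% effective RW (REL deduction), through-cycle losses 5x higher
print(f"adjusted:       {mortgage_roe(rw=0.28, cet1=0.105, loss_rate=0.0015):.0%}")
```

Even with fairly generous margin and cost assumptions I struggle to get the headline number anywhere near 40%, and the adjustments on the checklist pull it materially lower again.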

Tell me what I am missing but something does not add up …

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure-wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad; [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Use the data to test whether the activity conforms to your original judgement of how skill and luck combine to generate the outcomes (a sketch of the weighting rule follows this list)
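
A minimal numeric sketch of that weighting rule (the `skill_weight` label is mine, not Mauboussin's):

```python
def estimate(specific, base_rate, skill_weight):
    """skill_weight near 1: skill dominates, trust the specific evidence.
    skill_weight near 0: luck dominates, shrink toward the base rate."""
    return skill_weight * specific + (1 - skill_weight) * base_rate

fund_return, base_rate = 0.15, 0.07  # one hot year vs the long-run average
print(f"luck-dominated activity (w=0.2):  {estimate(fund_return, base_rate, 0.2):.1%}")
print(f"skill-dominated activity (w=0.8): {estimate(fund_return, base_rate, 0.8):.1%}")
```

In a luck-dominated activity the prediction barely moves from the base rate; in a skill-dominated one it stays close to the observed performance.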

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) the "Three Question" approach, 2) simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. First ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both activities but luck is likely to be a more significant factor in the latter group.
  2. Determining the rate of reversion to the mean: Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conform to a power law (i.e. there are lots of small values and relatively few large values).
  3. Is there evidence that expert prediction is useful? When experts have wide disagreement and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is to figure out how large a sample size you need to determine if there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has simply not been collected, but it may also be that a reliable base rate does not exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
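
A toy cumulative-advantage simulation (a Pólya-urn-style sketch of my own, not code from the book) makes the point: give 1,000 songs identical "skill" and let each new listener choose in proportion to existing popularity, and a highly skewed outcome emerges anyway.

```python
import numpy as np

rng = np.random.default_rng(1)
counts = np.ones(1000)  # every song starts with one listen and equal quality
for _ in range(20_000):
    # each new listener picks in proportion to current popularity
    pick = rng.choice(1000, p=counts / counts.sum())
    counts[pick] += 1

counts.sort()
top_share = counts[-10:].sum() / counts.sum()
print(f"top 1% of songs take {top_share:.0%} of all listens")
print(f"median song: {np.median(counts):.0f} listens, biggest: {counts.max():.0f}")
```

Skill is identical by construction; the dispersion in outcomes is pure luck compounded by social influence.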

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
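
A stylised loan-loss history shows why the "average" is so treacherous here (numbers invented for the sketch):

```python
import numpy as np

# 38 benign years at ~3bp of the portfolio, two downturn years at 150bp
losses = np.array([0.0003] * 38 + [0.015] * 2)

print(f"benign-years average: {losses[:38].mean():.2%}")  # what a short sample shows
print(f"full-cycle average:   {losses.mean():.2%}")       # roughly 3.5x higher
```

Any estimate of expected loss calibrated on a sample that misses the downturn years will systematically understate the true through-the-cycle average.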

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that proves that my base belief is wrong
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, what exactly the mean is. Note that extreme events are unlikely to be repeated and most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but also must recognise how much we don’t know. We must recognise that the realm may change such that old rules don’t apply and there are places where statistics don’t apply

All in all, I found Mauboussin's book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Stress Testing – Do (really) bad things happen to good banks?

This post will focus on stress testing in response to some recent papers the RBNZ released (July 2018), describing both its approach to stress testing and the outcomes from the 2017 stress test of the major banks, and a speech by Wayne Byres (APRA) which offered some detail of the Australian side of the joint stress testing undertaken by APRA and the RBNZ. I intend to make some observations related to this specific stress testing exercise but also some broader points about the ways that stress testing is currently conducted. The overriding point is that the cyclical scenarios employed to calibrate capital buffers seem to focus on "what" happened, with less consideration given to "why" the historical episodes of financial stress the scenarios mimic were so severe.

There will be technical detail in this post but the question, simply put, is to what extent do really bad things happen to good banking systems? Paraphrased in more technical language, are we calibrating for scenarios based on the impact of some random exogenous shock on a sound banking system, or does the scenario implicitly assume some systemic endogenous factors at play that made the financial system less resilient in the lead up to the shock? Endogenous factors may be embedded in the balance sheets of the banks (e.g. poor credit quality amplified by excessive credit growth) or perhaps they are a feature of the economic system (e.g. a fixed exchange rate regime such as confronted many European economies during the GFC) that may or may not be universally relevant. I am focusing on the RBNZ stress test to explore these points mostly because they offered the most detail but I believe their approach is very similar to APRA’s and the observations apply generally to macro economic stress testing.

No prizes for guessing that I will be arguing that the kinds of really severe downturns typically used to calibrate capital buffers are usually associated with conditions where endogenous forces within the banking system are a key element in explaining the extent of the asset price declines and weak recoveries and that the severity of some historical scenarios was arguably exacerbated by unhelpful exchange rate, monetary or fiscal policy settings. This is not to say that we should not be using very severe downturns to calibrate the resilience of capital buffers. My argument is simply that recognising this factor will help make more sense of how to reconcile the supervisory approach with internal stress testing and how best to respond to the consequences of such scenarios.

The RBNZ approach to stress testing

The RBNZ characterises its approach as being at the less intensive end of the spectrum of supervisory practice, so “stress tests are used to provide insights into the adequacy of bank capital buffers and can highlight vulnerabilities at the bank wide level or in its various loan portfolios” but “… the use of individual bank results in setting capital buffers and promoting market discipline is relatively limited”. The RBNZ stress tests fall into three categories: 1) cyclical scenarios, 2) exploratory stress tests and 3) targeted tests.

This post concentrates on the cyclical scenario, which was the centrepiece of the RBNZ’s 2017 stress test and the place where the question of what happened, and why it happened, is most at risk of getting lost amid the desire to make the test tough and the often daunting task of simply running the test and producing results.

The RBNZ states that the aim of a cyclical scenario is to help “… understand the resilience of participating banks to a macroeconomic downturn” so these scenarios “… mimic some of the worst downturns in advanced economies since World War 2, and typically feature sharp declines in economic activity and property prices, and stressed funding markets”. The repetition of the benchmark cyclical downturn scenario also allows the RBNZ “to track the resilience of the financial system over time”, although the scenario itself is expected to evolve (a point discussed below).

It is hard to argue with calibrating the resilience of the banking system to a very high standard of safety. That said, the concern I have with cyclical scenarios drawn from worst case historical events is that the approach tends to skip over the question of why the downturn of such severity occurred.

The RBNZ commentary does recognise the “… need to take account of the nature of the specific stress scenario” and for the cyclical scenario to “evolve based on new research and insights, such as the extent of over-valuation evident in property markets” and the possibility that “domestic monetary policy and a falling exchange rate would provide a significant buffer … that was unavailable during many of these stress episodes in countries without floating exchange rates“. “Exploratory” and “Targeted” stress testing may also be focussed on the endogenous risks embedded in the banking system without explicitly using that terminology.

So if the RBNZ, and APRA, are implicitly aware of the endogenous/exogenous risk distinction, then maybe I am just being pedantic but I would argue that greater clarity on this aspect of stress testing helps in a number of areas:

  • It can help to explain why there is often a gap between:
    • the severity of outcomes modelled internally (where the bank will probably assume its portfolios have robust credit quality and none of the systemic weaknesses that were responsible for the past episodes of severe financial weakness implicit in the downturn scenario adopted by the supervisors), and
    • the severity the regulator expects (possibly based on a sceptical view of the extent to which bank management has balanced risk and return against the reward of higher growth and market share).
  • The types of recovery actions that can be deployed, and the amounts of capital they contribute to the rebuilding process, are also very much shaped by the nature of the scenario. Scenarios shaped by endogenous factors embedded in the banks’ balance sheets or business models require much more substantial responses that are more costly, though cost can be a secondary issue when the scale of the challenge is so large.
  • Supervisors rightly focus on the need for banks to maintain the supply of credit to the economy but endogenous scenarios may actually require that some customers de-gear themselves and become less reliant on bank credit.

The RBNZ discussion of the 2017 stress test of the major banks focussed on the Phase 2 results and noted that (the arithmetic of this capital waterfall is reconciled in the short sketch after the list):

  • The four participating banks started the stress test with an aggregate CET1 ratio of 10.3% and an aggregate buffer ratio of 5.4%
  • The impact of the combined macroeconomic downturn and the operational risk event saw the aggregate CET1 ratio decline by 3.4 percentage points to 6.9% in the third year; driven in order of importance by:
    • Credit losses (including the large counterparty loss) – 6.6 ppts
    • Growth in RWA – 1.4 ppts
    • Dividends and other capital deductions – 1.4 ppts
    • The operational risk event for misconduct risk – 0.7 ppts
    • Underlying profits, which offset the gross decline of 10.1 ppts in the CET1 ratio by 6.7 ppts to arrive at the net decline of 3.4 ppts
  • Mitigating actions improved the aggregate CET1 ratio by 1.1 ppts by year three to 8%; these actions included 1) reductions in lending, 2) additional interest rate repricing and 3) operating expense reductions.
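For readers who like to check the arithmetic, here is a minimal sketch reconciling the published aggregates. The percentages are those quoted in the RBNZ outcomes paper; the code itself is purely illustrative and not something the RBNZ publishes.

```python
# Reconciling the aggregate CET1 waterfall from the RBNZ 2017 stress test.
# All figures are the published aggregates (percentages / percentage points).

start_cet1 = 10.3                # aggregate CET1 ratio at the start of the scenario (%)

# Gross drivers of the decline (ppts of CET1)
credit_losses = 6.6              # includes the large counterparty loss
rwa_growth = 1.4
dividends_and_deductions = 1.4
conduct_event = 0.7

underlying_profits = 6.7         # offsets the gross decline

gross_decline = credit_losses + rwa_growth + dividends_and_deductions + conduct_event
net_decline = gross_decline - underlying_profits   # 10.1 - 6.7 = 3.4 ppts
trough_cet1 = start_cet1 - net_decline             # 6.9% in year three
post_mitigation_cet1 = trough_cet1 + 1.1           # 8.0% after mitigating actions

print(f"gross decline {gross_decline:.1f} ppts, net decline {net_decline:.1f} ppts")
print(f"trough CET1 {trough_cet1:.1f}%, post-mitigation CET1 {post_mitigation_cet1:.1f}%")
```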

There is not a lot of detail on individual bank outcomes. In the combined scenario, individual bank CET1 ratios declined to between 6.4% and 7.4%, versus the 6.9% aggregate result. The individual buffer ratios fell to between 1.2% and 1.4% at their low points (no aggregate minimum buffer was reported).

Some observations on the outcomes of the RBNZ 2017 stress test

The fact that the major banks can maintain significant buffers above minimum capital requirements during quite severe stress scenarios offers a degree of comfort, especially when you factor in that this is before any mitigating responses. Minor quibble here, but it is worth noting that the aggregate data the RBNZ uses to discuss the stress testing results does not map neatly to the minimum requirements and capital buffers applied at the individual bank level: a 5.4 ppt buffer over the 4.5% CET1 minimum requirement equates to 9.9%, not 10.3%. Looking at Figure 1 in the “outcomes” paper also shows that there was a narrower range in the capital conservation buffer (CCB) at its low point than there was for the CET1 ratio, so part of the CCB decline observed in the stress test may be attributable to shortfalls at either the Tier 1 or Total Capital Ratio levels rather than CET1. Small point, but it does matter when interpreting what the results mean for the target capital structure and how to respond.

The RBNZ is clearly correct to question the reliability of mitigating actions and the potential for some actions, such as a tightening of lending standards, to generate negative feedback effects on asset prices and economic activity. However, it is equally open to question whether the market confidence that banks rely on to fund themselves and otherwise conduct business would remain resilient in the face of a three-year continuous decline in capital ratios. So I do not think we can take too much comfort from the pre-mitigation outcomes alone; the mitigating responses matter just as much.

I have always thought of the capital buffer as simply “buying time” for management to recognise the problem and craft a response that addresses the core problems in the business while creating positive momentum in capital formation. The critical question in stress testing is how much time the markets will grant before they start to hold back from dealing with your bank. Markets do not necessarily expect a magic bullet, but they do expect to see positive momentum and a coherent narrative. It would also be useful to distinguish between a core set of actions that could reasonably be relied on and other actions that are less reliable or come at a higher cost to the business.

It is hard to comment on the specific mitigating actions since the paper only reports an aggregate benefit of 1.1 ppts over the three years, but I can make the following general observations (a numerical sketch of the loan-book and RWA arithmetic follows the list):

  • Reductions in lending: The potential for reduced lending to generate negative feedback effects on asset prices and economic activity is a very valid concern but I do struggle to reconcile a 35% decline in house prices with a scenario in which the loans the banking system has outstanding to this sector do not appear to have declined.
    • I can’t see any specific numbers in the RBNZ paper but that is the inference I draw if the overall loan book has not declined, which seems to be implied by the statement that the 20% RWA growth over the first three years of the scenario was primarily due to higher risk weights.
    • Loan principal is progressively being repaid on performing loans, and this balance sheet shrinkage is amplified in the scenario by elevated defaults, while the rate of new lending, which would otherwise be the driver of growth in loans outstanding, must be slowing if house prices are falling by such a large amount. In addition, the reduced volume of new loans being written is, I assume, for lower amounts than was the case prior to the decline in house prices.
    • I am very happy to be set straight on this part of the modelling but the numbers don’t quite add up for me. If I am right then a loan book that is stable or even declining in value may be what is implied by the scenario rather than something that adds further to the stress on capital ratios. At the very least, winding back loan growth assumptions relative to the benign base case seems a reasonable response.
  • Repricing: I can’t tell from the RBNZ paper how significant this factor was in contributing to the 1.1 percentage point improvement in CET1 over the three years, but I am guessing it was reasonably material. Materiality therefore requires that the numbers be subject to a higher level of scrutiny.
    • History does offer a reasonable body of evidence that Australian and NZ banks have had the capacity to reprice loans under stress and in response to higher funding costs. The question is whether the collapse in trust in big banks has undermined the value of the repricing option they have traditionally benefited from.
    • I do believe that some of the critiques of bank repricing are not well founded, but that does not change the realpolitik of the likely public and government push back should banks attempt to do so.
    • So the answer here is probably yes; the benefits of this particular mitigating action are likely not as reliable as they have been in the past. At the very least, there is likely to be a higher cost to using them.
  • The contribution of RWA growth to the decline in the capital ratio noted in the RBNZ paper is also worth calling out. There is not a lot of detail in the paper but it does appear that the 20% increase in RWA over the first three years of the scenario was driven primarily by an increase in the average credit RW from 45% to 54%.
    • This seems to imply that there was a significant cycle driven increase in capital requirements over the course of the scenario that was not driven by an increase in loans outstanding.
    • I believe that this kind of capital-measurement-driven impact on capital ratios is fundamentally different from the impact of actual losses and higher new lending, but it is treated as equivalent for the purposes of the analysis. This looks to me like a category error: a decline in a capital ratio due to higher risk weights is not the same thing, for the purposes of solvency, as a loss due to a loan defaulting.
    • The solution probably lies in a better designed approach to counter cyclical buffers (see my post here and here for background) and the regulatory treatment of expected loss, but the stress testing analysis suffers by simply noting the outcome without going behind what that component of the decline in capital ratio actually represents.
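To make the arithmetic behind these observations concrete, here is a minimal sketch. The 45% and 54% average risk weights are the figures quoted in the RBNZ paper; every rate in the loan-book loop (repayments, write-offs, new lending) is a hypothetical assumption of mine, chosen only to illustrate the direction of the effect.

```python
# Check: if the ~20% RWA growth were driven purely by the rise in the average
# credit risk weight, the loan book would be roughly flat over the scenario.
rw_start, rw_end = 0.45, 0.54                  # average credit risk weights (RBNZ paper)
print(f"RWA growth with a flat book: {rw_end / rw_start - 1:.0%}")   # -> 20%

# A stylised view of why the book might actually shrink: repayments and
# write-offs outpace a slowing flow of new lending as house prices fall.
# All rates below are invented for illustration.
book = 100.0                                   # loans outstanding, indexed
for year in range(1, 4):
    repayments = 0.07 * book                   # assumed scheduled principal repayments
    write_offs = 0.02 * book                   # assumed elevated defaults
    new_lending = 0.06 * book                  # assumed to slow as house prices fall ~35%
    book += new_lending - repayments - write_offs
    print(f"year {year}: indexed loan book {book:.1f}")
```

Under these (purely illustrative) assumptions the book drifts down rather than growing, which is consistent with the inference that the RWA growth in the scenario is a risk weight effect rather than a lending growth effect.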

Deposit growth under a stress scenario

I also struggled with the statement in Section 5 of the RBNZ paper that “Banks expected strong growth in retail deposits, in line with their experience during the Global Financial Crisis”.

  • This statement seems to reflect the intuitive view that bank deposits increase under adverse conditions as people sell risky assets and put their money in banks. But we also know that selling a risky asset requires someone else to buy it, so the increase in cash in the account of the seller is offset by the decrease in the account of the buyer. There was an increase in bank deposits during the GFC, but the simple “sell risky assets and put your money in the bank” story does not seem to explain why it happened.
  • So what do we know about the GFC? Firstly, the big banks continued to grow their loan books, and we know that bank credit creation leads to deposit creation (the toy ledger after this list illustrates the mechanics). The GFC was also a scenario in which the collapse of securitisation markets saw lending for residential mortgages migrate back to big bank balance sheets; I think this also creates a net increase in deposits. Banks were also paying down foreign borrowings, which I think is also positive for deposit creation via the balance of payments, though this channel is murkier. We also observed money migrating from equities to property lending. The selling of the risky assets is net square for deposits by itself, but deposit creation comes as the cash in the hands of the seller gets leveraged up to support new credit creation via the increased property loans, which are typically geared much more highly than other types of risk assets. The shift from equity to property also seems to be driven by the typical monetary policy strategy of reducing interest rates.
  • So it is not clear to me that the pool of deposits grows under the conditions of the RBNZ scenario. We do have the likelihood that people are selling risky assets, but we seem to be missing a number of the elements specific to the GFC that saw new deposits created in the banking system. The only deposit formation positive I can see is maybe via the balance of payments but, as noted above, this channel is very murky and hard to understand.
  • The other interesting question is whether bank deposits continue to be a safe haven for New Zealanders in future crises given that the RBNZ has implemented an Open Bank Resolution regime that exposes bank deposits to the risk of being bailed in on a pari passu basis with other unsecured bank creditors. This is a unique feature of the NZ financial system, which even eschews the limited guarantees of bank deposits that many other systems see as essential to maintaining the confidence of depositors under stress.
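The deposit mechanics referred to above are easier to see in a deliberately crude system-wide ledger. The sketch below is entirely illustrative and is not a model of the NZ banking system.

```python
# Toy system-wide ledger: selling a risky asset only moves deposits between
# accounts, while new bank lending creates deposits.

deposits = {"seller": 0.0, "buyer": 100.0}

def system_deposits():
    return sum(deposits.values())

# 1) The seller sells shares to the buyer for 100: deposits just change hands.
deposits["seller"] += 100.0
deposits["buyer"] -= 100.0
assert system_deposits() == 100.0    # no net deposit creation

# 2) A bank writes a new mortgage of 80: it credits the borrower's account,
#    expanding both sides of its balance sheet, so system deposits grow.
deposits["borrower"] = 80.0
assert system_deposits() == 180.0    # credit creation -> deposit creation
```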

I may well be missing something here, so I am very happy to hear the other side of any of the observations I have offered above. I am a big believer in the value of stress testing, which is why I think it is so important to get it right.

Tony

Minsky’s Financial Instability Hypothesis – Applications in Stress Testing?

One of the issues that we keep coming back to in stress testing is whether 1) the financial system is inherently prone to instability and crisis, or 2) the system naturally tends towards equilibrium and instability is due to external shocks. Any stress scenario that we design, or that we are asked to model, will fall somewhere along this spectrum, though I suspect most scenarios tend to be based on exogenous shocks. This touches on a long-standing area of economic debate and hence is not something that we can expect to resolve any time soon. I think it is nevertheless useful to consider the question when conducting stress testing and evaluating the outcomes.

From roughly the early 1980s until the GFC in 2008, the dominant economic paradigm was arguably that market forces, coupled with monetary and fiscal policy built on a sound understanding of how the economy works, meant that the business cycle was dead and that the primary challenge of policy was to engineer efficient capital allocations that maximised growth. The GFC obviously highlighted shortcomings in the conventional economic approach and drew attention to an alternative approach developed by Hyman Minsky, which he labelled the Financial Instability Hypothesis.

Minsky’s Financial Instability Hypothesis (FIH)

Minsky focused on borrowing and lending with varying margins of safety as a fundamental property of all capitalist economies, and identified three forms:

  • “Hedge” financing under which cash flow covers the repayment of principal and interest
  • “Speculative” financing under which cash flow covers interest but the principal repayments must be continually refinanced
  • “Ponzi” financing under which cash flow is insufficient to cover either interest or principal and the borrower is betting that appreciation in the value of the asset being financed will be sufficient to repay loan principal plus capitalised interest and generate a profit

The terms that Minsky uses do not strictly conform to modern usage but his basic idea is clear: increasingly speculative lending tends to be associated with increasing fragility of borrowers and of the financial system as a whole. Ponzi financing is particularly problematic because the system is vulnerable to external shocks that can restrict access to finance, or which trigger an asset devaluation cycle as borrowers sell their assets in order to reduce their leverage. The downward pressure on asset prices associated with the deleveraging process then puts further pressure on the capacity to repay the loans, and so on.
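Minsky’s taxonomy is simple enough to express as a toy classifier. The sketch below uses the definitions above; the function name, inputs and example numbers are my own simplification.

```python
def minsky_regime(cash_flow: float, interest_due: float, principal_due: float) -> str:
    """Classify a borrower's position in Minsky's taxonomy (illustrative only)."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # cash flow covers both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # interest covered; principal must be refinanced
    return "ponzi"            # relies on asset appreciation to remain solvent

# Hypothetical borrowers, each owing 50 of interest and 50 of principal
assert minsky_regime(120.0, 50.0, 50.0) == "hedge"
assert minsky_regime(60.0, 50.0, 50.0) == "speculative"
assert minsky_regime(30.0, 50.0, 50.0) == "ponzi"
```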

The term “Minsky moment” has been used to describe the inflexion point where debt levels become unsustainable and asset prices fall as investors seek to deleverage. Investor psychology is obviously one of the primary drivers in this three-stage cycle (the progression from hedge to speculative to Ponzi financing): investor optimism translates into a willingness to borrow and to pay more for assets, and the higher asset valuations in turn allow lenders to lend more against set loan-to-valuation caps. Lenders can also be caught up in the mood of optimism and take on more risk (e.g. via higher loan-to-valuation ratio limits or more relaxed debt service coverage requirements). Minsky stated that “the fundamental assertion of the financial instability hypothesis is that the financial structure evolves from being robust to being fragile over a period in which the economy does well” (Financial Crises: Systemic or Idiosyncratic by Hyman Minsky, April 1991, p16).
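The self-reinforcing loop between valuations and lending capacity can be caricatured in a few lines. Everything in the sketch below (the cap, the credit demand rule and the price response) is a hypothetical parameter of mine; it illustrates only the direction of the feedback, not its magnitude.

```python
# Stylised price/credit spiral under a fixed loan-to-valuation cap: rising
# prices lift collateral values, lifting the maximum loan, which in turn
# supports further price rises. All parameters are invented for illustration.

ltv_cap = 0.80        # fixed loan-to-valuation cap
price = 100.0         # asset price index
for year in range(1, 6):
    max_loan = ltv_cap * price                         # lending capacity tracks valuations
    net_new_credit = max(0.5 * max_loan - 35.0, 0.0)   # hypothetical credit demand
    price *= 1 + net_new_credit / 400.0                # hypothetical price response
    print(f"year {year}: max loan {max_loan:.1f}, price {price:.2f}")
```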

It should also be noted that a Minsky moment does not require an external shock; a simple change in investor outlook or risk tolerance could be sufficient to trigger the reversal. Minsky observed that the tendency of the endogenous process he described to lead to systemic fragility and instability is constrained by institutions and interventions that he described as “thwarting systems” (“Market Processes and Thwarting Systems” by P. Ferri and H. Minsky, November 1991, p2). However, Minsky’s FIH also assumes that there is a longer term cycle in which these constraints are gradually wound back, allowing more and more risk to accumulate in the system over successive business cycles.

What Minsky describes is similar to the idea of a long term “financial cycle” (25 years plus) being distinct from the shorter duration “business cycle” (typically 7-10 years) – refer to this post “The financial cycle and macroeconomics: What have we learnt?” for more detail. An important feature of this longer term financial cycle is a process that gradually transforms the business institutions, decision-making conventions, and structures of market governance, including regulation, that contribute to the stability of capitalist economies.

The transformation process can be broken down into two components

  1. winding back of regulation and
  2. increased risk taking

which in combination increase both the supply of and demand for risk. The process of regulatory relaxation can take a number of forms:

  • One dimension is regulatory capture; whereby the institutions designed to regulate and reduce excessive risk-taking are captured and weakened
  • A second dimension is regulatory relapse; reduced regulation may be justified on the rationale that things have changed and regulation is no longer needed, but there is often an ideological foundation, typically based on economic theory (e.g. the “Great Moderation” or the argument that market discipline underpins self-regulation).
  • A third dimension is regulatory escape; whereby the supply of risk is increased through financial innovation that escapes the regulatory net because the new financial products and practices were not conceived of when existing regulation was written.

Borrowers also take on more risk for a variety of reasons:

  • First, financial innovation provides new products that allow borrowers to take on more debt or which embed higher leverage inside the same nominal value of debt.
  • Second, market participants are also subject to gradual memory loss that increases their willingness to take on risk.

The changing taste for risk is also evident in cultural developments which can help explain the propensity for investors to buy shares or property. A greater proportion of the population currently invest in shares than was the case for their parents or grandparents. These individual investors are actively engaged in share investing in a way that would be unimaginable for the generations that preceded them. Owning your own home and ideally an investment property as well is an important objective for many Australians but less important in say Germany.

These changes in risk appetite can also weaken market-discipline-based constraints against excessive risk-taking. A book titled “The Origin of Financial Crises” by George Cooper (April 2008) is worth reading if you are interested in the ideas outlined above. A collection of Minsky’s papers can also be found here if you are interested in exploring his thinking more deeply.

I have been doing a bit of research lately, both on the question of what exactly Expected Loss “expects” and on the ways in which cycle downturns are defined. I may be missing something, but I find this distinction between endogenous and exogenous factors largely missing from the discussion papers that I have found so far, and from stress testing itself. I would greatly appreciate some suggestions if anyone has come across any good material on the issue.

Tony