Banks Are Managing Their Stress – Bloomberg

The ever reliable Matt Levine discusses the latest stress test results for the US banks, in particular the disconnect between the severity of the assumptions in the hypothetical scenario and the actual results observed to date. He notes that it is still early and there is plenty of room for actual outcomes to catch up with the hypothetical. However, one of the issues with stress testing is how you model the way people (and governments) respond to stress.

As Matt puts it …

But another important answer is that, when a crisis actually happens, people do something about it. They react, and try to make it better. In the case of the coronavirus crisis, the Fed and the U.S. government tried to mitigate the effect of a real disaster on economic and financial conditions. Unemployment is really high, but some of the consequences are mitigated by stimulus payments and increased unemployment benefits. Asset prices fell sharply, but then rose sharply as the Fed backstopped markets. Financing markets seized up, and then the Fed fixed them.

The banks themselves also acted to make things better, at least for themselves. One thing that often happens in a financial crisis is that banks’ trading desks make a killing trading for clients in turbulent markets, which helps to make up for some of the money they lose on bad loans. And in fact many banks had blowout first quarters in their trading divisions: Clients wanted to trade and would pay a lot for liquidity, and banks took their money.

In a hypothetical stress test, you can’t really account for any of this. If you’re a bank, and the Fed asks you to model how you’d handle a huge financial crisis, you can’t really write down “I would simply make a ton of money trading derivatives.” It is too cute, too optimistic. But in reality, lots of banks just went and did that.

Similarly, you obviously can’t write down “I would simply rely on the Fed to backstop asset prices and liquidity.” That is super cheating. Much of the purpose of the stress tests is to make it so the Fed doesn’t have to bail out the banking system; the point is to demonstrate that the banks can survive a financial crisis on their own without government support. But in reality, having a functioning financial system is better than not having that, so the Fed did intervene; keeping people in their homes is better than foreclosing on them, so the government supported incomes. So the banks are doing much better than you might expect with 13.3% unemployment.

So it is likely that the Fed’s stress test is both not harsh enough, in its economic scenario, and too harsh, in its assumption about how that scenario will affect banks.

Notwithstanding the potential for people to respond to and mitigate stress, there is still plenty of room for reality to catch up with and exceed the hypothetical scenario. Back to Matt…

But the fact that the stress test imagines an economic crisis that is much nicer than reality is still a little embarrassing, and the Fed can’t really say “everything is fine even in the terrible downside case of 10% unemployment, the banks are doing great.” So it also produced some new stress-test results (well, not quite a full stress test but a “sensitivity analysis”) assuming various scenarios about the recovery from the Covid crisis (“a rapid V-shaped recovery,” “a slower, more U-shaped recovery,” and “a W-shaped double dip recession”). The banks are much less well capitalized in those scenarios than they are either (1) now or (2) in the original stress tests, though mostly still okay, and the Fed is asking the banks to reconsider stress and capital based on current reality. Also stop share buybacks.

Worth reading

Tony – From the Outside

Navigating a radically uncertain world

The distinction between risk and uncertainty is a long running area of interest for me so I have enjoyed reading John Kay and Mervyn King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. My initial post on the book offered an overview of the content and a subsequent post explored Kay and King’s analysis of why the world is prone to radical uncertainty.

This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.

No surprises that the answer from their perspective is not the pursuit of maximum expected value based on a priori assumptions about what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that some problems can be solved this way. Games of chance, where you get repeated opportunities to play the odds, are one example. But Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.

It is not just that these models do not offer any useful insight into these bigger world choices. They argue, convincingly I think, that these types of precise quantitative models can also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced measures of bank capital requirements introduced under Basel II tended to encourage banks to take (and bank supervisors to approve) more leverage.

Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see, for example, these posts on Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane, which discuss the contributions these authors have made to the debate.

However, there were two things I found especially interesting in their analysis.

  • One was the argument that the “biases” catalogued by behavioural finance were not necessarily irrational when applied to a radically uncertain world.
  • The other was the emphasis they place on the idea of employing abductive reasoning and reference narratives to help navigate this radically uncertain future.

Behavioural Finance

Kay and King argue that some of the behaviours that behavioural finance deems to be irrational or biased might be better interpreted as sensible rules of thumb that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify “nudging” people towards what behavioural finance deems to be rational.

Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…

…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.

I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times like behavioural finance as a whole was a wrong turn but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.

Abductive Reasoning and Reference Narratives

The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.

If we go back to first principles it could be argued that the test of rationality is that the decisions we make are based on reasonable beliefs about the world and are internally consistent. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.

The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.

Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.

Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

Reference Narratives

If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.

Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.

Good and Bad Reference Narratives

Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.

Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.

Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:

…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.

Using Reference Narratives

Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience, but they caution against trying too hard to assign probabilities to scenarios. This I think is a point well made and something that I have covered in other posts (see here and here).

Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..

Scenario planning is a way of ordering thoughts about the future, not of predicting it.

The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.

Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future:

We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.

Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.

‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’

Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation), quoted in the introduction to Ch 16: Challenging Narratives

These extracts from their book nicely capture the essence of their argument:

Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.

the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.

Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.


The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.

Summing Up

Overall a thought provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.

I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.

Tony – From The Outside

Worth reading – “Radical Uncertainty: Decision-Making for an Unknowable Future” by John Kay and Mervyn King

I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.

Kay and King cover a lot of ground but, simply put, their book is about

“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.” 

One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.

As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.

The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.

Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.

Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.

They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty

But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.

There are many factors that explain the current state of affairs but a key inflexion point in Kay and King’s account can be found in what they label “A Forgotten Dispute” (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, regarding the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that “subjective probability” is equally valid and used the mathematics developed for the analysis of frequency-based probabilities to apply these subjective probabilities.

“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not….. over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”

Keynes and Knight lost the debate

Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions. 

Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.

There is a lot more to the book than interesting anecdotes on the history of economic ideas. The subject matter is rich and crosses over topics covered previously in this blog.

There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.

One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their concern to push back on the idea that Bayes solves the problem of using probability to understand uncertainty. At times however it reads like Bayes has no value at all. Read “The Theory that Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.

Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.
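
To make that puzzle/mystery distinction concrete, the kind of well-defined puzzle where Bayes’ rule earns its keep is one where the prior and the likelihoods can actually be written down. A minimal sketch, with purely illustrative numbers:

```python
# Minimal Bayesian update for a well-defined "puzzle": revise belief in a
# hypothesis H after observing evidence E. This only works because the prior
# and both likelihoods can be written down - exactly the knowledge that
# radical uncertainty denies us in "mysteries".
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Prior belief of 30% in H; the evidence is four times as likely under H
# as under not-H:
posterior = bayes_update(0.30, 0.80, 0.20)
print(round(posterior, 3))  # 0.632
```

In a repeated game of chance these inputs are knowable; in a one-of-a-kind situation they are not, which is the nub of Kay and King’s objection.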

I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.

Tony (From the Outside)

Possible pitfalls of a 1-in-X approach to financial stability – Bank Underground

Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed are those of the authors, and are not necessarily those of the Bank of England, or its policy committees. Posting on this blog, Adam Brinley Codd and Andrew Gimber argue that false confidence in people’s ability to calculate probabilities of rare events might end up worsening the crises regulators are trying to prevent.

The post concludes with their personal observations about how best to deal with this meta-uncertainty.

Policymakers could avoid talking about probabilities altogether. Instead of a 1-in-X event, the Bank of England’s Annual Cyclical Scenario is described as a “coherent ‘tail risk’ scenario”.

Policymakers could avoid some of the cognitive biases that afflict people’s thinking about low-probability events, by rephrasing low-probability events in terms of less extreme numbers. A “100-year” flood has a 1% chance of happening in any given year, but anyone who lives into their 70s is more likely than not to see one in their lifetime.

Policymakers could be vocal about the fact that there are worse outcomes beyond the 1-in-X point of the distribution.
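
The lifetime arithmetic behind the rephrasing suggestion is easy to verify: assuming independent years, the chance of seeing at least one 1-in-100-year event over n years is 1 - 0.99^n. A quick sketch:

```python
# Chance of seeing at least one "1-in-100-year" flood over a lifetime,
# assuming each year is independent with a 1% annual probability.
def prob_at_least_one(annual_p: float, years: int) -> float:
    return 1 - (1 - annual_p) ** years

print(round(prob_at_least_one(0.01, 70), 3))  # 0.505 - more likely than not
```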

— Read on bankunderground.co.uk/2020/02/06/possible-pitfalls-of-a-1-in-x-approach-to-financial-stability/

A BCBS review of the costs and benefits of higher bank capital requirements

The economic rationale for the higher bank capital requirements implemented under Basel III is built to a large extent on an analytical model developed by the BCBS and published in a study released in 2010. The BCBS has just (June 2019) released a paper by one of its working groups which reviews the original analysis in the light of subsequent studies of the optimal capital question. The 2019 Review concludes that the higher capital requirements recommended by the original study have been supported by these subsequent studies and that, if anything, the optimal level of capital may be higher than that identified in the original analysis.

Consistent with the Basel Committee’s original assessment, this paper finds that the net macroeconomic benefits of capital requirements are positive over a wide range of capital levels. Under certain assumptions, the literature finds that the net benefits of higher capital requirements may have been understated in the original Committee assessment. Put differently, the range of estimates for the theoretically-optimal level of capital requirements … is likely either similar or higher than was originally estimated by the Basel Committee.

The costs and benefits of bank capital – a review of the literature; BCBS Working Paper (June 2019)

For anyone who is interested in really understanding this question as opposed to simply looking for evidence to support a preconceived bias or vested interest, it is worth digging a bit deeper into what the paper says. A good place to start is Table 1 from the 2019 Review (copied below) which compares the assumptions, estimates and conclusions of these studies:

Pay attention to the fine print

All of these studies share a common analytical model which measures Net benefits as a function of:

Net benefit = (reduced crisis probability × crisis cost) − output drag (from higher loan spreads)

So the extent of any net benefit depends on the extent to which:

  • More capital actually reduces the probability of a crisis and/or its economic impact,
  • The economic impact of a financial crisis is a permanent or temporary adjustment to the long term growth trajectory of the economy – a permanent effect supports the case for higher capital, and
  • The cost of bank debt declines in response to higher capital – in technical terms the extent of the Modigliani Miller (MM) offset, with a larger offset supporting the case for higher capital.
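
The shared model can be sketched in a few lines. The 1.6 percentage point reduction in crisis probability is the figure from the original 2010 study; the crisis cost and output drag inputs below are purely illustrative assumptions, not figures taken from any of the studies:

```python
# Stylised version of the net-benefit model shared by these studies.
# net benefit = (reduction in crisis probability x crisis cost) - output drag
# All figures are expressed as % of GDP; inputs are illustrative assumptions.
def net_benefit(delta_crisis_prob: float,
                crisis_cost: float,
                output_drag: float) -> float:
    return delta_crisis_prob * crisis_cost - output_drag

# A 1.6pp reduction in annual crisis probability, an assumed crisis cost of
# 60% of GDP, and an assumed 0.3% of GDP annual drag from wider loan spreads:
print(round(net_benefit(0.016, 60.0, 0.3), 2))  # 0.66 (% of GDP per year)
```

The sketch makes the sensitivities obvious: double the assumed crisis cost (e.g. by treating the output loss as permanent) and the net benefit roughly doubles, which is why the permanence question matters so much below.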

The authors of the 2019 Review also acknowledge that interpretation of the results of the studies is complicated by the fact that different studies use different measures of capital adequacy. Some of the studies provide optimal capital estimates in risk weighted ratios, others in leverage ratios. The authors of the 2019 Review have attempted to convert the leverage ratios to a risk weighted equivalent but that process will inevitably be an imperfect science. The definition of capital also differs (TCE, Tier 1 & CET1).

The authors acknowledge that full standardisation of capital ratios is very complex and lies beyond the scope of their review and nominate this as an area where further research would be beneficial. In the interim (and at the risk of stating the obvious) the results and conclusions of this 2019 Review and the individual studies it references should be used with care. The studies dating from 2017, for example, seem to support a higher value for the optimal capital range compared to the 2010 benchmark. The problem is that it is not clear how these higher nominal ratio results should be interpreted in the light of increases in capital deductions and average risk weights such as we have seen play out in Australia.
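
The conversion problem can be made concrete with the identity linking the two measures through average risk-weight density (RWA divided by total exposure). The densities below are hypothetical, which is precisely why the results need to be used with care:

```python
# Illustrative conversion of a leverage ratio to a risk-weighted equivalent.
# leverage ratio = capital / exposure; risk-weighted ratio = capital / RWA;
# so with density = RWA / exposure: rw_ratio = leverage_ratio / density.
def rw_equivalent(leverage_ratio: float, rwa_density: float) -> float:
    return leverage_ratio / rwa_density

# The same 5% leverage ratio maps to quite different risk-weighted ratios
# depending on the (hypothetical) average density assumed:
print(round(rw_equivalent(0.05, 0.40), 4))  # 0.125, i.e. 12.5%
print(round(rw_equivalent(0.05, 0.50), 4))  # 0.1, i.e. 10.0%
```

A 10 percentage point shift in assumed density moves the implied risk-weighted ratio by 2.5 percentage points here, which is of the same order as the differences between some of the studies being compared.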

The remainder of this post will attempt to dig a bit deeper into some of the components of the net benefit model employed in these types of studies.

Stability benefits – reduced probability of a crisis

The original 2010 BCBS study concluded that increasing Tangible Common Equity from 7% to 10% would reduce the probability of a financial crisis by 1.6 percentage points.

The general principle is that a financial crisis is a special class of economic downturn in which the severity and duration are exacerbated by a collapse in confidence in the banking system, due to widespread doubts about the solvency of one or more banks, which results in a contraction in the supply of credit.

It follows that higher capital reduces the odds that any given level of loss can threaten the actual or perceived solvency of the banking system. So far so good, but I think it is helpful at this point to distinguish the core losses that flow from the underlying problem (e.g. poor credit origination or risk management) versus the added losses that arise when credit supply freezes in response to concerns about the solvency or liquidity of the banking system.

Higher capital (and liquidity) requirements can help to mitigate the risk of those second round losses but they do not in any way reduce the economic costs of the initial poor lending or risk management. The studies, however, seem to use the total losses experienced in historical financial crises to calculate the net benefit, rather than the specific output losses that can be attributed to credit shortages and any related drop in employment or in the confidence of business and consumers. That poses the risk that the studies may be overestimating the potential benefits of higher capital.

This is not to say that higher capital requirements are a waste of time, but the modelling of optimal capital requirements must still recognise the limitations of what capital can and cannot change. There is, for example, evidence that macroprudential policy tools may be more effective for managing the risk of systemic failures of credit risk management than relying on the market discipline of equity investors being required to commit more “skin in the game”.

Cost of a banking crisis

The 2019 Review notes that

“recent refinements associated with identifying crises is promising. Such refinements have the potential to affect estimates of the short- and long-run costs of crises as well as our understanding of how pre-crisis financial conditions affect these costs. Moreover, the identification of crises is important for estimating the relationship between banking system capitalisation and the probability of a crisis, which is likely to depend on real drivers (eg changes in employment) as well as financial drivers (eg bank capital).”

We considered above the possibility that there may be fundamental limitations on the extent to which capital alone can impact the probability, severity and duration of a financial crisis. The 2019 Review also acknowledges that there is an ongoing debate, far from settled, regarding the extent to which a financial crisis has a permanent or temporary effect on the long run growth trajectory of an economy. This seemingly technical point has a very significant impact on the point at which these studies conclude that the costs of higher capital outweigh the benefits.

The high range estimates of the optimal capital requirement in these studies typically assume that the impacts are permanent. This is a big topic in itself but Michael Redell’s blog did a post that goes into this question in some detail and is worth reading.

Banking funding costs – the MM offset

The original BCBS study assumed zero offset (i.e. no decline in lending rates in response to deleveraging). This assumption increases the modelled impact of higher capital and, all other things equal, reduces the optimal capital level. The later studies noted in the BCBS 2019 Review have tended to assume higher levels of MM offset and the 2019 Review concludes that the “… assumption of a zero offset likely overstated the costs of higher capital on bank loan rates”. For the time being the 2019 Review proposes that “a fair reading of the literature would suggest the middle of the 0 and 100% extremes” and calls for more research to “… help ground the Modigliani-Miller offset used in estimating optimal bank capital ratios”.

Employing a higher MM offset supports a higher optimal capital ratio but I am not convinced that even the 50% “split the difference” compromise is the right call. I am not disputing the general principle that risk and leverage are related. My concern is that the application of this general principle does not recognise the way in which some distinguishing features of bank balance sheets impact bank financing costs and the risk reward equations faced by different groups of bank stakeholders. I have done a few posts previously (here and here) that explore this question in more depth.
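The mechanics of the offset debate can be made concrete with a stylised funding cost calculation. The sketch below is illustrative only — the equity shares, return on equity and debt cost are assumed numbers, not figures drawn from the studies — but it shows how decisive the assumed offset fraction is for the modelled cost of higher capital:

```python
def wacc_after_recap(e0, e1, re0, rd, mm_offset):
    """Overall funding cost after the equity share rises from e0 to e1.

    Under a full Modigliani-Miller offset the required return on equity
    falls just enough to leave the weighted average cost unchanged;
    mm_offset (0 to 1) scales how much of that decline is assumed.
    Debt cost rd is held constant for simplicity (no tax effects).
    """
    re_full = rd + (e0 / e1) * (re0 - rd)    # return on equity that keeps the cost flat
    re1 = re0 - mm_offset * (re0 - re_full)  # partial offset interpolates between extremes
    return e1 * re1 + (1 - e1) * rd

base = 0.05 * 0.15 + 0.95 * 0.04  # 5% equity costing 15%, debt costing 4%
for offset in (0.0, 0.5, 1.0):
    w = wacc_after_recap(0.05, 0.10, 0.15, 0.04, offset)
    print(f"MM offset {offset:.0%}: funding cost {w:.3%} vs base {base:.3%}")
```

With a zero offset, doubling the equity share raises the modelled funding cost materially; with a 100% offset the cost is unchanged, which is why this single assumption moves the estimated optimal capital ratio so much.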

Bottom line – the BCBS itself is well aware of most of the issues with optimal capital studies discussed in this post – so be wary of anyone making definitive statements about what these studies tell us.

The above conclusion is however subject to a number of important considerations. First, estimates of optimal capital are sensitive to a number of assumptions and design choices. For example, the literature differs in judgments made about the permanence of crisis effects as well as assumptions about the efficacy of post crisis reforms – such as liquidity regulations and bank resolution regimes – in reducing the probability and costs of future banking crises. In some cases, these judgements can offset the upward tendency in the range of optimal capital.

Second, differences in (net) benefit estimates can reflect different conditioning assumptions such as starting levels of capital or default thresholds (the capital ratio at which firms are assumed to fail) when estimating the impact of capital in reducing crisis probabilities.

Finally, the estimates are based on capital ratios that are measured in different units. For example, some studies provide optimal capital estimates in risk-weighted ratios, others in leverage ratios. And, across the risk-weighted ratio estimates, the definition of capital and risk-weighted assets (RWAs) can also differ (eg tangible common equity (TCE) or Tier 1 or common equity tier 1 (CET1) capital; Basel II RWAs vs Basel III measures of RWAs). A full standardisation of the different estimates across studies to allow for all of these considerations is not possible on the basis of the information available and lies beyond the scope of this paper.

This paper also suggests a set of issues which warrant further monitoring and research. This includes the link between capital and the cost and probability of crises, accounting for the effects of liquidity regulations, resolution regimes and counter-cyclical capital buffers, and the impact of regulation on loan quantities.

The costs and benefits of bank capital – a review of the literature; BCBS Working Paper (June 2019)

Summing up

I would recommend this 2019 Literature Review to anyone interested in the question of how to determine the optimal capital requirements for banks. The topic is complex and important and also one where I am acutely aware that I may be missing something. I repeat the warning above about anyone (including me) making definitive statements based on these types of studies.

That said, the Review does appear to offer support for the steps the BCBS has taken thus far to increase capital and liquidity requirements. There are also elements of the paper that might be used to support the argument that bank capital requirements should be higher again. This is the area where I think the fine print offers a more nuanced perspective.

Tony

How much capital is enough? – The NZ perspective

The RBNZ has delivered the 4th instalment in a Capital Review process that was initiated in March 2017 and still has a way to run. The latest consultation paper addresses the question “How much capital is enough?”. The banking industry has until 29 March 2019 to respond with their views but the RBNZ’s proposed answer is:

  • A Tier 1 capital requirement of 16% of RWA for systemically important banks and 15% of RWA for all other banks
  • The Tier 1 minimum requirement to remain unchanged at 6% (with AT1 capital continuing to be eligible to contribute a maximum of 1.5 percentage points)
  • The proposed increased capital requirement to be implemented via an overall prudential capital buffer of 9-10% of RWA comprised entirely of CET1 capital;
    • Capital Conservation Buffer 7.5% (currently 2.5%)
    • D-SIB Buffer 1.0% (no change)
    • Counter-cyclical buffer 1.5% (currently 0%)

The increase in the capital ratio requirement is proposed to be supplemented with a series of initiatives that will increase the RWA of IRB banks:

  • The RBNZ proposes to 1) remove the option to apply IRB RW to sovereign and bank exposures, 2) increase the IRB scalar (from 1.06 to 1.20) and 3) introduce an output floor set at 85% of the Standardised RWA on an aggregate portfolio basis
  • As at March 2018, RWAs produced by the IRB approach averaged 76% of the Standardised Approach and the RBNZ estimates that the overall impact will be to increase aggregate RWA to 90% of the outcome generated by the Standardised approach (i.e. the IRB changes, not the output floor, drive the increase in RWA)
  • Aggregate RWA across the four IRB banks therefore increases by approximately 16%, or $39bn, compared to March 2018 but the exact impact will depend on how IRB banks respond to the higher capital requirements
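The RWA arithmetic above can be checked with a back-of-envelope calculation. The sketch below indexes Standardised RWA to 100 and uses the figures quoted in the post; it also illustrates the parenthetical point that the scalar change, not the output floor, does most of the work:

```python
def floored_irb_rwa(irb_rwa, std_rwa, scalar_old=1.06, scalar_new=1.20, floor=0.85):
    """Rescale IRB RWA for the proposed higher scalar, then apply the
    output floor of 85% of Standardised RWA on an aggregate basis."""
    rescaled = irb_rwa * scalar_new / scalar_old
    return max(rescaled, floor * std_rwa)

# IRB RWA averaged 76% of Standardised RWA as at March 2018.
print(round(floored_irb_rwa(76.0, 100.0), 1))  # 86.0 — already above the 85 floor
```

The scalar alone lifts the ratio from 76% to roughly 86% of Standardised, so the floor does not bind; the removal of the IRB option for sovereign and bank exposures accounts for the rest of the estimated move to 90%.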

The RBNZ has also posed the question whether a Tier 2 capital requirement continues to be relevant given the substantial increase in Tier 1 capital.

Some preliminary thoughts …

There is a lot to unpack in this paper so this post will only scratch the surface of the issues it raises …

  • The overall number that the RBNZ proposes (16%) is not surprising. It looks to be at the lower end of what other prudential regulators are proposing in nominal terms
  • But it is in the same ballpark once you allow for the substantial increase in IRB RWA and the fact that it is pretty much entirely CET1 capital
  • What is really interesting is the fundamentally different approach that the RBNZ has adopted to Tier 2 capital and bail-in versus what APRA (and arguably the rest of the world) has adopted
    • The RBNZ proposal that the increased capital requirement take the form of CET1 capital reflects its belief that “contingent convertible instruments” should be excluded from what counts as capital
    • Exactly why the RBNZ has adopted this position is a complex post in itself (their paper on the topic can be found here) but the short version (as I understand it) is that they think bail-in capital instruments triggered by non-viability are too complex and probably won’t work anyway.
    • Their suggestion that Tier 2 probably does not have a role in the capital structure they have proposed is logical if you accept their premise that Point of Non-Viability (PONV) triggers and bail-in do not work.
  • The RBNZ highlight a significantly enhanced role for prudential capital buffers
    • I am generally in favour of bigger, more dynamic capital buffers rather than higher fixed minimum requirements and I have argued previously in favour of the base rate for the counter-cyclical buffer being a positive value (the RBNZ propose 1.5%)
    • But the overall size of the total CET1 capital buffer requirement requires some more considered thought about 1) the role of bail-in  structures and PONV triggers in the capital regulation toolkit (as noted above) and 2) whether the impacts of the higher common equity requirement will be as benign as the RBNZ analysis suggests
  • I am also not sure that the indicative capital conservation responses they have outlined (i.e. discretionary distributions limited to 60% of net earnings in the first 250bp of the buffer, falling to 30% in the next 250bp and no distributions thereafter) make sense in practice.
    • This is because I doubt there will be any net earnings to distribute if losses are sufficient to reduce CET1 capital by 250bp so the increasing capital conservation requirement is irrelevant.
  • Last, but possibly most importantly, we need to consider the impact on the Australian parents of the NZ D-SIB banks and how APRA responds. The increase in CET1 capital proposed for the NZ subsidiaries implies that, for any given amount of CET1 capital held by the Level 2 Banking Group, the increased strength of the NZ subsidiaries will be achieved at the expense of the Australian banking entities
    • Note however that the impact of the higher capital requirement in NZ will tend to be masked by the technicalities of how bank capital ratios are calculated.
      • It probably won’t impact the Level 2 capital ratios at all since these are a consolidated view of the combined banking group operations of the Group as a whole
      • The Level 1 capital ratios for the Australian banks also treat investments in bank subsidiaries relatively generously (capital invested in unlisted subsidiaries is treated as a 400% risk weighted asset rather than a capital deduction).
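The indicative capital conservation schedule discussed above (distributions limited to 60% of net earnings in the first 250bp of buffer usage, 30% in the next 250bp, nothing thereafter) can be expressed as a simple step function. This is a sketch of my reading of the proposal, not the RBNZ’s specification:

```python
def max_distribution_ratio(buffer_used_bp):
    """Maximum share of net earnings that may be distributed, given how
    many basis points of the prudential buffer have been consumed."""
    if buffer_used_bp <= 250:
        return 0.60
    if buffer_used_bp <= 500:
        return 0.30
    return 0.0

print(max_distribution_ratio(100))  # 0.6
print(max_distribution_ratio(600))  # 0.0
```

The objection raised above is that the lower steps are likely academic: losses deep enough to consume 250bp of CET1 will usually mean there are no net earnings left to ration in the first place.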

Conclusion

Overall, I believe that the RBNZ is well within its rights to expect the banks it supervises to maintain a total level of loss absorbing capital of 16% or more. The enhanced role for capital buffers is also a welcome move.

The issue is whether relying almost entirely on CET1 capital is the right way to achieve this objective. This is however an issue that has been debated for many decades with no clear resolution. It will take some time to fully unpack the RBNZ argument and figure out how best to articulate why I disagree. In the interim, any feedback on the issues I have outlined above would be most welcome.

Tony

Will Expected Loss loan provisioning reduce pro cyclicality?

I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading; Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interests die-hard bank capital and credit tragics but I believe it has the potential to create some problems in the real world some way down the track.

Borio’s position is that:

  • Relative to the “incurred loss” approach to credit risk that precedes it, the new standard is likely to mitigate pro cyclicality to some extent;
  • But it will not be sufficient on its own to eliminate the risk of adverse pro cyclical impacts on the real economy;
  • So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of pro cyclicality) that will work in conjunction with, and complement, the operation of the loan loss provisions in managing credit risk.

There are two ways to respond to Claudio Borio’s observations on this topic:

  1. One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate pro cyclicality;
  2. The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan provisions.

Will ECL reduce the risk of pro cyclicality?

It is true that, relative to the incurred loss model, the ECL approach allows loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it also gives them more freedom to increase provisions without the constraint of this being seen as a cynical device to “smooth” profits.

The problem I see in this assessment is that the real problems with the adequacy of loan provisioning occur when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.

This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of loans originated by the banks (or other risk management failures) because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from expected loss to the domain of uncertainty.
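To see why even ECL provisions can fall short, consider the textbook expected loss formula, EL = PD × LGD × EAD. The parameters below are purely illustrative; the point is that a provision built on benign-cycle estimates still leaves a large shortfall when the downturn turns out worse than anything “expected”:

```python
def expected_credit_loss(pd, lgd, ead):
    """Textbook expected loss: probability of default x loss given
    default x exposure at default."""
    return pd * lgd * ead

portfolio = 100_000_000
provision = expected_credit_loss(0.02, 0.40, portfolio)  # benign-cycle parameter estimates
realised = expected_credit_loss(0.10, 0.55, portfolio)   # severe, unanticipated downturn
print(f"provision {provision:,.0f}, realised {realised:,.0f}, "
      f"shortfall {realised - provision:,.0f}")
```

The bank holding the ECL provision is better placed than one on the incurred loss model, but the gap between the two lines is exactly the territory where expected loss gives way to uncertainty, and where capital rather than provisions has to do the work.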

The solution to pro cyclicality lies in capital not expected loss

So the real issue is what to do about that. Borio argues that ECL helps but that you really need to address the problem via what he refers to as “capital filters” (what we might label counter-cyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:

  • let accounting standards do what they do, don’t try to make them solve prudential problems;
  • construct a capital adequacy solution that complements the accounting based measurement of capital and profits.

Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.

In the interim, the main takeaway for me is that ECL is not enough on its own to address the problem of pro cyclicality and, more importantly, that it is dangerous to think it can.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad; [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • use the data to test if the activity conforms to your original judgement of how skill and luck combine to generate the outcomes
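Mauboussin’s weighting rule is essentially a shrinkage estimate: blend the case-specific evidence with the base rate, with the weight determined by where the activity sits on the continuum. A minimal sketch (the weight here is an assumed input, not something the book specifies numerically):

```python
def blended_estimate(specific, base_rate, skill_weight):
    """Blend case-specific evidence with the base rate.

    skill_weight (0 to 1): closer to 1 when skill dominates and
    predictions are reliable; closer to 0 when luck dominates.
    """
    return skill_weight * specific + (1 - skill_weight) * base_rate

# A fund returns 20% against a 7% base rate. If investing sits near
# the luck end of the continuum (weight of, say, 0.2), the blended
# forecast stays close to the base rate.
print(round(blended_estimate(0.20, 0.07, 0.2), 3))  # 0.096
```

The rule formalises the intuition in the bullets above: the less reliable the prediction, the more the estimate should be pulled back toward the base rate.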

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. First ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both activities but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean: Slow reversion is consistent with activities dominated by skill, while rapid reversion comes from luck being the more dominant influence. Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Is there evidence that expert prediction is useful? When experts have wide disagreement and predict poorly, that is evidence that luck is a prime factor shaping outcomes.

One of the challenges with this process is to figure out how large a sample size you need to determine if there is a reliable relationship between actions and outcome that evidences skill.  Another problem is that a reliable base rate may not always be available. That may be because the data has just not been collected but also because a reliable base rate simply may not even exist.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”

Mauboussin’s discussion of power laws does not offer this specific example but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss which is a fundamental component of the advanced Internal Rating Based approach to bank capital adequacy measurement.
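The instability of the “average” under a power law is easy to demonstrate by simulation. The sketch below (tail exponent chosen purely for illustration) draws repeated samples from a heavy-tailed Pareto distribution; successive sample means of the same size vary wildly, which is exactly the problem with averaging loan losses across a full cycle:

```python
import random

random.seed(42)

def pareto_sample(alpha, n):
    """Draw n values from a Pareto distribution with tail exponent alpha.
    The closer alpha is to 1, the less meaningful the sample mean becomes."""
    return [random.paretovariate(alpha) for _ in range(n)]

# With alpha = 1.1 the theoretical mean barely exists: each sample of
# 1,000 draws can produce a very different average.
for _ in range(3):
    draws = pareto_sample(1.1, 1000)
    print(round(sum(draws) / len(draws), 1))
```

Contrast this with a bell-shaped distribution, where samples of that size would agree closely; this is the distinction Mauboussin draws between independent and socially dependent outcomes.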

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there some evidence that proves that my base belief is wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, what exactly the mean is. Note that extreme events are unlikely to be repeated and most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation
  9. Develop useful statistics (i.e. stats that are persistent and predictive)
  10. Know your limitations; we can do better at untangling skill and luck but also must recognise how much we don’t know. We must recognise that the realm may change such that old rules don’t apply and there are places where statistics don’t apply

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Stress Testing – Do (really) bad things happen to good banks?

This post will focus on stress testing, prompted by some recent papers the RBNZ released (July 2018) describing both its approach to stress testing and the outcomes from the 2017 stress test of the major banks, and by a speech in which Wayne Byres (APRA) offered some detail on the Australian side of the joint stress testing undertaken by APRA and the RBNZ. I intend to make some observations related to this specific stress testing exercise but also some broader points about the way stress testing is currently conducted. The overriding point is that the cyclical scenarios employed to calibrate capital buffers seem to focus on “what” happened, with less consideration given to “why” the historical episodes of financial stress the scenarios mimic were so severe.

There will be technical detail in this post but the question, simply put, is to what extent do really bad things happen to good banking systems? Paraphrased in more technical language, are we calibrating for scenarios based on the impact of some random exogenous shock on a sound banking system, or does the scenario implicitly assume some systemic endogenous factors at play that made the financial system less resilient in the lead up to the shock? Endogenous factors may be embedded in the balance sheets of the banks (e.g. poor credit quality amplified by excessive credit growth) or perhaps they are a feature of the economic system (e.g. a fixed exchange rate regime such as confronted many European economies during the GFC) that may or may not be universally relevant. I am focusing on the RBNZ stress test to explore these points mostly because they offered the most detail but I believe their approach is very similar to APRA’s and the observations apply generally to macro economic stress testing.

No prizes for guessing that I will be arguing that the kinds of really severe downturns typically used to calibrate capital buffers are usually associated with conditions where endogenous forces within the banking system are a key element in explaining the extent of the asset price declines and weak recoveries and that the severity of some historical scenarios was arguably exacerbated by unhelpful exchange rate, monetary or fiscal policy settings. This is not to say that we should not be using very severe downturns to calibrate the resilience of capital buffers. My argument is simply that recognising this factor will help make more sense of how to reconcile the supervisory approach with internal stress testing and how best to respond to the consequences of such scenarios.

The RBNZ approach to stress testing

The RBNZ characterises its approach as being at the less intensive end of the spectrum of supervisory practice so “stress tests are used to provide insights into the adequacy of bank capital buffers and can highlight vulnerabilities at the bank wide level or in its various loan portfolios” but “… the use of individual bank results in setting capital buffers and promoting market discipline is relatively limited”. The RBNZ stress tests fall into three categories: 1) cyclical scenarios, 2) exploratory stress tests and 3) targeted tests.

This post will focus on the cyclical scenario which was the focus of the RBNZ’s 2017 stress test and the place where the question of what happened and why it happened is most at risk of getting lost amongst the desire to make the test tough, coupled with the often daunting task of just running the test and getting some results.

The RBNZ states that the aim of a cyclical scenario is to help “… understand the resilience of participating banks to a macroeconomic downturn” so these scenarios “… mimic some of the worst downturns in advanced economies since World War 2, and typically feature sharp declines in economic activity and property prices, and stressed funding markets”. The repetition of the benchmark cyclical downturn scenario over time also allows the RBNZ “to track the resilience of the financial system over time (although the scenario will …”

It is hard to argue with calibrating the resilience of the banking system to a very high standard of safety. That said, the concern I have with cyclical scenarios drawn from worst case historical events is that the approach tends to skip over the question of why the downturn of such severity occurred.

The RBNZ commentary does recognise the “… need to take account of the nature of the specific stress scenario” and for the cyclical scenario to “evolve based on new research and insights, such as the extent of over-valuation evident in property markets” and the possibility that “domestic monetary policy and a falling exchange rate would provide a significant buffer … that was unavailable during many of these stress episodes in countries without floating exchange rates”. “Exploratory” and “Targeted” stress testing may also be focussed on the endogenous risks embedded in the banking system without explicitly using that terminology.

So if the RBNZ, and APRA, are implicitly aware of the endogenous/exogenous risk distinction, then maybe I am just being pedantic, but I would argue that greater clarity on this aspect of stress testing helps in a number of areas:

  • It can help to explain why there is often a gap between:
    • The severity of outcomes modelled internally (where the bank will probably assume their portfolios have robust credit quality and none of the systemic weaknesses that were responsible for past episodes of severe financial weakness implicit in the downturn scenario adopted by the supervisors), and
    • The severity the regulator expects (possibly based on a sceptical view of the extent to which bank management has balanced risk and return against the reward of higher growth and market share).
  • The types of recovery actions that can be deployed and the amounts of capital they contribute to the rebuilding process are also very much shaped by the nature of the scenario (scenarios shaped by endogenous factors embedded in the banks’ balance sheets or business models require much more substantial responses that are more costly though the cost can be a secondary issue when the scale of the challenge is so large).
  • Supervisors rightly focus on the need for banks to maintain the supply of credit to the economy but endogenous scenarios may actually require that some customers de-gear themselves and become less reliant on bank credit.

The RBNZ discussion of the 2017 stress test of the major banks focussed on the Phase 2 results and noted that:

  • The four participating banks started the stress test with an aggregate CET1 ratio of 10.3% and an aggregate buffer ratio of 5.4%
  • The impact of the combined macroeconomic downturn and the operational risk event saw the aggregate CET1 ratio decline by 3.4 percentage points to 6.9% in the third year; driven in order of importance by:
    • Credit losses (including the large counterparty loss) – 6.6 ppts
    • Growth in RWA – 1.4 ppts
    • Dividends and other capital deductions – 1.4 ppts
    • The operational risk event for misconduct risk – 0.7 ppts
    • Underlying profits, which offset the gross decline in the CET1 ratio by 6.7 ppts to arrive at the net decline of 3.4 ppts
  • Mitigating actions improved the aggregate CET1 ratio by 1.1 ppts by year three to 8%; these actions included 1) reductions in lending, 2) additional interest rate repricing and 3) operating expense reductions.
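The waterfall above ties out arithmetically. A minimal sketch of the reconciliation (the figures are the RBNZ aggregates quoted above; the arithmetic is mine):

```python
# Reconciliation of the aggregate CET1 waterfall quoted above
# (all figures in percentage points of risk weighted assets).
start_cet1 = 10.3

drivers = {
    "credit losses (incl. counterparty loss)": -6.6,
    "RWA growth": -1.4,
    "dividends and other capital deductions": -1.4,
    "operational risk event": -0.7,
    "underlying profits": +6.7,  # offsets the gross decline
}

trough_cet1 = start_cet1 + sum(drivers.values())   # 6.9% in year three
net_decline = start_cet1 - trough_cet1             # 3.4 ppts
post_mitigation = trough_cet1 + 1.1                # mitigating actions add 1.1 ppts

print(round(net_decline, 1), round(trough_cet1, 1), round(post_mitigation, 1))
# 3.4 6.9 8.0
```

The gross decline of 10.1 ppts is largely offset by underlying profits, which is worth keeping in mind when assessing how much of the resilience depends on pre-provision earnings holding up.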

There is not a lot of detail on individual bank outcomes. In the combined scenario, individual bank CET1 ratios declined to between 6.4% and 7.4%, versus the 6.9% aggregate result. The individual buffer ratios fell to between 1.2% and 1.4% at their low points (no aggregate minimum buffer was reported).

Some observations on the outcomes of the RBNZ 2017 stress test

The fact that the major banks can maintain significant buffers above minimum capital requirements during quite severe stress scenarios offers a degree of comfort, especially when you factor in the absence of mitigating responses. A minor quibble here, but it is worth noting that the aggregate data the RBNZ uses to discuss the stress testing results does not map neatly to the minimum requirements and capital buffers applied at the individual bank level. A 5.4 ppt buffer over the 4.5% CET1 minimum requirement equates to 9.9%, not 10.3%. Looking at Figure 1 in the “outcomes” paper also shows that there was a narrower range in the CCB at its low point than there was for the CET1 ratio, so part of the CCB decline observed in the stress test may be attributable to shortfalls at either the Tier 1 or Total Capital Ratio levels rather than CET1. A small point, but it does matter when interpreting what the results mean for the target capital structure and how to respond.
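For what it is worth, the quibble is easy to verify (my arithmetic, using the RBNZ aggregates quoted above):

```python
# The aggregate buffer quoted by the RBNZ does not reconcile exactly with
# the aggregate CET1 ratio if the 4.5% CET1 minimum is the reference point.
cet1_minimum = 4.5       # Basel CET1 minimum requirement (%)
aggregate_buffer = 5.4   # aggregate buffer ratio reported (ppts)
aggregate_cet1 = 10.3    # aggregate starting CET1 ratio reported (%)

implied_cet1 = cet1_minimum + aggregate_buffer  # 9.9%
gap = aggregate_cet1 - implied_cet1             # 0.4 ppts unexplained
print(round(implied_cet1, 1), round(gap, 1))
```

The 0.4 ppt gap is what suggests the reported buffer is being measured against something other than the CET1 minimum alone.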

The RBNZ is clearly correct to question the reliability of mitigating actions and the potential for some actions, such as tightening of lending standards, to generate negative feedback effects on asset prices and economic activity. However, it is equally open to question whether the market confidence that banks rely on to fund themselves and otherwise conduct business would remain resilient in the face of a three-year continuous decline in capital ratios. So I do not think we can take too much confidence in the pre-mitigation outcomes alone; the mitigating responses matter just as much.

I have always thought of the capital buffer as simply “buying time” for management to recognise the problem and craft a response that addresses the core problems in the business while creating positive momentum in capital formation. The critical question in stress testing is how much time the markets will grant before they start to hold back from dealing with your bank. Markets do not necessarily expect a magic bullet, but they do expect to see positive momentum and a coherent narrative. It would also be useful to distinguish between a core set of actions that could reasonably be relied on and other actions that are less reliable or come at a higher cost to the business.

It is hard to comment on the specific mitigating actions since the paper only reports an aggregate benefit of 1.1 ppts over the three years, but I can make the following general observations:

  • Reductions in lending: The potential for reduced lending to generate negative feedback effects on asset prices and economic activity is a very valid concern but I do struggle to reconcile a 35% decline in house prices with a scenario in which the loans the banking system has outstanding to this sector do not appear to have declined.
    • I can’t see any specific numbers in the RBNZ paper but that is the inference I draw if the overall loan book has not declined, which seems to be implied by the statement that the 20% RWA growth over the first three years of the scenario was primarily due to higher risk weights.
    • Loan principal is progressively being repaid on performing loans, and this balance sheet shrinkage is amplified in the scenario by elevated defaults, while the rate of new lending, which would otherwise be the driver of growth in outstanding balances, must be slowing if house prices are falling by such a large amount. In addition, the reduced volume of new loans being written is, I assume, for lower amounts than was the case prior to the decline in house prices.
    • I am very happy to be set straight on this part of the modelling but the numbers don’t quite add up for me. If I am right then a loan book that is stable or even declining in value may be what is implied by the scenario rather than something that adds further to the stress on capital ratios. At the very least, winding back loan growth assumptions relative to the benign base case seems a reasonable response.
  • Repricing: I can’t tell from the RBNZ paper how significant this factor was in contributing to the 1.1 percentage point 3 year improvement in CET1 but I am guessing it was reasonably material. Materiality therefore requires that the numbers be subject to a higher level of scrutiny.
    • History does offer a reasonable body of evidence that Australian and NZ banks have had the capacity to reprice loans under stress and in response to higher funding costs. The question is whether the collapse in trust in big banks has undermined the value of the repricing option they have traditionally benefited from.
    • I do believe that some of the critiques of bank repricing are not well founded, but that does not change the realpolitik of the likely public and government push back should banks attempt to do so.
    • So the answer here is probably yes; the benefits of this particular mitigating action are likely not as reliable as they have been in the past. At the very least, there is likely to be a higher cost to using them.
  • The contribution of RWA growth to the decline in the capital ratio noted in the RBNZ paper is also worth calling out. There is not a lot of detail in the paper but it does appear that the 20% increase in RWA over the first three years of the scenario was driven primarily by an increase in the average credit RW from 45% to 54%.
    • This seems to imply that there was a significant cycle driven increase in capital requirements over the course of the scenario that was not driven by an increase in loans outstanding.
    • I believe that this kind of capital measurement driven impact on capital ratios is fundamentally different from the impact of actual losses and higher new lending but it is treated as equivalent for the purposes of the analysis. This looks to me like a category error; a decline in a capital ratio due to higher risk weights is not the same thing for the purposes of solvency as a loss due to a loan defaulting.
    • The solution probably lies in a better designed approach to counter cyclical buffers (see my post here and here for background) and the regulatory treatment of expected loss, but the stress testing analysis suffers by simply noting the outcome without going behind what that component of the decline in capital ratio actually represents.
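The mechanical effect of risk weight migration can be illustrated with a stylised example (my own numbers, not RBNZ data): holding the loan book and the dollar amount of capital constant, a rise in the average risk weight from 45% to 54% is a 20% increase in RWA and therefore a proportionate fall in the capital ratio, with no loss incurred.

```python
# Stylised example: higher average risk weights inflate RWA and depress
# the CET1 ratio even though no loss has occurred and lending is flat.
loans = 100.0
capital = loans * 0.45 * 0.103       # CET1 dollars implied by a 10.3% ratio

rwa_before = loans * 0.45            # average credit risk weight 45%
rwa_after = loans * 0.54             # average credit risk weight 54% (+20% RWA)

ratio_before = capital / rwa_before  # 10.3%
ratio_after = capital / rwa_after    # ~8.6%, purely a measurement effect

print(round(100 * ratio_before, 1), round(100 * ratio_after, 1))
```

The decline from 10.3% to roughly 8.6% in this sketch occurs without a single dollar of loss, which is the category distinction the analysis glosses over.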

Deposit growth under a stress scenario

I also struggled with the statement in Section 5 of the RBNZ paper that “Banks expected strong growth in retail deposits, in line with their experience during the Global Financial Crisis”.

  • This statement seems to reflect the intuitive view that bank deposits increase under adverse conditions as people sell risky assets and put their money in banks. But we also know that selling a risky asset requires someone else to buy it, so the increase in cash in the account of the seller is offset by the decrease in the account of the buyer. There was an increase in bank deposits during the GFC, but the simple “sell risky assets and put your money in the bank” story does not seem to explain why it happened.
  • So what do we know about the GFC? Firstly, big banks continued to grow their loan books, and we know that bank credit creation leads to deposit creation. The GFC was also a scenario where the collapse of securitisation markets saw lending for residential mortgages migrate back to big bank balance sheets; I think this also creates a net increase in deposits. Banks were also paying down foreign borrowings, which I think is also positive for deposit creation via the balance of payments, though this channel is murkier. We also observed money migrating from equities to property lending. Selling risky assets is, by itself, a wash for deposits; the deposit creation comes as the cash in the hands of the seller gets leveraged up to support new credit creation via the increased property loans, which are typically geared much more highly than other types of risk assets. The shift from equity to property also seems to be driven by the typical monetary policy strategy of reducing interest rates.
  • So it is not clear to me that the pool of deposits grows under the conditions of the RBNZ scenario. We do have the likelihood that people are selling risky assets, but we seem to be missing a number of the elements specific to the GFC that saw new deposits get created in the banking system. The only deposit formation positive I can see is maybe via the balance of payments but, as noted above, this channel is very murky and hard to understand.
  • The other interesting question is whether bank deposits continue to be a safe haven for New Zealanders in future crises, given that the RBNZ has implemented an Open Bank Resolution regime that exposes bank deposits to the risk of being bailed in on a pari passu basis with other unsecured bank creditors. This is a unique feature of the NZ financial system, which even eschews the limited guarantees of bank deposits that many other systems see as essential to maintaining the confidence of depositors under stress.
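The loans-create-deposits mechanics described above can be made concrete with a toy double entry example (my own stylised numbers): a sale of risky assets between bank customers just moves deposits between accounts, while a new bank loan credited to a borrower’s account adds to the aggregate pool.

```python
# Toy illustration: asset sales between depositors are a wash for aggregate
# deposits; new bank lending is what creates them.
deposits = {"seller": 100.0, "buyer": 100.0}

# Seller sells shares to buyer for 50: deposits move, the total is unchanged.
deposits["seller"] += 50.0
deposits["buyer"] -= 50.0
assert sum(deposits.values()) == 200.0

# The bank writes a new loan of 80, crediting the borrower's account:
# aggregate deposits rise by the amount of the loan.
deposits["buyer"] += 80.0
assert sum(deposits.values()) == 280.0
```

This is why the GFC-specific channels (loan growth, securitisation migrating back on balance sheet) matter: without new credit creation, the asset sales alone net to zero.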

I may well be missing something here, so I am very happy to hear the other side to any of the observations I have offered above. I am a big believer in the value of stress testing, which is why I think it is so important to get it right.

Tony

Worth Reading “The Money Formula” by Paul Wilmott and David Orrell.

The full title of this book, co-written by Paul Wilmott and David Orrell, is “The Money Formula: Dodgy Finance, Pseudo Science, and How Mathematicians Took over the Markets”. There are plenty of critiques of modelling and quantitative finance by outsiders throwing rocks, but Wilmott is a quant and brings an insider’s technical knowledge to the question of what these tools can do, can’t do and, perhaps most importantly, should not be used to do. Consequently, the book offers a more nuanced perspective on the strengths and limitations of quantitative finance, as opposed to the “let’s scrap the whole thing” school of thought. I have made some more detailed notes which follow the structure of the book, but this post focuses on a couple of ideas I found especially interesting or useful.

I am not a quant so my comments should be read with that in mind, but the core idea I took away is that, much as quants would want it otherwise, markets are not determined by fundamental laws, deterministic or probabilistic, that allow risk to be measured with precision. These ideas work reasonably well within their “zone of validity”, but a more complete answer (or model) has to recognise where the zones stop and uncertainty rules. Wilmott and Orrell argue market outcomes are better thought of as the “emergent result of complex transactions”. The role of money in these emergent results is especially important, as is the capacity of models themselves to materially reshape the risk of the markets they are attempting to measure.

The Role of Money

Some quotes I have drawn from Chapter 8 will let the authors speak for themselves on the role of money …

“Consider … the nature of money. Standard economic definitions of money concentrate on its roles as a “medium of exchange,” a “store of value,” and a “unit of account.” Economists such as Paul Samuelson have focused in particular on the first, defining money as “anything that serves as a commonly accepted medium of exchange.” … Money is therefore not something important in itself; it is only a kind of token. The overall picture is of the economy as a giant barter system, with money acting as an inert facilitator.” (emphasis added)

“However … money is far more interesting than that, and actually harbors its own kind of lively, dualistic properties. In particular, it merges two things, number and value, which have very different properties: number lives in the abstract, virtual world of mathematics, while valued objects live in the real world. But money seems to be an active part of the system. So ignoring it misses important relationships. The tension between these contradictory aspects is what gives money its powerful and paradoxical qualities.” (emphasis added)

“The real and the virtual become blurred, in physics or in finance. And just as Newtonian theories break down in physics, so our Newtonian approach to money breaks down in economics. In particular, one consequence is that we have tended to take debt less seriously than we should.” (emphasis added)

“Instead of facing up to the intrinsically uncertain nature of money and the economy, relaxing some of those tidy assumptions, accepting that markets have emergent properties that resist reduction to simple laws, and building a new and more realistic theory of economics, quants instead glommed on to the idea that, when a system is unpredictable, you can just switch to making probabilistic predictions.” (emphasis added)

“The efficient market hypothesis, for example, was based on the mechanical analogy that markets are stable and perturbed randomly by the actions of atomistic individuals. This led to probabilistic risk-analysis tools such as VaR. However, in reality, the “atoms” are not independent, but are closely linked … The result is the non-equilibrium behaviour … observed in real markets. Markets are unpredictable not because they are efficient, but because of a financial version of the uncertainty principle.” (emphasis added)

The Role of Models

Wilmott & Orrell devote a lot of attention to the ways in which models no longer just describe, but start to influence, the markets being modelled, mostly by encouraging people to take on more risk based in part on a false sense of security …

“Because of the bankers’ insistence on treating complex finance as a university end-of-term exam in probability theory, many of the risks in the system are hidden. And when risks are hidden, one is led into a false sense of security. More risk is taken so that when the inevitable happens, it is worse than it could have been. Eventually the probabilities break down, disastrous events become correlated, the cascade of dominoes is triggered, and we have systemic risk …. None of this would matter if the numbers were small … but the numbers are huge” (Chapter 10 – emphasis added)

They see High Frequency Trading as the area most likely to give rise to a future systemic crisis, but they also make a broader point about the tension between efficiency and resilience …

“With complex systems, there is usually a trade-off between efficiency and robustness …. Introducing friction into the system – for example by putting regulatory brakes on HFT – will slow the markets, but also make them more transparent and reliable. If we want a more robust and resilient system then we probably need to agree to forego some efficiency” (Chapter 10 – emphasis added)

The Laws of Finance

Wilmott and Orrell note the extent to which finance has attempted to identify laws which are analogous to the laws of physics and the ways in which these “laws” have proved to be more of a rough guide.

“… the “law of supply and demand” … states that the market for a particular product has a certain supply, which tends to increase as the price goes up (more suppliers enter the market). There is also a certain demand for the product, which increases as the price goes down.”

“… while the supply and demand picture might capture a general fuzzy principle, it is far from being a law. For one thing, there is no such thing as a stable “demand” that we can measure independently – there are only transactions.”

“Also, the desire for a product is not independent of supply, or other factors, so it isn’t possible to think of supply and demand as two separate lines. Part of the attraction of luxury goods – or for that matter more basic things, such as housing – is exactly that their supply is limited. And when their price goes up, they are often perceived as more desirable, not less.” (emphasis added)

This example is relevant for banking systems (such as Australia’s) where residential mortgage lending dominates the balance sheets of the banks. Even more so given that public debate of the risks associated with housing often seems to be predicated on the Economics 101 version of the laws of supply and demand.

The Power (and Danger) of Ideas

A recurring theme throughout the book is the ways in which economists and quants have borrowed ideas from physics without recognising the limitations of the analogies and assumptions they have relied on to do so. Wilmott and Orrell credit Sir Isaac Newton as one of the inspirations behind Adam Smith’s idea of the “Invisible Hand” co-ordinating the self interested actions of individuals for the good of society. When the quantum revolution saw physics embrace a probabilistic approach, economists followed.

I don’t think Wilmott and Orrell make this point directly, but a recurring thought while reading the book was the power of ideas not just to interpret the underlying reality but also to shape the way the economy and society develop, not always for the better:

  • Economic laws that drive markets towards equilibrium as their natural state
  • The “invisible hand” operating in markets to reconcile individual self interest with optimal outcomes for society as a whole
  • The Efficient Market Hypothesis as an explanation for why markets are unpredictable

These ideas have widely influenced quantitative finance in a variety of domains and they all contribute useful insights; the key is to not lose sight of their zone of validity.

“… Finance … took exactly the wrong lesson from the quantum revolution. It held on to its Newtonian, mechanistic, symmetric picture of an intrinsically stable economy guided to equilibrium by Adam Smith’s invisible hand. But it adopted the probabilistic mathematics of stochastic calculus.” (Chapter 8 – emphasis added)

Where to from here?

It should be obvious by now that the authors are arguing that risk and reward cannot be reduced to hard numbers in the way that physics has used similar principles and tools to generate practical insights into how the world works. Applying a bit of simple math in finance seems to open the door to getting some control over an unpredictable world and, even better, to pursuing optimisation strategies that allow the cognoscenti to optimise the balance between risk and reward. There is room for more complex math as well for those so inclined, but the book sides with the increasingly widely held view that simple math is enough to get you into trouble and further complexity is best avoided if possible.

Wilmott and Orrell highlight mathematical biology in general, and a book by Jim Murray on the topic, as a source of better ways to approach many of the more difficult modelling challenges in finance and economics. They start by listing a series of phenomena in biological models that seem to be useful analogues for what happens in financial markets. They concede that the models used in mathematical biology are almost all “toy” models. None of these models offer precise or determined outcomes, but all can be used to explain what is happening in nature and offer insights into solutions for problems like disease control, epidemics, conservation etc.

The approach they advocate seems to have a lot in common with the Agent Based Modelling approach that Andrew Haldane references (see his paper “Tails of the Unexpected”) and that is the focus of Bookstaber’s book (“The End of Theory”).

In their words …

“Embrace the fact that the models are toy, and learn to work within any limitations.”

“Focus more attention on measuring and managing resulting model risk, and less time on complicated new products.”

“… only by remaining both skeptical and agile can we learn. Keep your models simple, but remember they are just things you made up, and be ready to update them as new information comes in.”

I fear I have not done the book justice but I got a lot out of it and can recommend it highly.