Climate change – a central banking perspective

A BIS paper titled “Green Swan 2 – Climate change and Covid-19: reflections on efficiency versus resilience” initially caught my attention because of the reference to the tension between efficiency and resilience. This tension is, for me at least, one of the issues that has tended to be ignored in the pursuit of growth and optimised solutions. The paper (together with a longer companion piece discussed below) mainly deals with the challenges that climate change creates for central banks but I think there are also some insights to be drawn on what it means for bank capital management.

A core argument in the paper is that challenges like climate change and pandemics ….

“… require us to rethink the trade-offs between efficiency and resilience of our socio-economic systems … one way to address this issue is to think about buffers or some necessary degree of redundancy for absorbing such large shocks. Countries build FX reserves, banks maintain capital buffers as required by regulators, and so on. Perhaps similar “buffers” could be used in other areas of our societies. For example, could it be time to reassess our production systems, which are meant to be lean and less costly for maximum efficiency?”

The paper draws on a (much longer and more technical) BIS research paper titled “The green swan: Central banking and financial stability in the age of climate change”. Both papers contain the usual caveat that the views expressed do not necessarily reflect those of their respective institutions. With that warning noted, this post draws on both papers to make some observations about what the papers say, and what this means for bank capital management.

There is a lot of content in the combined papers but the points that resonated the most with me were:

  1. Climate change shares some of the features of a Black Swan event but is better thought of as a distinct type of risk which the authors label a “Green Swan”.
  2. Green swan problems are created in part by choices we have made regarding the value of efficiency over resilience – part of the solution lies in rethinking these choices but this will not be easy.
  3. Climate change is a “collective action” problem which cannot be addressed by individual actors (including banks) operating independently – market based solutions like a carbon price may also be insufficient to bring about a solution that does not involve an unacceptable level of financial disruption.
  4. Scenario analysis (including stress testing) appears to be one of the better tools for dealing with climate change and similar types of risk – but it needs to be used differently (by both the supervised and the supervisors) from the way it is applied to conventional risks.

I am not an expert on climate change modelling, but Chapter 3 of the second paper also has what looks to be a useful overview of the models used to analyse climate change and how the outputs of these models are used to generate economic impacts.

Black, white and green swans

Climate change clearly operates in the domain of radical uncertainty. As such it shares some common elements with “black swan” events; in particular the fact that conventional risk models and analysis are not well suited to measuring and managing the potential adverse impacts. It is equally important however to understand the ways in which climate change differs from a classic black swan event. There is a longer list but the ones that I found most relevant were:

  1. Predictability – Black Swans are, by definition, not predictable whereas the potential for adverse Climate Change outcomes is well understood even if not universally accepted. The point is that understanding the potential for adverse impact means we have a choice about what to do about it.
  2. Impact – Black Swan events can have substantial impacts but the system can recover (e.g. the GFC has left a lasting impact but economic activity did recover once the losses were absorbed). The impacts of climate change, in contrast, may be irreversible and have the potential to result in people dying in large numbers.

Given the conceptual differences, the authors classify Climate Change as a distinct form which they label a “Green Swan”. To the best of my knowledge, this may be the first time the term has been used in this way. That said, the general point they are making seems to be quite similar to what other authors have labelled as “Grey Rhinos” or “Black Elephants” (the latter an obvious allusion to the “elephant in the room”, a large risk that is visible to everyone but no one wants to address).

Figure – A typology of swans
Categorising climate risk

The papers distinguish two main channels through which climate change can affect financial stability – physical risks and transition risks.

Physical risks are defined as

… “those risks that arise from the interaction of climate-related hazards […] with the vulnerability of exposure to human and natural systems” (Batten et al (2016)). They represent the economic costs and financial losses due to increasing frequency and severity of climate-related weather events (eg storms, floods or heat waves) and the effects of long-term changes in climate patterns (eg ocean acidification, rising sea levels or changes in precipitation). The losses incurred by firms across different financial portfolios (eg loans, equities, bonds) can make them more fragile.

Transition risks are defined as those

“… associated with the uncertain financial impacts that could result from a rapid low-carbon transition, including policy changes, reputational impacts, technological breakthroughs or limitations, and shifts in market preferences and social norms.”

A rapid and ambitious transition to lower emissions, for example, would obviously be desirable from the perspective of addressing climate change but might also mean that a large fraction of proven reserves of fossil fuel cannot be extracted, becoming “stranded assets”. The write-down of the value of these assets could have systemic consequences for the financial system. This transition might occur in response to policy changes or by virtue of some technological breakthrough (e.g. the problem of generating cheap energy from nuclear fusion is solved).

Efficiency versus resilience

I started this post with a quote from the first (shorter) paper regarding the way in which Covid-19 has drawn attention to the extent to which the pursuit of efficiency has made our economies more fragile. The paper explores the ways in which the pandemic exhibits many of the same features that we see in the climate change problem, and how the global response to it might offer some insights into how we should respond to climate change.

The paper is a useful reminder of the nature of the problem but I am less confident that it offers a solution that will work without some form of regulation or public sector investment in the desired level of redundancy. The paper cites bank capital buffers introduced post GFC as an example of what to do but this was a regulated outcome that would most likely not be acceptable for non-financial companies in countries that remain committed to free market ideology.

The Economist published an article on this question that offered numerous examples of similar problems that illustrate the propensity of “humanity, at least as represented by the world’s governments … to ignore them until forced to react”.

Thomas Friedman’s article (“How we broke the world”) is also worth reading on this question …

If recent weeks have shown us anything, it’s that the world is not just flat. It’s fragile.

And we’re the ones who made it that way with our own hands. Just look around. Over the past 20 years, we’ve been steadily removing man-made and natural buffers, redundancies, regulations and norms that provide resilience and protection when big systems — be they ecological, geopolitical or financial — get stressed. We’ve been recklessly removing these buffers out of an obsession with short-term efficiency and growth, or without thinking at all.

The New York Times, 30 May 2020
Managing collective action problems

The second paper, in particular, argues that it is important to improve our understanding of the costs of climate change and to ensure that these costs are incorporated into the prices that drive the resources we allocate to dealing with the challenge (e.g. via a carbon price or tax). However one of its key conclusions is that relying on markets to solve the problem is unlikely to be sufficient even with the help of some form of carbon price that reflects a more complete account of the costs of our current carbon based economy.

In short, the development and improvement of forward-looking risk assessment and climate-related regulation will be essential, but they will not suffice to preserve financial stability in the age of climate change: the deep uncertainty involved and the need for structural transformation of the global socioeconomic system mean that no single model or scenario can provide sufficient information to private and public decision-makers. A corollary is that the integration of climate-related risks into prudential regulation and (to the extent possible) into monetary policy would not suffice to trigger a shift capable of hedging the whole system against green swan events.

The green swan: Central banking and financial stability in the age of climate change; Chapter 5 (page 66)
Using scenario based methodologies to assess climate related risks

Both papers highlight the limitations of trying to measure and understand climate change using conventional probability based risk management tools. The one area they do see as worth pursuing is using scenario based approaches. This makes sense to me but it is also important to distinguish this kind of analysis from the standard stress testing used to help calibrate capital buffers.

The standard application of stress testing takes a severe but plausible macroeconomic scenario, such as a deep recession, and determines the likely impact on capital adequacy ratios. This offers a disciplined way of deciding how much capital surplus is required to support the risk appetite choices a bank has made in pursuit of its business objectives.
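To make that arithmetic concrete, here is a minimal sketch of the calculation. The starting ratio, loss and risk-weighted asset figures are purely hypothetical and are not drawn from either BIS paper.

```python
def stressed_cet1_ratio(cet1_capital, rwa, stress_loss, rwa_growth):
    """Project a CET1 ratio under a single stress scenario.

    cet1_capital: starting CET1 capital ($bn) - hypothetical
    rwa: starting risk weighted assets ($bn) - hypothetical
    stress_loss: cumulative after-tax loss under the scenario ($bn) - hypothetical
    rwa_growth: proportional RWA increase as credit quality migrates - hypothetical
    """
    stressed_capital = cet1_capital - stress_loss
    stressed_rwa = rwa * (1 + rwa_growth)
    return stressed_capital / stressed_rwa

starting_ratio = 45.0 / 400.0  # 11.25% CET1 to start
stressed_ratio = stressed_cet1_ratio(45.0, 400.0, stress_loss=10.0, rwa_growth=0.10)
buffer_consumed = starting_ratio - stressed_ratio
print(f"Stressed CET1 {stressed_ratio:.2%}, buffer consumed {buffer_consumed:.2%}")
```

The capital surplus question is then simply whether the buffer the bank holds over its minimum requirement is larger than the drawdown the scenario produces.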

A simplistic application of climate-based stress testing might take the same approach; i.e. work out how much the scenario impacts capital and ensure that the buffer is sufficient to absorb the impact. That, I think, is not the right conclusion and my reading of the BIS papers is that they are not advocating it either. The value of the scenario based modelling is to first get a handle on the size of the problem and how exposed the bank is to it. A capital response may be required but the answer may also be to change the nature of your exposure to the risk. That may involve reduced risk limits but it may also involve active participation in collective action to address the underlying problem. A capital management response may be part of the solution but it is far from the first step.

Conclusion

I have only scratched the surface of this topic in this post but the two papers it references are worth reading if you are interested in the question of what climate change, and related Green Swan or Black Elephant problems, mean for the banking system and for central banking. There is a bit more technical detail in the appendix below but it is likely only of interest for people working at the sharp end of trying to measure and manage the problem.

I want to dig deeper into the question of how you use stress testing to assess climate change and related types of risk but that is a topic best left for another post.

Tony – From the outside

Appendix – Modelling the impacts of climate change

Section 3 of the longer paper (“Measuring climate-related risks with scenario-based approaches”) discusses the limitations of the models that are typically used to generate estimates of the ecological and financial impacts of climate change scenarios. There is plenty of material there for climate sceptics but it also helps true believers understand the limits of what they can actually know, and how coming to terms with the radical uncertainty about the way climate change plays out shapes the nature of our response.

I have copied some extracts from the chapter below that will give you a flavour of what it has to say. It is pretty technical so be warned …

“… the standard approach to modelling financial risk consisting in extrapolating historical values (eg PD, market prices) is no longer valid in a world that is fundamentally reshaped by climate change (Weitzman (2011), Kunreuther et al (2013)). In other words, green swan events cannot be captured by traditional risk management.

The current situation can be characterised as an “epistemological obstacle” (Bachelard (1938)). The latter refers to how scientific methods and “intellectual habits that were useful and healthy” under certain circumstances, can progressively become problematic and hamper scientific research. Epistemological obstacles do not refer to the difficulty or complexity inherent to the object studied (eg measuring climate-related risks) but to the difficulty related to the need of redefining the problem”

Page 21

nothing less than an epistemological break (Bachelard, 1938) or a “paradigm shift” (Kuhn (1962)) is needed today to overcome this obstacle and more adequately approach climate-related risks (Pereira da Silva (2019a)).

In fact, precisely an epistemological break may be taking place in the financial sector: recently emerged methodologies aim to assess climate-related risks while relying on the fundamental hypothesis that, given the lack of historical financial data related to climate change and the deep uncertainty involved, new approaches based on the analysis of prospective scenarios are needed. Unlike probabilistic approaches to financial risk management, they seek to set up plausible hypotheses for the future. This can help financial institutions integrate climate-related risks into their strategic and operational procedures (eg for the purpose of asset allocation, credit rating or insurance underwriting) and financial supervisors assess the vulnerability of specific institutions or the financial system as a whole

Climate-economic models and forward-looking risk analysis are important and can still be improved, but they will not suffice to provide all the information required to hedge against “green swan” events.

As a result of these limitations, two main avenues of action have been proposed. We argue that they should be pursued in parallel rather than in an exclusive manner. First, central banks and supervisors could explore different approaches that can better account for the uncertain and nonlinear features of climate-related risks. Three particular research avenues (see Box 5 below) consist in: (i) working with non-equilibrium models; (ii) conducting sensitivity analyses; and (iii) conducting case studies focusing on specific risks and/or transmission channels. Nevertheless, the descriptive and normative power of these alternative approaches remain limited by the sources of deep and radical uncertainty related to climate change discussed above. That is, the catalytic power of scenario-based analysis, even when grounded in approaches such as non-equilibrium models, will not be sufficient to guide decision-making towards a low-carbon transition.

As a result of this, the second avenue from the perspective of maintaining system stability consists in “going beyond models” and in developing more holistic approaches that can better embrace the deep or radical uncertainty of climate change as well as the need for system-wide action (Aglietta and Espagne (2016), Barmes (2019), Chenet et al (2019a), Ryan-Collins (2019), Svartzman et al (2019)). 

Pages 42 – 43

Embracing deep or radical uncertainty therefore calls for a second “epistemological break” to shift from a management of risks approach to one that seeks to assure the resilience of complex adaptive systems in the face of such uncertainty (Fath et al (2015), Schoon and van der Leeuw (2015)). In this view, the current efforts aimed at measuring, managing and supervising climate-related risks will only make sense if they take place within a much broader evolution involving coordination with monetary and fiscal authorities, as well as broader societal changes such as a better integration of sustainability into financial and economic decision-making.

Page 48

Probabilities disguising uncertainty

In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.

Barack Obama commenting on making the decision whether to attack a target which evidence suggested could be Osama Bin Laden

This quote is drawn from an article that John Kay published on his website under the title “The point of probabilities”. The point he is making is similar to one touched on in a Bank Underground post that I discussed in a recent post on my blog. Kay’s article is short and worth reading.

Tony

Possible pitfalls of a 1-in-X approach to financial stability – Bank Underground

Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed are those of the authors, and are not necessarily those of the Bank of England, or its policy committees. Posting on this blog, Adam Brinley Codd and Andrew Gimber argue that false confidence in people’s ability to calculate probabilities of rare events might end up worsening the crises regulators are trying to prevent.

The post concludes with their personal observations about how best to deal with this meta-uncertainty.

Policymakers could avoid talking about probabilities altogether. Instead of a 1-in-X event, the Bank of England’s Annual Cyclical Scenario is described as a “coherent ‘tail risk’ scenario”.

Policymakers could avoid some of the cognitive biases that afflict people’s thinking about low-probability events, by rephrasing low-probability events in terms of less extreme numbers. A “100-year” flood has a 1% chance of happening in any given year, but anyone who lives into their 70s is more likely than not to see one in their lifetime.
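The lifetime reframing is simple compounding of the annual probability (assuming, for illustration, that the 1% chance is independent from year to year):

$$P(\text{at least one 100-year flood in 70 years}) = 1 - (1 - 0.01)^{70} \approx 0.50$$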

Policymakers could be vocal about the fact that there are worse outcomes beyond the 1-in-X point of the distribution.

— Read on bankunderground.co.uk/2020/02/06/possible-pitfalls-of-a-1-in-x-approach-to-financial-stability/

Thinking aloud about Australian bank ROE

I have been wanting to put something down on the question of Australian major bank ROE for a while. The issue generates a lot of heat but the public discussion I have observed has been truncated, in my opinion, by misconceptions.

I think we can agree that banks need to be profitable to be healthy and a healthy banking system underpins the health of the economy as a whole. Excessive profitability however is clearly bad for consumers, business and for the economy as a whole. The problem is determining what level of profitability is excessive. This post is unlikely to be the final word on this topic but hopefully it introduces a couple of considerations that seem to me to be largely missing from the public debate.

Most of what I read on this topic seems to treat the ROE of the Australian majors as self evidently excessive and focuses on what to do about it. Exhibit A is the reported ROE which in the 2019 half year updates varied from 10.05% to 14.10%. This is much less than it was but still substantially better than what is being achieved by most banks outside Australia and by the smaller local banks. Exhibit B is the fact that the Australian banking system is an oligopoly, which almost by definition earns excess profits.

Reported ROE exceeds COE – case closed

Any discussion of ROE must be anchored by the estimated Cost of Equity (COE), the minimum return that investors require to hold equity risk. There are a variety of ways of calculating this but all of them generate a number that is much less than the ROE the majors currently earn. So case closed.

There is no question that the Australian majors cover their cost of equity, but it is less clear to me that the margin of excess profitability is as excessive as claimed.

Corporate finance 101 teaches us that we can derive a company’s cost of equity using the Capital Asset Pricing Model (CAPM), which holds that the required return is equal to the Risk Free Return plus the Equity Risk Premium (ERP) multiplied by the extent to which the return on the individual stock is correlated with the market as a whole (its beta). The general idea of being paid a premium for taking on equity risk makes sense but there are a bunch of issues with the CAPM once you get into the detail, one of the more topical being what to do when the risk free rate approaches zero.

I don’t want to get into the detail of those issues here but will assume for the purposes of this post that a rate of return in the order of 8-10% can be defended as a minimum acceptable return. I recognise that some of the more mechanical applications of the CAPM might generate a figure lower than this if they simply apply a fixed ERP to the current risk free rate.
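For reference, the CAPM arithmetic looks like this; the inputs below are purely illustrative assumptions, not estimates I am endorsing:

$$COE = r_f + \beta \times ERP \approx 1\% + 1.2 \times 5\% = 7\%$$

Anchoring the calculation mechanically to a near-zero risk-free rate is what drags the answer below the 8-10% range I am assuming here.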

Two reasons why a simple comparison of ROE and COE may be misleading

  1. Banking is an inherently cyclical business and long term investors require a return that compensates them for accepting this volatility in returns.
  2. Book value does not define market value

Banking is a highly cyclical business – who knew?

It is often asserted that banking is a low risk, “utility” style business and hence that shareholders should expect commensurately low returns. The commentators making these assertions tend to focus on the fact that the GFC demonstrated that it is difficult (arguably impossible) to allow large banks to fail without imposing significant collateral damage on the rest of the economy. Banks receive public sector support to varying degrees that reduces their risk of failure and hence the risk to shareholders. A variation of this argument is that higher bank capital requirements post the GFC have reduced the risk of investing in a bank by reducing the risk of insolvency.

There is no question that banks do occupy a privileged space in the economy due to the central bank liquidity support that is not available to other companies. This privilege (sometimes referred to as a “social licence”) is I think an argument for tempering the kinds of ROE targeted by the banks but it does not necessarily make them a true utility style investment whose earnings are largely unaffected by cyclical downturns.

The reality is that bank ROE will vary materially depending on the state of the credit cycle and this inherent cyclicality is probably accentuated by accounting for loan losses and prudential capital requirements. Loan losses for Australian banks are currently (October 2019) close to their cyclical low points and can be expected to increase markedly when the economy eventually moves into a downturn or outright recession. Exactly how much downside in ROE we can expect is open to debate but history suggests that loan losses could easily be 5 times higher than what we observe under normal economic conditions.

There is also the issue of how often this can be expected to happen. Again using history as a guide for the base rate, it seems that downturns might be expected every 7-10 years on average and long periods without a downturn seem to be associated with increased risk of more severe and prolonged periods of reduced economic activity.

What kind of risk premium does an investor require for this cyclicality? The question may be academic for shareholders who seek to trade in and out of bank stocks based on their view of the state of the cycle but I will assume that banks seek to cater to the concerns and interests of long term shareholders. The answer for these shareholders obviously depends on how frequent and how severe you expect the downturns to be, but back of the envelope calculations suggest to me that you would want ROE during the benign part of the credit cycle to be at least 200bp over the COE and maybe 300bp to compensate for the cyclical risk.
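A minimal sketch of that back-of-the-envelope logic: if a downturn arrives on average once every $n$ years and knocks $D$ percentage points off ROE in the downturn year, then for the return averaged across the cycle to still cover the COE, the benign-year ROE needs a premium of roughly $D/n$:

$$\frac{(n-1)\,R_{benign} + (R_{benign} - D)}{n} = COE \quad\Rightarrow\quad R_{benign} = COE + \frac{D}{n}$$

With a downturn hit in the order of 20 to 30 percentage points of ROE (an illustrative figure) occurring roughly once a decade, the premium works out at around 200 to 300bp, which is where the range quoted above comes from.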

Good risk management capabilities can mitigate this inherent volatility but not eliminate it; banks are inherently cyclical investments on the front line of the business cycle. Conversely, poor risk management or an aggressive growth strategy can have a disproportionately negative impact. It follows that investors will be inclined to pay a premium to book value for banks they believe have good risk management credentials. I will explore this point further in the discussion of book value versus market value.

Book Value versus Market Value

Apart from the cyclical factors discussed above, the simple fact that ROE is higher than COE is frequently cited as “proof” that ROE is excessive. It is important however to examine the unstated assumption that the market value of a bank should be determined by the book value of its equity. To the best of my knowledge, there is no empirical or conceptual basis for this assumption. There are a number of reasons why a company’s share price might trade at a premium or a discount to its book value as prescribed by the relevant accounting standards.

The market may be ascribing value to assets that are not recognised by the accounting standards. The money spent on financial control and risk management, for example, is largely expensed and hence not reflected in the book value of equity. That value however becomes apparent when the bank is under stress. These “investments” cannot eliminate the inherent cyclicality discussed above but they do mitigate those risks.

A culture built on sound risk management and financial control capabilities is difficult to value and won’t be reflected in book value except to the extent it results in conservative valuation and provisioning outcomes. It is however worth something. Investors will pay a premium for the banks they believe have these intangible strengths while discounting or avoiding altogether the shares of banks they believe do not.

Summing up …

This post is in no way an exhaustive treatment of the topic. Its more modest objective was simply to offer a couple of issues to consider before jumping to the conclusion that the ROE earned by the large Australian banks is excessive based on simplistic comparisons of point in time ROE versus mechanical derivations of the theoretical COE.

As always, it is entirely possible that I am missing something – if so let me know what it is ….

Tony

Alternative monetary and financial systems

We probably tend to take the monetary and financial system we have today for granted, somewhat like the air we breathe. I was also challenged during the week to describe a non-money future and my response was judged a failure to look outside the square. The best I could offer was to note that Star Trek imagines a society in which unlimited cheap energy coupled with replicators has made money redundant.

By chance, I came across a couple of articles in recent weeks that offer interesting perspectives on what money is and its role in the economy.

One was by Tyler Cowen marking the 75th anniversary of Bretton Woods but more imaginatively subtitled “Every era’s monetary and financial institutions are unimaginable until they are real”. The other was an interview with the German philosopher Stefan Heidenreich discussing his book titled “Money: For a Non-money Economy”.

The Bretton Woods agreement of course is not the system we have today but Cowen makes the point that the system we operate under today would appear equally unlikely to previous generations:

“Currencies are fiat, the ties to gold are gone, and most exchange rates for the major currencies are freely floating, with periodic central bank intervention to manipulate exchange rates. For all the criticism it receives, this arrangement has also proved to be a viable global monetary order, and it has been accompanied by an excellent overall record for global growth.

Yet this fiat monetary order might also have seemed, to previous generations of economists, unlikely to succeed. Fiat currencies were associated with the assignat hyperinflations of the French Revolution, the floating exchange rates and competitive devaluations of the 1920s were not a success, and it was hardly obvious that most of the world’s major central banks would pursue inflation targets of below 2%. Until recent times, the record of floating fiat currencies was mostly disastrous”

Cowen’s main message is that the lesson of history suggests that it is brave to assume that the monetary and financial institution status quo will hold forever – so what comes next?

This brings us to Stefan Heidenreich.

“Stefan Heidenreich believes that some day, money will seem like an ancient religion. In his recent book Money: For a Non-money Economy, the German philosopher and media theorist speculates on how the money-based global economy could soon transition to an entirely different system based on the algorithmic matching of goods and services. Such a system could match people with what they need at a given moment without relying on the concept of a stable, universal price — and, just possibly, do away with the vast inequities caused by the market.

If you find the idea of an economy without money hard to imagine, you’re not alone. As the saying goes, it’s easier to imagine the end of the world than the end of capitalism. But that very difficulty proves Heidenreich’s main point: We have to imagine what may sound like wild possibilities now in order to steer the future before it’s upon us. Getting rid of money could lead to what he calls a “leftist utopia” of equal distribution — or it could enable mass surveillance and algorithmic control on a whole new scale. Faced with the second option, Heidenreich says, we have no choice but to try to envision the first.”

“The Economy of the Future Won’t Rely on Money” Elvia Wilk (Medium 30 November 2018) https://medium.com/s/story/the-economy-of-the-future-wont-rely-on-money-5a703e0ad30b

It is not obvious to me that Heidenreich’s “matching” proposal provides a workable alternative to what we have today but that is not the point. The bigger point raised by both Cowen and Heidenreich is that what we have today is unlikely to be the system that governs our economic interactions in 50 years time so what is the alternative?

Tony

Building applied critical thinking into the structure of an organisation

This article in Bloomberg caught my attention. It is a background piece on a team known as the “Applied Critical Thinking” unit that has been operating inside the New York Federal Reserve since 2016.

The general ideas of contrarian thinking and recognising the limitations of what is and is not knowable are not huge innovations in themselves. What was interesting for me is the extent to which this unit can be thought of as a way of building that thought process into the structure of organisations that might otherwise tend towards consensus and groupthink built on simple certainties.

I have touched on this general topic in some previous posts. A review of Paul Wilmott and David Orrell’s book (The Money Formula), for example, discussed their use of the idea of a “Zone of Validity” to define the boundaries of what quantitative modelling could reveal about the financial system. Pixar (the digital animation company) also has some interesting examples of how a culture of candour and speaking truth to power can be built into the structure of an organisation rather than relying on slogans that people be brave or have courage.

I don’t have all the answers but this initiative by the NY Fed is I think worth watching. Something like this seems to me to have the potential to help address some of the culture problems that have undermined trust in large companies (it is not just the banks) and the financial system as a whole.

Tony

Every bank needs a cyclical capital buffer

This post sets out a case for a bank choosing to incorporate a discretionary Cyclical Buffer (CyB) into its Internal Capital Adequacy Assessment Process (ICAAP). The size of the buffer is a risk appetite choice each individual bank must make. The example I have used to illustrate the idea is calibrated to absorb the expected impact of an economic downturn that is severe but not necessarily a financial crisis style event. My objective is to illustrate the ways in which incorporating a Cyclical Buffer in the target capital structure offers:

  • an intuitive connection between a bank’s aggregate risk appetite and its target capital structure;
  • a means of more clearly defining the point where losses transition from expected to unexpected; and
  • a mechanism that reduces both the pro cyclicality of a risk sensitive capital regime and the tendency for the transition to unexpected losses to trigger a loss of confidence in the bank.

The value of improved clarity, coherence and consistency in the risk appetite settings is I think reasonably self evident. The need for greater clarity in the distinction between expected and unexpected loss is perhaps less so. The value of this Cyclical Buffer proposal ultimately depends on its capacity to enhance the resilience of the capital adequacy regime in the face of economic downturns without compromising its risk sensitivity.

There are no absolutes when we deal with what happens under stress but I believe a Cyclical Buffer such as is outlined in this post also has the potential to help mitigate the risk of loss of confidence in the bank when losses are no longer part of what stakeholders expect but have moved into the domain of uncertainty. I am not suggesting that this would solve the problem of financial crisis. I am suggesting that it is a relatively simple enhancement to a bank’s ICAAP that has the potential to make banks more resilient (and transparent) with no obvious downsides.

Capital 101

In Capital 101, we learn that capital is meant to cover “unexpected loss” and that there is a neat division between expected and unexpected loss. The extract below from an early BCBS publication sets out the standard explanation …

Figure 1 – Expected and Unexpected Loss

The BCBS publication from which this image is sourced explained that

“While it is never possible to know in advance the losses a bank will suffer in a particular year, a bank can forecast the average level of credit losses it can reasonably expect to experience. These losses are referred to as Expected Losses (EL) ….”

“One of the functions of bank capital is to provide a buffer to protect a bank’s debt holders against peak losses that exceed expected levels… Losses above expected levels are usually referred to as Unexpected Losses (UL) – institutions know they will occur now and then, but they cannot know in advance their timing or severity….”

“An Explanatory Note on the Basel II IRB Risk Weight Functions” BCBS July 2005

There was a time when the Internal Ratings Based approach, combining some elegant theory and relatively simple math, seemed to have all the answers:

  • A simple intuitive division between expected and unexpected loss
  • Allowing expected loss to be quantified and directly covered by risk margins in pricing while the required return on unexpected loss could be assigned to the cost of equity
  • A precise relationship between expected and unexpected loss, defined by the statistical parameters of the assumed loss distribution (sketched in the code below this list)
  • The capacity to “control” the risk of unexpected loss by applying seemingly unquestionably strong confidence levels (i.e. typically 1:1000 years plus) to the measurement of target capital requirements
  • It even seemed to offer a means of neatly calibrating the capital requirement to the probability of default of your target debt rating (e.g. a AA senior debt rating with a 5bp probability of default = a 99.95% confidence level; QED)

If only it was that simple … but expected loss is still a good place to start

In practice, the inherently cyclical nature of banking means that the line between expected and unexpected loss is not always as simple or clear as represented above. It would be tempting to believe that the transition to expected loan loss accounting will bring greater transparency to this question but I doubt that is the case. Regulatory Expected Loss (REL) is another possible candidate but again I believe it falls short of what would be desirable for drawing the line that signals where we are increasingly likely to have crossed from the domain of the expected to the unexpected.

The problem (from a capital adequacy perspective) with both IFRS9 and REL is that the “expected” value still depends on the state of the credit cycle at the time we are taking its measure. REL incorporates a Downturn measure of Loss Given Default (DLGD) but the other inputs (Probability of Default and Exposure at Default) are average values taken across a cycle, not the values we expect to experience at the peak of the cycle downturn.

We typically don’t know exactly when the credit cycle will turn down, or by how much and how long, but we can reasonably expect that it will turn down at some time in the future. Notwithstanding the “Great Moderation” thesis that gained currency prior to the GFC, the long run of history suggests that it is dangerous to bet against the probability of a severe downturn occurring once every 15 to 25 years. Incorporating a measure into the Internal Capital Adequacy Assessment Process (ICAAP) that captures this aspect of expected loss provides a useful reference point and a potential trigger for reviewing why the capital decline has exceeded expectations.

Uncertainty is by definition not measurable

One of the problems with advanced model based approaches like IRB is that banks experience large losses much more frequently than the models suggest they should. As a consequence, the seemingly high margins of safety implied by 1:1000 year plus confidence levels in the modelling do not appear to live up to their promise.

A better way of dealing with uncertainty

One of the core principles underpinning this proposal is that the boundary between risk (which can be measured with reasonable accuracy) and uncertainty (which cannot be measured with any degree of precision) probably lies around the 1:25 year confidence level (what we usually label a “severe recession”). I recognise that reasonable people might adopt a more conservative stance, arguing that the zone of validity of credit risk models caps out at 1:15 or 1:20 confidence levels, but I am reasonably confident that 1:25 defines the upper boundary of where credit risk models tend to find their limits. Each bank can make its own call on this aspect of risk calibration.

Inside this zone of validity, credit risk models coupled with stress testing and sensitivity analysis can be applied to generate a reasonably useful estimate of expected losses and capital impacts. There is of course no guarantee that the impacts will not exceed the estimate; that is why we have capital. The estimate does however define the rough limits of what we can claim to “know” about our risk profile.

The “expected versus unexpected” distinction is all a bit abstract – why does it matter?

Downturn loss is part of the risk reward equation of banking and manageable, especially if the cost of expected downturn losses has already been built into credit risk spreads. Managing the risk is easier however if a bank’s risk appetite statement has a clear sense of:

  • exactly what kind of expected downturn loss is consistent with the specific types of credit risk exposure the risk appetite otherwise allows (i.e. not just the current exposure but also any higher level of exposure that is consistent with credit risk appetite) and
  • the impact this would be expected to have on capital adequacy.

This type of analysis is done under the general heading of stress testing for both credit risk and capital adequacy but I have not often seen evidence that banks are translating the analysis and insight into a specific buffer assigned the task of absorbing expected downturn losses and the associated negative impact on capital adequacy. The Cyclical Buffer I have outlined in this post offers a means of more closely integrating the credit risk management framework and the Internal Capital Adequacy Assessment Process (ICAAP).

What gets you into trouble …

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so”

Commonly, possibly mistakenly, attributed to Mark Twain

This saying captures an important truth about the financial system. Some degree of volatility is part and parcel of the system but one of the key ingredients in a financial crisis or panic is when participants in the system are suddenly forced to change their view of what is safe and what is not.

This is one of the reasons why I believe that a more transparent framework for tracking the transition from expected to truly unexpected outcomes can add to the resilience of the financial system. Capital declines that have been pre-positioned in the eyes of key stakeholders as part and parcel of the bank risk reward equation are less likely to be a cause for concern or trigger for panic.

The equity and debt markets will still revise their valuations in response but the debt markets will have less reason to question the fundamental soundness of the bank if the capital decline lies within the pre-positioned operating parameters defined by the target cyclical buffer. This will be especially so to the extent that the Capital Conservation Buffer provides substantial layers of additional buffer to absorb the uncertainty and buy time to respond to it.

Calibrating the size of the Cyclical Buffer

Incorporating a Cyclical Buffer does not necessarily mean that a bank needs to hold more capital. It is likely to be sufficient to simply partition a set amount of capital that bank management believes will absorb the expected impact of a cyclical downturn. The remaining buffer capital over minimum requirements exists to absorb the uncertainty and ensure that confidence sensitive liabilities are well insulated from the impacts of that uncertainty.

But first we have to define what we mean by “THE CYCLE”. This is a term frequently employed in the discussion of bank capital requirements but open to a wide range of interpretation.

A useful start to calibrating the size of this cyclical buffer is to distinguish:

  • An economic or business cycle; which seems to be associated with moderate severity, short duration downturns occurring once every 7 to 10 years, and
  • The “financial cycle” (to use a term suggested by Claudio Borio) where we expect to observe downturns of greater severity and duration but lower frequency (say once every 25 years or more).

Every bank makes its own decision on risk appetite but, given these two choices, mine would be calibrated to, and hence resilient against, the less frequent but more severe and longer duration downturns associated with the financial cycle.

There is of course another layer of severity associated with a financial crisis. This poses an interesting challenge because it raises the question of whether a financial crisis is the result of some extreme external shock or of failures of risk management that allowed an endogenous build up of risk in the banking system. This kind of loss is I believe the domain of the Capital Conservation Buffer (CCB).

There is no question that banks must be resilient in the face of a financial crisis but my view is that this is a not something that should be considered an expected cost of banking.

Incorporating a cyclical buffer into the capital structure for an Australian D-SIB

Figure 2 below sets out an example of how this might work for an Australian D-SIB that has adopted APRA’s 10.5% CET1 “Unquestionably Strong” benchmark as the basis of its target capital structure. These banks have a substantial layer of CET1 capital that is nominally surplus to the formal prudential requirements but in practice is not, if the bank is to be considered “unquestionably strong” as defined by APRA. The capacity to weather a cyclical downturn might be implicit in the “Unquestionably Strong” benchmark but it is not transparent. In particular, it is not obvious how much CET1 can decline under a cyclical downturn while a bank is still deemed to be “Unquestionably Strong”.

Figure 2 – Incorporating a cyclical buffer into the target capital structure

The proposed Cyclical Buffer sits on top of the Capital Conservation Buffer and would be calibrated to absorb the increase in losses, and associated drawdowns on capital, expected to be experienced in the event of severe economic downturn. Exactly how severe is to some extent a question of risk appetite, unless of course regulators mandate a capital target that delivers a higher level of soundness than the bank would have chosen of its own volition.

In the example laid out in Figure 2, I have drawn the limit of risk appetite at the threshold of the Capital Conservation Buffer. This would be an 8% CET1 ratio for an Australian D-SIB but there is no fundamental reason for drawing the line on risk appetite at this threshold. Each bank has the choice of tolerating some level of incursion into the CCB (hence the dotted line extension of risk appetite). What matters is to have a clear line beyond which higher losses and lower capital ratios indicate that something truly unexpected is driving the outcomes being observed.
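A minimal sketch of how the layers described above stack up for this D-SIB example; the drawdown figures in the usage lines are hypothetical numbers chosen only to show the test:

```python
# CET1 layers for the Australian D-SIB example discussed above (percent of RWA)
MINIMUM_CET1 = 4.5            # prudential minimum CET1 requirement
CCB_PLUS_DSIB = 3.5           # capital conservation buffer incl. D-SIB surcharge (8.0% threshold)
UNQUESTIONABLY_STRONG = 10.5  # APRA benchmark used as the capital target

cyclical_buffer = UNQUESTIONABLY_STRONG - (MINIMUM_CET1 + CCB_PLUS_DSIB)  # 2.5% in this example

def within_risk_appetite(downturn_drawdown_pct):
    """True if an expected downturn drawdown is absorbed by the Cyclical Buffer,
    i.e. the CET1 ratio stays above the 8% CCB threshold."""
    return downturn_drawdown_pct <= cyclical_buffer

print(f"Cyclical Buffer = {cyclical_buffer:.1f}% of RWA")
print(within_risk_appetite(2.0))   # a hypothetical drawdown inside risk appetite
print(within_risk_appetite(3.0))   # a larger drawdown would breach the buffer
```

On these numbers the Cyclical Buffer is 2.5 percentage points of CET1, so a downturn expected to consume more than that would, by construction, sit outside the stated risk appetite.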

What about the prudential Counter-Cyclical Capital Buffer?

I have deliberately avoided using the term “counter” cyclical in this proposal to distinguish this bank controlled Cyclical Buffer (CyB) from its prudential counterpart, the Countercyclical Capital Buffer (CCyB), introduced under Basel III. My proposal is similar in concept to the variations on the CCyB being developed by the Bank of England and the Canadian OSFI. The RBNZ is also considering something similar in its review of “What counts as capital?”, where it has proposed that the CCyB should have a positive value (indicatively set at 1.5%) at all times except following a financial crisis (see paras 105-112 of the Review Paper for more detail).

My proposal is also differentiated from its prudential counterpart by the way in which the calibration of the size of the bank Cyclical Buffer offers a way for credit risk appetite to be more formally integrated with the Internal Capital Adequacy Assessment Process (ICAAP) that sets the overall target capital structure.

Summing up

  • Incorporating a Cyclical Buffer into the target capital structure offers a means of more closely integrating the risk exposure and capital adequacy elements of a bank’s risk appetite
  • A breach of the Cyclical Buffer creates a natural trigger point for reviewing whether the unexpected outcome was due to an unexpectedly large external shock, the result of credit exposure being riskier than expected, or some combination of the two
  • The role of the Capital Conservation Buffer in absorbing the uncertainty associated with risk appetite settings is much clearer if management of cyclical expected loss is assigned to the Cyclical Buffer

What am I missing …

Tony

Will Expected Loss loan provisioning reduce pro cyclicality?

I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading; Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interests the die-hard bank capital and credit tragics but I believe it has the potential to create some problems in the real world some way down the track.

Borio’s position is that:

  • Relative to the “incurred loss” approach to credit risk that precedes it, the new standard is likely to mitigate pro cyclicality to some extent;
  • But it will not be sufficient on its own to eliminate the risk of adverse pro cyclical impacts on the real economy;
  • So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of pro cyclicality) that will work in conjunction with, and complement, the operation of the loan loss provisions in managing credit risk.

There are two ways to respond to Claudio Borio’s observations on this topic:

  1. One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate pro cyclicality;
  2. The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan provisions.

Will ECL reduce the risk of pro cyclicality?

It is true that, relative to the incurred loss model, the ECL approach will allow loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it does give them more freedom to increase provisions without the constraint of this being seen to be a cynical device to “smooth” profits.

The problem I see in this assessment is that the real problems with the adequacy of loan provisioning occur when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.

This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of loans originated by the banks (or other risk management failures) because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from expected loss to the domain of uncertainty.

The solution to pro cyclicality lies in capital not expected loss

So the real issue is what to do about that. Borio argues that ECL helps, but you really need to address the problem via what he refers to as “capital filters” (what we might label counter cyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:

  • let accounting standards do what they do, don’t try to make them solve prudential problems;
  • construct a capital adequacy solution that complements the accounting based measurement of capital and profits.

Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.

In the interim, the main takeaway for me is that ECL is not enough on its own to address the problem of pro cyclicality and, more importantly, it is dangerous to think it can be.

Tony

Distinguishing luck and skill

Quantifying Luck’s Role in the Success Equation

“… we vastly underestimate the role of luck in what we see happening around us”

This post is inspired by a recent read of Michael Mauboussin’s book “The Success Equation: Untangling Skill and Luck in Business, Sports and Investing”. Mauboussin focuses on the fact that much of what we experience is a combination of skill and luck but we tend to be quite bad at distinguishing the two. It may not unlock the secret to success but, if you want to get better at untangling the contributions that skill and luck play in predicting or managing future outcomes, then this book still has much to offer.

“The argument here is not that you can precisely measure the contributions of skill and luck to any success or failure. But if you take concrete steps toward attempting to measure those relative contributions, you will make better decisions than people who think improperly about those issues or who don’t think about them at all.”

Structure wise, Mauboussin:

  • Starts with the conceptual foundations for thinking about the problem of distinguishing skill and luck,
  • Explores the analytical tools we can use to figure out the extent to which luck contributes to our achievements, successes and failures,
  • Finishes with some concrete suggestions about how to put the conceptual foundations and analytical tools to work in dealing with luck in decisions.

Conceptual foundations

It is always good to start by defining your terms; Mauboussin defines luck and skill as follows:

“Luck is a chance occurrence that affects a person or a group … [and] can be good or bad … [it] is out of one’s control and unpredictable”

Skill is defined as the “ability to use one’s knowledge effectively and readily in execution or performance.”

Applying the process that Mauboussin proposes requires that we first roughly distinguish where a specific activity or prediction fits on the continuum bookended by skill and luck. Mauboussin also clarifies that:

  • Luck and randomness are related but not the same: He distinguishes luck as operating at the level of the individual or small group while randomness operates at the level of the system where more persistent and reliable statistical patterns can be observed.
  • Expertise does not necessarily accumulate with experience: It is often assumed that doing something for a long time is sufficient to be an expert but Mauboussin argues that in activities that depend on skill, real expertise only comes about via deliberate practice based on improving performance in response to feedback on the ways in which the input generates the predicted outcome.

Mauboussin is not necessarily introducing anything new in his analysis of why we tend to be bad at distinguishing skill and luck. The fact that people tend to struggle with statistics is well-known. The value for me in this book lies largely in his discussion of the psychological dimension of the problem, which he highlights as exerting the most profound influence. The quote below captures an important insight that I wish I had understood forty years ago.

“The mechanisms that our minds use to make sense of the world are not well suited to accounting for the relative roles that skill and luck play in the events we see taking shape around us.”

The role of ideas, beliefs and narratives is a recurring theme in Mauboussin’s analysis of the problem of distinguishing skill and luck. Mauboussin notes that people seem to be pre-programmed to want to fit events into a narrative based on cause and effect. The fact that things sometimes just happen for no reason is not a satisfying narrative. We are particularly susceptible to attributing successful outcomes to skill, preferably our own, but we seem to be willing to extend the same presumption to other individuals who have been successful in an endeavour. It is a good story and we love stories so we suppress other explanations and come to see what happened as inevitable.

Some of the evidence we use to create these narratives will be drawn from what happened in specific examples of the activity, while we may also have access to data averaged over a larger sample of similar events. Irrespective, we seem to be predisposed to weigh the specific evidence more heavily in our intuitive judgement than we do the base rate averaged over many events (most likely based on statistics we don’t really understand). That said, statistical evidence can still be “useful” if it “proves” something we already believe; we seem to have an intuitive bias to seek evidence that supports what we believe. Not only do we fail to look for evidence that disproves our narrative, we tend to actively suppress any contrary evidence we encounter.

Analytical tools for navigating the skill luck continuum

We need tools and processes to help manage the tendency for our intuitive judgements to lead us astray and to avoid being misled by arguments that fall into the same trap or, worse, deliberately exploit these known weaknesses in our decision-making process.

One process proposed by Mauboussin for distinguishing skill from luck is to:

  • First form a generic judgement on what the expected accuracy of our prediction is likely to be (i.e. make a judgement on where the activity sits on the skill-luck continuum)
  • Next look at the available empirical or anecdotal evidence, distinguishing between the base rate for this type of activity (if it exists) and any specific evidence to hand
  • Then employ the following rule:
    • if the expected accuracy of the prediction is low (i.e. luck is likely to be a significant factor), you should place most of the weight on the base rate
    • if the expected accuracy is high (i.e. there is evidence that skill plays the prime role in determining the outcome of what you are attempting to predict), you can rely more on the specific case.
  • Finally, use the data to test whether the activity conforms to your original judgement of how skill and luck combine to generate the outcomes (a rough sketch of the weighting rule above follows the list).
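
To make the weighting rule concrete, here is a minimal sketch in Python. The function name, the figures and the 0-to-1 “skill weight” are my own illustrative assumptions rather than anything specified in the book; the point is simply that the further an activity sits toward the luck end of the continuum, the more weight the base rate should receive.

```python
def blended_estimate(base_rate: float, case_evidence: float, skill_weight: float) -> float:
    """Blend the base rate with case-specific evidence.

    skill_weight is a judgement between 0 (pure luck, rely on the base rate)
    and 1 (pure skill, rely on the specific case).
    """
    if not 0.0 <= skill_weight <= 1.0:
        raise ValueError("skill_weight must be between 0 and 1")
    return skill_weight * case_evidence + (1.0 - skill_weight) * base_rate


# Illustrative only: a fund returned 15% last year (specific evidence) while the
# long-run market base rate is 7%. Judging investing to sit well toward the luck
# end of the continuum (skill_weight = 0.2), most of the weight goes to the base rate.
print(blended_estimate(base_rate=0.07, case_evidence=0.15, skill_weight=0.2))  # ~0.086
```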

Figuring out where the activity sits on the skill-luck continuum is the critical first step and Mauboussin offers three methods for undertaking this part of the process: 1) The “Three Question” approach, 2) Simulation and 3) True Score Theory. I will focus here on the first method, which involves:

  1. First ask if you can easily assign a cause to the effect you are seeking to predict. In some instances the relationship will be relatively stable and linear (and hence relatively easy to predict) whereas the results of other activities are shaped by complex dependencies such as cumulative advantage and social preference. Skill can play a part in both activities but luck is likely to be a more significant factor in the latter group.
  2. Determine the rate of reversion to the mean: Slow reversion is consistent with activities dominated by skill, while rapid reversion indicates that luck is the more dominant influence (see the sketch after this list). Note however that complex activities where cumulative advantage and social preference shape the outcome may not have a well-defined mean to revert to. The distribution of outcomes for these activities frequently conforms to a power law (i.e. there are lots of small values and relatively few large values).
  3. Ask whether there is evidence that expert prediction is useful: When experts disagree widely and predict poorly, that is evidence that luck is a prime factor shaping outcomes.
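
The reversion question in the second item can be illustrated with a simple simulation. This is my own sketch, not an example from the book: each outcome is modelled as a persistent “skill” draw plus a fresh “luck” draw, and the period-to-period correlation of results serves as a rough proxy for how quickly they revert to the mean.

```python
import random

def simulate_outcomes(n_players: int, skill_sd: float, luck_sd: float, seed: int = 1):
    """Simulate two periods of outcomes where outcome = persistent skill + fresh luck."""
    rng = random.Random(seed)
    skills = [rng.gauss(0.0, skill_sd) for _ in range(n_players)]
    period1 = [s + rng.gauss(0.0, luck_sd) for s in skills]
    period2 = [s + rng.gauss(0.0, luck_sd) for s in skills]
    return period1, period2

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Skill-dominated activity: results persist, so reversion to the mean is slow.
p1, p2 = simulate_outcomes(n_players=10_000, skill_sd=1.0, luck_sd=0.2)
print(round(correlation(p1, p2), 2))   # close to 1

# Luck-dominated activity: results barely persist, so reversion is rapid.
p1, p2 = simulate_outcomes(n_players=10_000, skill_sd=0.2, luck_sd=1.0)
print(round(correlation(p1, p2), 2))   # close to 0
```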

One of the challenges with this process is figuring out how large a sample size you need to determine whether there is a reliable relationship between actions and outcomes that evidences skill. Another problem is that a reliable base rate may not always be available. That may be because the data has simply not been collected, or because a reliable base rate does not exist at all.

The absence of a reliable base rate to guide decisions is a feature of activities that do not have simple linear relationships between cause and effect. These activities also tend to fall into Nassim Taleb’s “black swan” domain. The fundamental lesson in this domain of decision making is to be aware of the risks associated with naively applying statistical probability based methods to the problem. Paul Wilmott and David Orrell use the idea of a “zone of validity” to make the same point in “The Money Formula”.

The need to understand power laws and the mechanisms that generate them also stands out in Mauboussin’s discussion of untangling skill and luck.

The presence of a power law depends in part on whether events are dependent on, or independent of, one another. In dependent systems, initial conditions matter and come to matter more and more as time goes on. The final outcomes are (sometimes surprisingly) sensitive to both minor variations in the initial conditions and to the path taken over time. Mauboussin notes that a number of mechanisms are responsible for this phenomenon, including preferential attachment, critical points and phase transitions.

“In some realms, independence and bell-shaped distributions of luck can explain much of what we see. But in activities such as the entertainment industry, success depends on social interaction. Whenever people can judge the quality of an item by several different criteria and are allowed to influence one another’s choices, luck will play a huge role in determining success or failure.”

“For example, if one song happens to be slightly more popular than another at just the right time, it will tend to become even more popular as people influence one another. Because of that effect, known as cumulative advantage, two songs of equal quality, or skill, will sell in substantially different numbers. …  skill does play a role in success and failure, but it can be overwhelmed by the influence of luck. In the jar model, the range of numbers in the luck jar is vastly greater than the range of numbers in the skill jar.”

“The process of social influence and cumulative advantage frequently generates a distribution that is best described by a power law.”

“The term power law comes from the fact that an exponent (or power) determines the slope of the line. One of the key features of distributions that follow a power law is that there are very few large values and lots of small values. As a result, the idea of an “average” has no meaning.”
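
The cumulative advantage mechanism described in the quotes above is easy to simulate. The sketch below is my own illustration, not an example from the book: 100 “songs” of identical quality each start with one sale, and every subsequent sale goes to a song with probability proportional to its sales so far. Early random breaks compound, so a few songs end up with a disproportionate share of total sales even though nothing distinguishes them on skill. Versions of this process with new entrants arriving over time (preferential attachment) are the ones that generate genuine power law distributions.

```python
import random

def cumulative_advantage(n_songs: int = 100, n_sales: int = 100_000, seed: int = 1):
    """Allocate sales one at a time, with probability proportional to sales so far."""
    rng = random.Random(seed)
    sales = [1] * n_songs            # identical starting point, i.e. equal "skill"
    for _ in range(n_sales):
        winner = rng.choices(range(n_songs), weights=sales, k=1)[0]
        sales[winner] += 1
    return sorted(sales, reverse=True)

sales = cumulative_advantage()
total = sum(sales)
print(sales[:5])                     # a handful of very large values
print(sum(sales[:5]) / total)        # the top 5 songs capture a disproportionate share
print(sales[len(sales) // 2])        # the median song sells far less
```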

Mauboussin’s discussion of power laws does not offer this specific example, but the idea that the average is meaningless is also true of loan losses when you are trying to measure expected loss over a full loan loss cycle. What we tend to observe is lots of relatively small values when economic conditions are benign and a few very large losses when the cycle turns down, probably amplified by endogenous factors embedded in bank balance sheets or business models. This has interesting and important implications for the concept of Expected Loss, which is a fundamental component of the advanced Internal Ratings Based approach to bank capital adequacy measurement.
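
A stylised illustration of that point, using made-up loss rates rather than any actual data: the cycle average (the basis of a through-the-cycle expected loss estimate) sits well above the loss rate observed in most years, because it is dominated by the rare downturn years.

```python
# Stylised annual loan loss rates (% of the portfolio) over an assumed ten-year
# cycle: benign years with small losses plus a couple of severe downturn years.
# The figures are illustrative assumptions, not data.
loss_rates = [0.10, 0.12, 0.08, 0.11, 0.09, 0.10, 0.13, 0.95, 1.60, 0.12]

mean_loss = sum(loss_rates) / len(loss_rates)
median_loss = sorted(loss_rates)[len(loss_rates) // 2]

print(f"cycle average loss rate: {mean_loss:.2f}%")    # 0.34%
print(f"typical (median) year:   {median_loss:.2f}%")  # 0.12%
```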

Mauboussin concludes with a list of ten suggestions for untangling and navigating the divide between luck and skill:

  1. Understand where you are on the luck-skill continuum
  2. Assess sample size, significance and swans
  3. Always consider a null hypothesis – is there evidence that would prove my belief wrong?
  4. Think carefully about feedback and rewards; High quality feedback is key to high performance. Where skill is more important, then deliberate practice is essential to improving performance. Where luck plays a strong role, the focus must be on process
  5. Make use of counterfactuals; To maintain an open mind about the future, it is very useful to keep an open mind about the past. History is a narrative of cause and effect but it is useful to reflect on how outcomes might have been different.
  6. Develop aids to guide and improve your skill; On the luck side of the continuum, skill is still relevant but luck makes the outcomes more probabilistic. So the focus must be on good process – especially one that takes account of behavioural biases. In the middle of the spectrum, the procedural is combined with the novel. Checklists can be useful here – especially when decisions must be made under stress. Where skill matters, the key is deliberate practice and being open to feedback
  7. Have a plan for strategic interactions. Where your opponent is more skilful or just stronger, then try to inject more luck into the interaction
  8. Make reversion to the mean work for you; Understand why reversion to the mean happens, to what degree it happens, and what exactly the mean is. Note that extreme events are unlikely to be repeated and, most importantly, recognise that the rate of reversion to the mean relates to the coefficient of correlation (see the sketch after this list)
  9. Develop useful statistics (i.e. statistics that are persistent and predictive)
  10. Know your limitations; We can do better at untangling skill and luck but must also recognise how much we don’t know. The realm may change such that the old rules no longer apply, and there are places where statistics do not apply at all
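
Suggestion 8 can be reduced to a simple formula. The sketch below is the standard statistical reading of that point rather than a worked example from the book: the expected next outcome shrinks the latest outcome toward the mean, and the correlation coefficient sets how much shrinkage occurs.

```python
def expected_next_outcome(mean: float, latest_outcome: float, correlation: float) -> float:
    """Shrink the latest outcome toward the mean.

    correlation is the period-to-period correlation of outcomes: close to 1 for
    skill-dominated activities (little reversion), close to 0 for luck-dominated
    ones (rapid reversion to the mean).
    """
    return mean + correlation * (latest_outcome - mean)

# A team that won 70% of its games in a league where the average is 50%:
print(expected_next_outcome(mean=0.50, latest_outcome=0.70, correlation=0.9))  # ~0.68 (skill-heavy)
print(expected_next_outcome(mean=0.50, latest_outcome=0.70, correlation=0.2))  # ~0.54 (luck-heavy)
```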

All in all, I found Mauboussin’s book very rewarding and can recommend it highly. Hopefully the above post does the book justice. I have also made some more detailed notes on the book here.

Tony

Minsky’s Financial Instability Hypothesis – Applications in Stress Testing?

One of the issues that we keep coming back to in stress testing is whether the financial system is inherently prone to instability and crisis, or whether it naturally tends towards equilibrium, with instability caused by external shocks. Any stress scenario that we design, or that we are asked to model, will fall somewhere along this spectrum, though I suspect most scenarios tend to be based on exogenous shocks. This touches on a long-standing area of economic debate and hence is not something that we can expect to resolve any time soon. I do however think it is useful to keep the question in mind when conducting stress testing and evaluating the outcomes.

From roughly the early 1980s until the GFC in 2008, the dominant economic paradigm was arguably that market forces, coupled with monetary and fiscal policy built on a sound understanding of how the economy works, meant that the business cycle was dead and that the primary challenge of policy was to engineer efficient capital allocation that maximised growth. The GFC obviously highlighted shortcomings with the conventional economic approach and drew attention to an alternative approach developed by Hyman Minsky, which he labelled the Financial Instability Hypothesis.

Minsky’s Financial Instability Hypothesis (FIH)

Minsky focused on borrowing and lending with varying margins of safety as a fundamental property of all capitalist economies, and identified three forms of financing (a simple classification sketch follows the list):

  • “Hedge” financing under which cash flow covers the repayment of principal and interest
  • “Speculative” financing under which cash flow covers interest but the principal repayments must be continually refinanced
  • “Ponzi” financing under which cash flow is insufficient to cover either interest or principal and the borrower is betting that appreciation in the value of the asset being financed will be sufficient to repay loan principal plus capitalised interest and generate a profit
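
A minimal sketch of how the three categories might be distinguished in practice. The function, the inputs and the figures are my own illustrative assumptions based on the definitions above, not a formula from Minsky.

```python
def minsky_category(cash_flow: float, interest_due: float, principal_due: float) -> str:
    """Classify a borrower using Minsky's three financing forms (simplified)."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # cash flow covers interest and principal repayments
    if cash_flow >= interest_due:
        return "speculative"  # interest covered, but principal must be refinanced
    return "ponzi"            # relies on asset appreciation or further borrowing

# Three borrowers with the same debt service obligations but different cash flows.
for cf in (120, 70, 30):
    print(cf, minsky_category(cash_flow=cf, interest_due=50, principal_due=60))
# 120 hedge, 70 speculative, 30 ponzi
```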

The terms that Minsky uses do not strictly conform to modern usage but his basic idea is clear; increasingly speculative lending tends to be associated with increasing fragility of borrowers and of the financial system as a whole. Ponzi financing is particularly problematic because the system is vulnerable to external shocks that restrict access to finance or trigger an asset devaluation cycle as borrowers sell their assets in order to reduce their leverage. The downward pressure on asset prices associated with the deleveraging process then puts further pressure on the capacity to repay the loans, and so on.

The term “Minsky moment” has been used to describe the inflexion point where debt levels become unsustainable and asset prices fall as investors seek to deleverage. Investor psychology is obviously one of the primary drivers in this three-stage cycle; investor optimism translates into a willingness to borrow and to pay more for assets, and the higher asset valuations in turn allow lenders to lend more against set loan-to-valuation caps. Lenders can also be caught up in the mood of optimism and take on more risk (e.g. via higher loan-to-valuation limits or more relaxed debt service coverage requirements). Minsky stated that “the fundamental assertion of the financial instability hypothesis is that the financial structure evolves from being robust to being fragile over a period in which the economy does well” (Financial Crises: Systemic or Idiosyncratic by Hyman Minsky, April 1991, p16).
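
The valuation-and-credit loop described above can be shown with a very stylised sketch. All of the numbers and the price response parameter below are assumptions chosen purely for illustration; the point is simply that, with a fixed loan-to-valuation cap, an initial rise in prices permits extra lending, which in turn supports further price rises and amplifies the original move.

```python
def lvr_feedback(price: float, prior_price: float, lvr_cap: float, sensitivity: float, rounds: int):
    """Illustrate the loop: higher price -> more credit at a fixed LVR cap -> higher price.

    sensitivity is an assumed fraction of the extra credit that feeds through to
    the next price rise; it is purely illustrative.
    """
    history = []
    for _ in range(rounds):
        extra_credit = lvr_cap * (price - prior_price)   # extra lending the higher valuation permits
        prior_price, price = price, price + sensitivity * extra_credit
        history.append(round(price))
    return history

# Assume optimism has already lifted the price of an asset from 500,000 to 520,000.
# With an 80% LVR cap, each round of extra credit feeds a further (smaller) price
# rise; with a stronger assumed sensitivity the loop would be explosive.
print(lvr_feedback(price=520_000, prior_price=500_000, lvr_cap=0.8, sensitivity=1.0, rounds=5))
# [536000, 548800, 559040, 567232, 573786]
```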

It should also be noted that a Minsky moment does not require an external shock; a simple change in investor outlook or risk tolerance could be sufficient to trigger the reversal. Minsky observed that the tendency of the endogenous process he described to lead to systemic fragility and instability is constrained by institutions and interventions that he described as “thwarting systems” (“Market Processes and Thwarting Systems” by P. Ferri and H. Minsky, November 1991, p2). However, Minsky’s FIH also assumes that there is a longer term cycle in which these constraints are gradually wound back, allowing more and more risk to accumulate in the system over successive business cycles.

What Minsky describes is similar to the idea of a long term “financial cycle” (25 years plus) that is distinct from the shorter duration “business cycle” (typically 7-10 years) – refer to this post “The financial cycle and macroeconomics: What have we learnt?” for more detail. An important feature of this longer term financial cycle is a process that gradually transforms the business institutions, decision-making conventions, and structures of market governance, including regulation, that contribute to the stability of capitalist economies.

The transformation process can be broken down into two components:

  1. winding back of regulation and
  2. increased risk taking

which in combination increase both the supply of and demand for risk. The process of regulatory relaxation can take a number of forms:

  • One dimension is regulatory capture; whereby the institutions designed to regulate and reduce excessive risk-taking are captured and weakened
  • A second dimension is regulatory relapse; reduced regulation may be justified on the rationale that things have changed and regulation is no longer needed, but there is often an ideological foundation, typically based on economic theory (e.g. the “Great Moderation” or the idea that market discipline underpins self-regulation).
  • A third dimension is regulatory escape; whereby the supply of risk is increased through financial innovation that escapes the regulatory net because the new financial products and practices were not conceived of when existing regulation was written.

Borrowers also take on more risk for a variety of reasons:

  • First, financial innovation provides new products that allow borrowers to take on more debt or which embed higher leverage inside the same nominal value of debt.
  • Second, market participants are also subject to gradual memory loss that increases their willingness to take on risk.

The changing taste for risk is also evident in cultural developments which can help explain the propensity for investors to buy shares or property. A greater proportion of the population currently invest in shares than was the case for their parents or grandparents. These individual investors are actively engaged in share investing in a way that would be unimaginable for the generations that preceded them. Owning your own home and ideally an investment property as well is an important objective for many Australians but less important in say Germany.

These changes in risk appetite can also weaken market discipline based constraints against excessive risk-taking. A book titled “The Origin of Financial Crises” by George Cooper (April 2008) is worth reading if you are interested in the ideas outlined above. A collection of Minsky’s papers can also be found here if you are interested in exploring his thinking more deeply.

I have been doing a bit of research lately, both on the question of what exactly Expected Loss “expects” and on the ways in which cycle downturns are defined. I may be missing something, but this distinction between endogenous and exogenous factors is largely absent from the discussion papers I have found so far, and from stress testing itself. I would greatly appreciate suggestions if anyone has come across good material on the issue.

Tony