Shout out to Tim Harford for this introduction to the study of how, in his words, ignorance can be deliberately produced. The technical term “agnotology” is, I suspect, unlikely to catch on but the underlying message is one worth understanding. At a minimum it is a handy addition to your Scrabble dictionary.
The article was originally published in March 2017 but I only came across it recently via this podcast interview Harford did with Cardiff Garcia on “The New Bazaar”. The context in 2017 was the successful campaign for the US presidency that Donald Trump ran during 2016, with a bit of Brexit thrown in, but this is a challenge that is not going away anytime soon.
Harford notes that it is tempting to think that the answer to the challenge posed by what has come to be known as a post-truth society lies in a better process to establish the facts:
The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts.
He affirms the need to have some agreement on how we distinguish facts from opinions and assertions but he cautions that this is unlikely to solve the problem. He cites the tobacco industry response to the early evidence that smoking causes cancer to illustrate why facts alone are not enough.
A good place to start is by delving into why facts alone are not enough – a few extracts from the article hopefully capture the main lessons:
Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again…
Tempting as it is to fight lies with facts, there are three problems with that strategy…
The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind…
There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?…
In the war of ideas, boredom and distraction are powerful weapons. The endgame of these distractions is that matters of vital importance become too boring to bother reporting…
There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth…
The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe…
We see what we want to see — and we reject the facts that threaten our sense of who we are…
When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”…
Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees…
When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.
So what are we to do?
Harford cites a study that explores the value of scientific curiosity
What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”
It is of course entirely possible that Tim Harford’s assessment is just appealing to my own bias. I will admit that one of the things I always looked for when hiring, or working with, people was curiosity. Such people are surprisingly rare but (IMHO) worth their weight in gold. An intellectually curious mind makes up for a lot of other areas where the person might not be perfect in terms of skills or experience. The general point (I think) also ties to the often cited observation that people with lots of knowledge can sometimes be prone to not being so street smart. Nassim Taleb makes this argument in nearly everything he writes.
So Tim Harford might not be offering the entire answer but I think his article is worth reading on two counts:
Firstly as a cautionary tale against expecting that all debates and disputes can be resolved by simply establishing the “facts”
Secondly as a reminder of the power of a curious mind and the value of the never-ending search for “what am I missing?”
A BIS paper titled “Green Swan 2 – Climate change and Covid-19: reflections on efficiency versus resilience” initially caught my attention because of the reference to the tension between efficiency versus resilience. This tension is, for me at least, one of the issues that has tended to be ignored in the pursuit of growth and optimised solutions. The papers mainly deal with the challenges that climate change creates for central banks but I think there are also some insights to be drawn on what it means for bank capital management.
A core argument in the paper is that challenges like climate change and pandemics ….
“… require us to rethink the trade-offs between efficiency and resilience of our socio-economic systems … one way to address this issue is to think about buffers or some necessary degree of redundancy for absorbing such large shocks. Countries build FX reserves, banks maintain capital buffers as required by regulators, and so on. Perhaps similar “buffers” could be used in other areas of our societies. For example, could it be time to reassess our production systems, which are meant to be lean and less costly for maximum efficiency?”
There is a lot of content in the combined papers but the points that resonated the most with me were
Climate change shares some of the features of a Black Swan event but is better thought of as a distinct type of risk, which the authors label a “Green Swan”.
Green swan problems are created in part by choices we have made regarding the value of efficiency over resilience – part of the solution lies in rethinking these choices but this will not be easy.
Climate change is a “collective action” problem which cannot be addressed by individual actors (including banks) operating independently – market based solutions like a carbon price may also be insufficient to bring about a solution that does not involve an unacceptable level of financial disruption.
Scenario analysis (including stress testing) appears to be one of the better tools for dealing with climate change and similar types of risk – but it needs to be used differently (by both the supervised and the supervisors) from the way it is applied to conventional risks.
I am not an expert on climate change modelling, but Chapter 3 of the second paper also has what looks to be a useful overview of the models used to analyse climate change and how the outputs of these models are used to generate economic impacts.
Black, white and green swans
Climate change clearly operates in the domain of radical uncertainty. As such it shares some common elements with “black swan” events; in particular the fact that conventional risk models and analysis are not well suited to measuring and managing the potential adverse impacts. It is equally important however to understand the ways in which climate change differs from a classic black swan event. There is a longer list but the ones that I found most relevant were:
Predictability – Black Swans are, by definition, not predictable whereas the potential for adverse Climate Change outcomes is well understood even if not universally accepted. The point is that understanding the potential for adverse impact means we have a choice about what to do about it.
Impact – Black Swan events can have substantial impacts but the system can recover (e.g. the GFC has left a lasting impact but economic activity did recover once the losses were absorbed). The impacts of climate change, in contrast, may be irreversible and have the potential to result in people dying in large numbers.
Given the conceptual differences, the authors classify Climate Change as a distinct form which they label a “Green Swan”. To the best of my knowledge, this may be the first time the term has been used in this way. That said, the general point they are making seems to be quite similar to what other authors have labelled as “Grey Rhinos” or “Black Elephants” (the latter an obvious allusion to the “elephant in the room”, a large risk that is visible to everyone but no one wants to address).
A typology of swans
Categorising climate risk
The papers distinguish two main channels through which climate change can affect financial stability – physical risks and transition risks.
Physical risks are defined as
… “those risks that arise from the interaction of climate-related hazards […] with the vulnerability of exposure to human and natural systems” (Batten et al (2016)). They represent the economic costs and financial losses due to increasing frequency and severity of climate-related weather events (eg storms, floods or heat waves) and the effects of long-term changes in climate patterns (eg ocean acidification, rising sea levels or changes in precipitation). The losses incurred by firms across different financial portfolios (eg loans, equities, bonds) can make them more fragile.
Transition risks are defined as those
“… associated with the uncertain financial impacts that could result from a rapid low-carbon transition, including policy changes, reputational impacts, technological breakthroughs or limitations, and shifts in market preferences and social norms.
A rapid and ambitious transition to lower emissions, for example, would obviously be desirable from the perspective of addressing climate change but might also mean that a large fraction of proven reserves of fossil fuel cannot be extracted, becoming “stranded assets”. The write-down of the value of these assets may have potentially systemic consequences for the financial system. This transition might occur in response to policy changes or by virtue of some technological breakthrough (e.g. the problem of generating cheap energy by nuclear fusion is solved).
Efficiency versus resilience
I started this post with a quote from the first (shorter) paper regarding the way in which Covid-19 had drawn attention to the extent to which the pursuit of efficiency had made our economies more fragile. The paper explores the ways in which the Covid-19 pandemic exhibits many of the same features that we see in the climate change problem and how the global response to the pandemic might offer some insights into how we should respond to climate change.
The paper is a useful reminder of the nature of the problem but I am less confident that it offers a solution that will work without some form of regulation or public sector investment in the desired level of redundancy. The paper cites bank capital buffers introduced post GFC as an example of what to do but this was a regulated outcome that would most likely not be acceptable for non-financial companies in countries that remain committed to free market ideology.
The Economist published an article on this question that offered numerous examples of similar problems that illustrate the propensity of “humanity, at least as represented by the world’s governments … to ignore them until forced to react”.
If recent weeks have shown us anything, it’s that the world is not just flat. It’s fragile.
And we’re the ones who made it that way with our own hands. Just look around. Over the past 20 years, we’ve been steadily removing man-made and natural buffers, redundancies, regulations and norms that provide resilience and protection when big systems — be they ecological, geopolitical or financial — get stressed. We’ve been recklessly removing these buffers out of an obsession with short-term efficiency and growth, or without thinking at all.
The New York Times, 30 May 2020
Managing collective action problems
The second paper, in particular, argues that it is important to improve our understanding of the costs of climate change and to ensure that these costs are incorporated into the prices that drive the resources we allocate to dealing with the challenge (e.g. via a carbon price or tax). However one of its key conclusions is that relying on markets to solve the problem is unlikely to be sufficient even with the help of some form of carbon price that reflects a more complete account of the costs of our current carbon based economy.
In short, the development and improvement of forward-looking risk assessment and climate-related regulation will be essential, but they will not suffice to preserve financial stability in the age of climate change: the deep uncertainty involved and the need for structural transformation of the global socioeconomic system mean that no single model or scenario can provide sufficient information to private and public decision-makers. A corollary is that the integration of climate-related risks into prudential regulation and (to the extent possible) into monetary policy would not suffice to trigger a shift capable of hedging the whole system against green swan events.
The green swan: Central banking and financial stability in the age of climate change; Chapter 5 (page 66)
Using scenario based methodologies to assess climate related risks
Both papers highlight the limitations of trying to measure and understand climate change using conventional probability based risk management tools. The one area they do see as worth pursuing is using scenario based approaches. This makes sense to me but it is also important to distinguish this kind of analysis from the standard stress testing used to help calibrate capital buffers.
The standard application of stress testing takes a severe but plausible macro economic scenario such as a severe recession and determines what are the likely impacts on capital adequacy ratios. This offers a disciplined way of deciding how much capital surplus is required to support the risk appetite choices a bank has made in pursuit of its business objectives.
A simplistic application of climate based stress testing scenarios might take the same approach; i.e. work out how much the scenario impacts the capital and ensure that the buffer is sufficient to absorb the impact. That I think is not the right conclusion and my read of the BIS papers is that they are not advocating that either. The value of the scenario based modelling is to first get a handle on the size of the problem and how exposed the bank is to it. A capital response may be required but the answer may also be to change the nature of your exposure to the risk. That may involve reduced risk limits but it may also involve active participation in collective action to address the underlying problem. A capital management response may be part of the solution but it is far from the first step.
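To make the mechanics of the standard approach concrete, the first-pass capital impact calculation can be sketched as follows. All figures here are hypothetical assumptions of mine, not taken from the BIS papers or any actual bank, and a real exercise would also stress risk-weighted assets and project earnings over several years:

```python
# Stylised stress test impact on a capital ratio (all figures assumed).
# A real exercise would also stress risk-weighted assets and capture
# earnings offsets over a multi-year scenario horizon.
rwa = 400.0            # risk-weighted assets, $bn (assumed)
capital = 42.0         # CET1 capital, $bn (assumed)
scenario_loss = 10.0   # post-tax loss under the stress scenario, $bn (assumed)

start_ratio = capital / rwa
stressed_ratio = (capital - scenario_loss) / rwa
buffer_consumed = start_ratio - stressed_ratio

print(f"start {start_ratio:.2%}, stressed {stressed_ratio:.2%}, "
      f"buffer consumed {buffer_consumed:.2%}")
```

Under these assumed numbers the scenario consumes 2.5 percentage points of capital ratio; the point of the post is that sizing the buffer to absorb that impact is only one of the possible responses.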
I have only scratched the surface of this topic in this post but the two papers it references are worth reading if you are interested in the question of what climate change, and related Green Swan or Black Elephant problems, mean for the banking system and for central banking. There is a bit more technical detail in the appendix below but it is likely only of interest for people working at the sharp end of trying to measure and manage the problem.
I want to dig deeper into the question of how you use stress testing to assess climate change and related types of risk but that is a topic best left for another post.
Tony – From the outside
Appendix – Modelling the impacts of climate change
Section 3 of the longer paper (“Measuring climate-related risks with scenario-based approaches”) discusses the limitations of the models that are typically used to generate estimates of the ecological and financial impacts of climate change scenarios. There is plenty of material there for climate sceptics but it also assists true believers to understand the limits of what they can actually know and how coming to terms with the radical uncertainty of how climate change plays out shapes the nature of our response.
I have copied some extracts from the chapter below that will give you a flavour of what it has to say. It is pretty technical so be warned …
“… the standard approach to modelling financial risk consisting in extrapolating historical values (eg PD, market prices) is no longer valid in a world that is fundamentally reshaped by climate change (Weitzman (2011), Kunreuther et al (2013)). In other words, green swan events cannot be captured by traditional risk management.
The current situation can be characterised as an “epistemological obstacle” (Bachelard (1938)). The latter refers to how scientific methods and “intellectual habits that were useful and healthy” under certain circumstances, can progressively become problematic and hamper scientific research. Epistemological obstacles do not refer to the difficulty or complexity inherent to the object studied (eg measuring climate-related risks) but to the difficulty related to the need of redefining the problem”
nothing less than an epistemological break (Bachelard (1938)) or a “paradigm shift” (Kuhn (1962)) is needed today to overcome this obstacle and more adequately approach climate-related risks (Pereira da Silva (2019a)).
In fact, precisely an epistemological break may be taking place in the financial sector: recently emerged methodologies aim to assess climate-related risks while relying on the fundamental hypothesis that, given the lack of historical financial data related to climate change and the deep uncertainty involved, new approaches based on the analysis of prospective scenarios are needed. Unlike probabilistic approaches to financial risk management, they seek to set up plausible hypotheses for the future. This can help financial institutions integrate climate-related risks into their strategic and operational procedures (eg for the purpose of asset allocation, credit rating or insurance underwriting) and financial supervisors assess the vulnerability of specific institutions or the financial system as a whole
Climate-economic models and forward-looking risk analysis are important and can still be improved, but they will not suffice to provide all the information required to hedge against “green swan” events.
As a result of these limitations, two main avenues of action have been proposed. We argue that they should be pursued in parallel rather than in an exclusive manner. First, central banks and supervisors could explore different approaches that can better account for the uncertain and nonlinear features of climate-related risks. Three particular research avenues (see Box 5 below) consist in: (i) working with non-equilibrium models; (ii) conducting sensitivity analyses; and (iii) conducting case studies focusing on specific risks and/or transmission channels. Nevertheless, the descriptive and normative power of these alternative approaches remain limited by the sources of deep and radical uncertainty related to climate change discussed above. That is, the catalytic power of scenario-based analysis, even when grounded in approaches such as non-equilibrium models, will not be sufficient to guide decision-making towards a low-carbon transition.
As a result of this, the second avenue from the perspective of maintaining system stability consists in “going beyond models” and in developing more holistic approaches that can better embrace the deep or radical uncertainty of climate change as well as the need for system-wide action (Aglietta and Espagne (2016), Barmes (2019), Chenet et al (2019a), Ryan-Collins (2019), Svartzman et al (2019)).
Pages 42–43
Embracing deep or radical uncertainty therefore calls for a second “epistemological break” to shift from a management of risks approach to one that seeks to assure the resilience of complex adaptive systems in the face of such uncertainty (Fath et al (2015), Schoon and van der Leeuw (2015)). In this view, the current efforts aimed at measuring, managing and supervising climate-related risks will only make sense if they take place within a much broader evolution involving coordination with monetary and fiscal authorities, as well as broader societal changes such as a better integration of sustainability into financial and economic decision-making.
Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed are those of the authors, and are not necessarily those of the Bank of England, or its policy committees. Posting on this blog, Adam Brinley Codd and Andrew Gimber argue that false confidence in people’s ability to calculate probabilities of rare events might end up worsening the crises regulators are trying to prevent.
The post concludes with their personal observations about how best to deal with this meta-uncertainty.
Policymakers could avoid talking about probabilities altogether. Instead of a 1-in-X event, the Bank of England’s Annual Cyclical Scenario is described as a “coherent ‘tail risk’ scenario”.
Policymakers could avoid some of the cognitive biases that afflict people’s thinking about low-probability events by rephrasing low-probability events in terms of less extreme numbers. A “100-year” flood has a 1% chance of happening in any given year, but anyone who lives into their 70s is more likely than not to see one in their lifetime.
Policymakers could be vocal about the fact that there are worse outcomes beyond the 1-in-X point of the distribution.
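The lifetime flood arithmetic in the second point is easy to verify with a quick sketch (mine, not from the Bank Underground post itself):

```python
# Chance of seeing at least one "1-in-100-year" flood (1% annual probability)
# at some point over a 70-year horizon: 1 - P(no flood in any year).
p_annual = 0.01
years = 70
p_at_least_one = 1 - (1 - p_annual) ** years
print(f"{p_at_least_one:.1%}")  # 50.5% - just over even odds
```

So a “1-in-100-year” label understates how likely a 70-year-old is to have lived through one, which is exactly the reframing the authors suggest.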
I have been wanting to put something down on the question of Australian major bank ROE for a while. The issue generates a lot of heat but the public discussion I have observed has been truncated, in my opinion, by misconceptions.
I think we can agree that banks need to be profitable to be healthy and a healthy banking system underpins the health of the economy as a whole. Excessive profitability however is clearly bad for consumers, business and for the economy as a whole. The problem is determining what level of profitability is excessive. This post is unlikely to be the final word on this topic but hopefully it introduces a couple of considerations that seem to me to be largely missing from the public debate.
Most of what I read on this topic seems to treat the ROE of the Australian majors as self evidently excessive and focuses on what to do about it. Exhibit A is the reported ROE, which in the 2019 half year updates varied from 10.05% to 14.10%. This is much less than it was but still substantially better than what is being achieved by most banks outside Australia and by the smaller local banks. Exhibit B is the fact that the Australian banking system is an oligopoly, which almost by definition earns excess profits.
Reported ROE exceeds COE – case closed
Any discussion of ROE must be anchored by the estimated Cost of Equity (COE), the minimum return that investors require to hold equity risk. There are a variety of ways of calculating this but all of them generate a number that is much less than the ROE the majors currently earn. So case closed.
There is no question that the Australian majors cover their cost of equity, but it is less clear to me that the margin of excess profitability is as excessive as claimed.
Corporate finance 101 teaches us that we can derive a company’s cost of equity using the Capital Asset Pricing Model (CAPM), which holds that the required return is equal to the Risk Free Return plus the Equity Risk Premium (ERP) multiplied by the extent to which the return on the individual stock is correlated with the market as a whole (its “beta”). The general idea of being paid a premium for taking on equity risk makes sense but there are a bunch of issues with the CAPM once you get into the detail, one of the more topical being what to do when the risk free rate approaches zero.
I don’t want to get into the detail of those issues here but will assume for the purposes of this post that a rate of return in the order of 8-10% can be defended as a minimum acceptable return. I recognise that some of the more mechanical applications of the CAPM might generate a figure lower than this if they simply apply a fixed ERP to the current risk free rate.
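For illustration, the CAPM calculation itself is one line. The inputs below are assumptions chosen purely for the example, not estimates for any particular bank:

```python
# CAPM cost of equity: risk-free rate plus beta times the equity risk premium.
# All inputs are illustrative assumptions, not market estimates.
def cost_of_equity(risk_free: float, beta: float, erp: float) -> float:
    return risk_free + beta * erp

# e.g. a 1.5% risk-free rate, a beta of 1.2 and a 6% ERP
coe = cost_of_equity(risk_free=0.015, beta=1.2, erp=0.06)
print(f"{coe:.1%}")  # 8.7%
```

Note how sensitive the answer is to the risk-free input: a mechanical application with a near-zero risk-free rate and a fixed ERP drags the result well below the 8-10% range defended above.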
Two reasons why a simple comparison of ROE and COE may be misleading
Banking is an inherently cyclical business and long term investors require a return that compensates them for accepting this volatility in returns.
Book value does not define market value
Banking is a highly cyclical business – who knew?
It is often asserted that banking is a low risk, “utility” style business and hence that shareholders should expect commensurately low returns. The commentators making these assertions tend to focus on the fact that the GFC demonstrated that it is difficult (arguably impossible) to allow large banks to fail without imposing significant collateral damage on the rest of the economy. Banks receive public sector support to varying degrees that reduces their risk of failure and hence the risk to shareholders. A variation of this argument is that higher bank capital requirements post the GFC have reduced the risk of investing in a bank by reducing the risk of insolvency.
There is no question that banks do occupy a privileged space in the economy due to the central bank liquidity support that is not available to other companies. This privilege (sometimes referred to as a “social licence”) is I think an argument for tempering the kinds of ROE targeted by the banks but it does not necessarily make them a true utility style investment whose earnings are largely unaffected by cyclical downturns.
The reality is that bank ROE will vary materially depending on the state of the credit cycle and this inherent cyclicality is probably accentuated by accounting for loan losses and prudential capital requirements. Loan losses for Australian banks are currently (October 2019) close to their cyclical low points and can be expected to increase markedly when the economy eventually moves into a downturn or outright recession. Exactly how much downside in ROE we can expect is open to debate but history suggests that loan losses could easily be 5 times higher than what we observe under normal economic conditions.
There is also the issue of how often this can be expected to happen. Again using history as a guide for the base rate, it seems that downturns might be expected every 7-10 years on average and long periods without a downturn seem to be associated with increased risk of more severe and prolonged periods of reduced economic activity.
What kind of risk premium does an investor require for this cyclicality? The question may be academic for shareholders who seek to trade in and out of bank stocks based on their view of the state of the cycle but I will assume that banks seek to cater to the concerns and interests of long term shareholders. The answer for these shareholders obviously depends on how frequent and how severe you expect the downturns to be, but back of the envelope calculations suggest to me that you would want ROE during the benign part of the credit cycle to be at least 200bp over the COE and maybe 300bp to compensate for the cyclical risk.
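A stylised version of that back-of-the-envelope calculation might run as follows. Every number here is an assumption of mine for illustration, not an estimate for any actual bank:

```python
# Through-the-cycle average ROE under assumed (purely illustrative) inputs:
# a benign-year ROE, a loss-making downturn year in which loan losses spike,
# and a downturn arriving on average once every 8 years.
benign_roe = 0.12      # ROE in the benign part of the cycle (assumed)
downturn_roe = -0.04   # ROE in the downturn year (assumed)
cycle_years = 8        # average years between downturns (assumed)

avg_roe = (benign_roe * (cycle_years - 1) + downturn_roe) / cycle_years
print(f"{avg_roe:.1%}")  # 10.0%
```

Under these assumptions a bank earning 12% in benign years averages about 10% across the cycle, which is the sense in which benign-period ROE needs to sit a couple of hundred basis points above the COE before you can call the excess truly excessive.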
Good risk management capabilities can mitigate this inherent volatility but not eliminate it; banks are inherently cyclical investments on the front line of the business cycle. Conversely, poor risk management or an aggressive growth strategy can have a disproportionately negative impact. It follows that investors will be inclined to pay a premium to book value for banks they believe have good risk management credentials. I will explore this point further in the discussion of book value versus market value.
Book Value versus Market Value
Apart from the cyclical factors discussed above, the simple fact that ROE is higher than COE is frequently cited as “proof” that ROE is excessive. It is important however to examine the unstated assumption that the market value of a bank should be determined by the book value of its equity. To the best of my knowledge, there is no empirical or conceptual basis for this assumption. There are a number of reasons why a company’s share price might trade at a premium or a discount to its book value as prescribed by the relevant accounting standards.
The market may be ascribing value to assets that are not recognised by the accounting standards. The money spent on financial control and risk management, for example, is largely expensed and hence not reflected in the book value of equity. That value however becomes apparent when the bank is under stress. These “investments” cannot eliminate the inherent cyclicality discussed above but they do mitigate those risks.
A culture built on sound risk management and financial control capabilities is difficult to value and won’t be reflected in book value except to the extent it results in conservative valuation and provisioning outcomes. It is however worth something. Investors will pay a premium for the banks they believe have these intangible strengths while discounting or avoiding altogether the shares of banks they believe do not.
Summing up …
This post is in no way an exhaustive treatment of the topic. Its more modest objective was simply to offer a couple of issues to consider before jumping to the conclusion that the ROE earned by the large Australian banks is excessive based on simplistic comparisons of point in time ROE versus mechanical derivations of the theoretical COE.
As always, it is entirely possible that I am missing something – if so let me know what it is ….
We probably tend to take the monetary and financial system we have today for granted, somewhat like the air we breathe. I was challenged during the week to describe a non-money future and my response was judged a failure to think outside the square. The best I could offer was to note that Star Trek imagines a society in which unlimited cheap energy coupled with replicators has made money redundant.
By chance, I came across a couple of articles in recent weeks that offer interesting perspectives on what money is and its role in the economy.
The first, by Tyler Cowen, discusses the Bretton Woods agreement. That of course is not the system we have today but Cowen makes the point that the system we operate under today would appear equally unlikely to previous generations:
“Currencies are fiat, the ties to gold are gone, and most exchange rates for the major currencies are freely floating, with periodic central bank intervention to manipulate exchange rates. For all the criticism it receives, this arrangement has also proved to be a viable global monetary order, and it has been accompanied by an excellent overall record for global growth.
Yet this fiat monetary order might also have seemed, to previous generations of economists, unlikely to succeed. Fiat currencies were associated with the assignat hyperinflations of the French Revolution, the floating exchange rates and competitive devaluations of the 1920s were not a success, and it was hardly obvious that most of the world’s major central banks would pursue inflation targets of below 2%. Until recent times, the record of floating fiat currencies was mostly disastrous”
Cowen’s main message is that the lessons of history suggest it is brave to assume that the monetary and financial status quo will hold forever – so what comes next?
This brings us to Stefan Heidenreich.
“Stefan Heidenreich believes that some day, money will seem like an ancient religion. In his recent book Money: For a Non-money Economy, the German philosopher and media theorist speculates on how the money-based global economy could soon transition to an entirely different system based on the algorithmic matching of goods and services. Such a system could match people with what they need at a given moment without relying on the concept of a stable, universal price — and, just possibly, do away with the vast inequities caused by the market.
If you find the idea of an economy without money hard to imagine, you’re not alone. As the saying goes, it’s easier to imagine the end of the world than the end of capitalism. But that very difficulty proves Heidenreich’s main point: We have to imagine what may sound like wild possibilities now in order to steer the future before it’s upon us. Getting rid of money could lead to what he calls a “leftist utopia” of equal distribution — or it could enable mass surveillance and algorithmic control on a whole new scale. Faced with the second option, Heidenreich says, we have no choice but to try to envision the first.”
It is not obvious to me that Heidenreich’s “matching” proposal provides a workable alternative to what we have today but that is not the point. The bigger point raised by both Cowen and Heidenreich is that what we have today is unlikely to be the system that governs our economic interactions in 50 years’ time – so what is the alternative?
This article in Bloomberg caught my attention. It is a background piece on a team known as the “Applied Critical Thinking” unit that has been operating inside the New York Federal Reserve since 2016.
Contrarian thinking and recognising the limits of what is and is not knowable are not huge innovations in themselves. What was interesting for me is the extent to which a unit like this can be a way of building that thought process into the structure of organisations that might otherwise tend towards consensus and groupthink built on simple certainties.
I don’t have all the answers but this initiative by the NY Fed is I think worth watching. Something like this seems to me to have the potential to help address some of the culture problems that have undermined trust in large companies (it is not just the banks) and the financial system as a whole.
This post sets out a case for a bank choosing to incorporate a discretionary Cyclical Buffer (CyB) into its Internal Capital Adequacy Assessment Process (ICAAP). The size of the buffer is a risk appetite choice each individual bank must make. The example I have used to illustrate the idea is calibrated to absorb the expected impact of an economic downturn that is severe but not necessarily a financial crisis style event. My objective is to illustrate the ways in which incorporating a Cyclical Buffer in the target capital structure offers:
an intuitive connection between a bank’s aggregate risk appetite and its target capital structure;
a means of more clearly defining the point where losses transition from expected to unexpected; and
a mechanism that reduces both the pro cyclicality of a risk sensitive capital regime and the tendency for the transition to unexpected losses to trigger a loss of confidence in the bank.
The value of improved clarity, coherence and consistency in the risk appetite settings is I think reasonably self evident. The need for greater clarity in the distinction between expected and unexpected loss perhaps less so. The value of this Cyclical Buffer proposal ultimately depends on its capacity to enhance the resilience of the capital adequacy regime in the face of economic downturns without compromising its risk sensitivity.
There are no absolutes when we deal with what happens under stress but I believe a Cyclical Buffer such as is outlined in this post also has the potential to help mitigate the risk of loss of confidence in the bank when losses are no longer part of what stakeholders expect but have moved into the domain of uncertainty. I am not suggesting that this would solve the problem of financial crisis. I am suggesting that it is a relatively simple enhancement to a bank’s ICAAP that has the potential to make banks more resilient (and transparent) with no obvious downsides.
In Capital 101, we learn that capital is meant to cover “unexpected loss” and that there is a neat division between expected and unexpected loss. The extract below from an early BCBS publication sets out the standard explanation …
Expected and unexpected credit loss
The BCBS publication from which this image is sourced explained that
“While it is never possible to know in advance the losses a bank will suffer in a particular year, a bank can forecast the average level of credit losses it can reasonably expect to experience. These losses are referred to as Expected Losses (EL) ….”
“One of the functions of bank capital is to provide a buffer to protect a bank’s debt holders against peak losses that exceed expected levels… Losses above expected levels are usually referred to as Unexpected Losses (UL) – institutions know they will occur now and then, but they cannot know in advance their timing or severity….”
“An Explanatory Note on the Basel II IRB Risk Weight Functions” BCBS July 2005
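The EL decomposition described in the extract is simple enough to show in a few lines. A toy sketch, with purely hypothetical portfolio numbers:

```python
# Illustrative only: the EL = PD x LGD x EAD decomposition described in the
# BCBS note above, applied to a hypothetical three-loan portfolio.

def expected_loss(pd_, lgd, ead):
    """Expected loss for a single exposure."""
    return pd_ * lgd * ead

# (probability of default, loss given default, exposure at default)
portfolio = [
    (0.010, 0.45, 1_000_000),
    (0.020, 0.40, 500_000),
    (0.005, 0.60, 2_000_000),
]

portfolio_el = sum(expected_loss(*loan) for loan in portfolio)
print(f"Portfolio EL: {portfolio_el:,.0f}")  # the level a bank can budget for in pricing
```

Unexpected loss is, by definition, the excess over this figure in a bad year; the question the rest of this post works through is how much capital to hold against that excess.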
There was a time when the Internal Ratings Based approach, combining some elegant theory and relatively simple math, seemed to have all the answers:
A simple intuitive division between expected and unexpected loss
Allowing expected loss to be quantified and directly covered by risk margins in pricing while the required return on unexpected loss could be assigned to the cost of equity
A precise relationship between expected and unexpected loss, defined by the statistical parameters of the assumed loss distribution
The capacity to “control” the risk of unexpected loss by applying seemingly unquestionably strong confidence levels (i.e. typically 1:1000 years plus) to the measurement of target capital requirements
It even seemed to offer a means of neatly calibrating the capital requirement to the probability of default of your target debt rating (e.g. an AA senior debt rating with a 5bp probability of default = a 99.95% confidence level; QED)
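The seemingly precise relationship between confidence level and capital can be made concrete. The sketch below implements the Basel II corporate risk-weight function from the BCBS explanatory note cited above, using only the Python standard library; the PD and LGD inputs are hypothetical, and the maturity adjustment is omitted for brevity.

```python
# A sketch of the Basel II IRB corporate risk-weight function from the BCBS
# explanatory note. PD and LGD inputs are hypothetical; maturity adjustment omitted.
from math import exp, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def irb_capital(pd_, lgd, conf=0.999):
    """Capital requirement K per unit of EAD."""
    # Basel corporate asset correlation: decreases as PD rises
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Conditional (stressed) PD at the chosen confidence level ...
    stressed_pd = N.cdf((N.inv_cdf(pd_) + sqrt(r) * N.inv_cdf(conf)) / sqrt(1 - r))
    # ... less expected loss, so K covers unexpected loss only
    return lgd * (stressed_pd - pd_)

print(f"K at 99.90%: {irb_capital(0.01, 0.45):.4f}")
print(f"K at 99.95%: {irb_capital(0.01, 0.45, conf=0.9995):.4f}")
```

Note how directly the confidence level drives the answer – which is exactly why the "zone of validity" question discussed below matters so much.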
If only it were that simple … but expected loss is still a good place to start
The problem (from a capital adequacy perspective) with both IFRS 9 and Regulatory Expected Loss (REL) is that the “expected” value still depends on the state of the credit cycle at the time we take its measure. REL incorporates a Downturn measure of Loss Given Default (DLGD) but the other inputs (Probability of Default and Exposure at Default) are average values taken across a cycle, not the values we expect to experience at the peak of the cycle downturn.
We typically don’t know exactly when the credit cycle will turn down, nor how deep or prolonged the downturn will be, but we can reasonably expect that it will turn down at some time in the future. Notwithstanding the “Great Moderation” thesis that gained currency prior to the GFC, the long run of history suggests that it is dangerous to bet against the probability of a severe downturn occurring once every 15 to 25 years. Incorporating a measure into the Internal Capital Adequacy Assessment Process (ICAAP) that captures this aspect of expected loss provides a useful reference point and a potential trigger for reviewing why a capital decline has exceeded expectations.
One of the problems with advanced model-based approaches like IRB is that banks experience large losses much more frequently than the models suggest they should. As a consequence, the seemingly high margins of safety implied by 1:1000 year plus confidence levels in the modelling do not live up to their promise.
A better way of dealing with uncertainty
One of the core principles underpinning this proposal is that the boundary between risk (which can be measured with reasonable accuracy) and uncertainty (which cannot be measured with any degree of precision) probably lies around the 1:25 year confidence level (what we usually label a “severe recession”). I recognise that reasonable people might adopt a more conservative stance, arguing that the zone of validity of credit risk models caps out at 1:15 or 1:20 confidence levels, but I am reasonably confident that 1:25 defines the upper boundary of where credit risk models tend to find their limits. Each bank can make its own call on this aspect of risk calibration.
Inside this zone of validity, credit risk models coupled with stress testing and sensitivity analysis can be applied to generate a reasonably useful estimate of expected losses and capital impacts. There is of course no guarantee that the impacts will not exceed the estimate – that is why we have capital. The estimate does however define the rough limits of what we can claim to “know” about our risk profile.
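For readers translating between the two vocabularies used here, the “1-in-N-years” framing maps to a one-year confidence level of 1 − 1/N. A trivial sketch:

```python
# The "1-in-N-years" framing used in this post maps to a one-year
# confidence level of 1 - 1/N.
def return_period_to_confidence(n_years):
    return 1 - 1 / n_years

for n in (15, 20, 25, 1000):
    print(f"1-in-{n} years ~ {return_period_to_confidence(n):.2%} confidence")
```

So the 1:25 year boundary proposed above sits at roughly a 96% confidence level, a long way short of the 99.9%+ levels the IRB machinery claims to measure.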
The “expected versus unexpected” distinction is all a bit abstract – why does it matter?
Downturn loss is part of the risk reward equation of banking and manageable, especially if the cost of expected downturn losses has already been built into credit risk spreads. Managing the risk is easier however if a bank’s risk appetite statement has a clear sense of:
exactly what kind of expected downturn loss is consistent with the specific types of credit risk exposure the risk appetite otherwise allows (i.e. not just the current exposure but also any higher level of exposure that is consistent with credit risk appetite) and
the impact this would be expected to have on capital adequacy.
This type of analysis is done under the general heading of stress testing for both credit risk and capital adequacy but I have not often seen evidence that banks are translating the analysis and insight into a specific buffer assigned the task of absorbing expected downturn losses and the associated negative impact on capital adequacy. The Cyclical Buffer I have outlined in this post offers a means of more closely integrating the credit risk management framework and the Internal Capital Adequacy Assessment Process (ICAAP).
What gets you into trouble …
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so”
Commonly, possibly mistakenly, attributed to Mark Twain
This saying captures an important truth about the financial system. Some degree of volatility is part and parcel of the system but one of the key ingredients in a financial crisis or panic is when participants in the system are suddenly forced to change their view of what is safe and what is not.
This is one of the reasons why I believe that a more transparent framework for tracking the transition from expected to truly unexpected outcomes can add to the resilience of the financial system. Capital declines that have been pre-positioned in the eyes of key stakeholders as part and parcel of the bank risk reward equation are less likely to be a cause for concern or trigger for panic.
The equity and debt markets will still revise their valuations in response but the debt markets will have less reason to question the fundamental soundness of the bank if the capital decline lies within the pre-positioned operating parameters defined by the target cyclical buffer. This will be especially so to the extent that the Capital Conservation Buffer provides substantial layers of additional buffer to absorb the uncertainty and buy time to respond to it.
Calibrating the size of the Cyclical Buffer
Incorporating a Cyclical Buffer does not necessarily mean that a bank needs to hold more capital. It is likely to be sufficient to simply partition a set amount of capital that bank management believes will absorb the expected impact of a cyclical downturn. The remaining buffer capital over minimum requirements exists to absorb the uncertainty and ensure that confidence sensitive liabilities are well insulated from the impacts of that uncertainty.
But first we have to define what we mean by “THE CYCLE”. This is a term frequently employed in the discussion of bank capital requirements but open to a wide range of interpretation.
A useful start to calibrating the size of this cyclical buffer is to distinguish:
An economic or business cycle; which seems to be associated with moderate severity, short duration downturns occurring once every 7 to 10 years, and
A financial cycle; associated with less frequent but more severe and longer duration downturns, occurring perhaps once every 15 to 25 years.
Every bank makes its own decision on risk appetite but, given these two choices, mine would be calibrated to, and hence resilient against, the less frequent but more severe and longer duration downturns associated with the financial cycle.
There is of course another layer of severity associated with a financial crisis. This poses an interesting challenge because it raises the question of whether a financial crisis is the result of some extreme external shock, or of failures of risk management that allowed an endogenous build-up of risk in the banking system. This kind of loss is I believe the domain of the Capital Conservation Buffer (CCB).
There is no question that banks must be resilient in the face of a financial crisis but my view is that this is a not something that should be considered an expected cost of banking.
Incorporating a cyclical buffer into the capital structure for an Australian D-SIB
Figure 2 below sets out an example of how this might work for an Australian D-SIB that has adopted APRA’s 10.5% CET1 “Unquestionably Strong” benchmark as the basis of its target capital structure. These banks have a substantial layer of CET1 capital that is nominally surplus to the formal prudential requirements but in practice is not if the bank is to be considered “unquestionably strong” as defined by APRA. The capacity to weather a cyclical downturn might be implicit in the “Unquestionably Strong” benchmark but it is not transparent. In particular, it is not obvious how much CET1 can decline under a cyclical downturn while a bank is still deemed to be “Unquestionably Strong”.
The proposed Cyclical Buffer sits on top of the Capital Conservation Buffer and would be calibrated to absorb the increase in losses, and associated drawdowns on capital, expected to be experienced in the event of severe economic downturn. Exactly how severe is to some extent a question of risk appetite, unless of course regulators mandate a capital target that delivers a higher level of soundness than the bank would have chosen of its own volition.
In the example laid out in Figure 2, I have drawn the limit of risk appetite at the threshold of the Capital Conservation Buffer. This would be an 8% CET1 ratio for an Australian D-SIB but there is no fundamental reason for drawing the line on risk appetite at this threshold. Each bank has the choice of tolerating some level of incursion into the CCB (hence the dotted line extension of risk appetite). What matters is to have a clear line beyond which higher losses and lower capital ratios indicate that something truly unexpected is driving the outcomes being observed.
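The arithmetic behind this capital stack is simple enough to sketch. Assuming the standard Australian D-SIB composition (a 4.5% CET1 minimum plus a 3.5% Capital Conservation Buffer including the D-SIB surcharge, consistent with the 8% threshold cited above), the Cyclical Buffer is simply the residual up to the 10.5% benchmark:

```python
# A sketch of the Figure 2 arithmetic. The 4.5% minimum and 3.5% CCB
# (including the D-SIB surcharge) are assumptions consistent with the 8%
# threshold cited in the post; the Cyclical Buffer is the residual up to
# the 10.5% "Unquestionably Strong" benchmark.
cet1_minimum = 0.045
ccb_incl_dsib = 0.035
target = 0.105  # APRA "Unquestionably Strong" benchmark

ccb_top = cet1_minimum + ccb_incl_dsib
cyclical_buffer = target - ccb_top

print(f"Top of CCB:      {ccb_top:.1%}")
print(f"Cyclical Buffer: {cyclical_buffer:.1%}")
```

The point of partitioning the stack this way is not the numbers themselves but the label: the residual layer is explicitly assigned the job of absorbing expected downturn losses.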
What about the prudential Counter-Cyclical Capital Buffer?
I have deliberately avoided using the term “counter-cyclical” in this proposal to distinguish this bank-controlled Cyclical Buffer (CyB) from its prudential counterpart, the “Counter Cyclical Buffer” (CCyB) introduced under Basel III. My proposal is similar in concept to the variations on the CCyB being developed by the Bank of England and the Canadian OSFI. The RBNZ is also considering something similar in its review of “What counts as capital?”, where it has proposed that the CCyB should have a positive value (indicatively set at 1.5%) at all times except following a financial crisis (see paras 105-112 of the Review Paper for more detail).
My proposal is also differentiated from its prudential counterpart by the way in which the calibration of the size of the bank Cyclical Buffer offers a way for credit risk appetite to be more formally integrated with the Internal Capital Adequacy Assessment Process (ICAAP) that sets the overall target capital structure.
Incorporating a Cyclical Buffer into the target capital structure offers a means of more closely integrating the risk exposure and capital adequacy elements of a bank’s risk appetite
A breach of the Cyclical Buffer creates a natural trigger point for reviewing whether the unexpected outcome was due to an unexpectedly large external shock, to credit exposures being riskier than expected, or to some combination of the two
The role of the Capital Conservation Buffer in absorbing the uncertainty associated with risk appetite settings is much clearer if management of cyclical expected loss is assigned to the Cyclical Buffer
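The trigger logic described above can be sketched as a simple classification of the observed CET1 ratio. The thresholds below are the illustrative figures used in this post; each bank would substitute its own risk appetite settings:

```python
# A sketch of the trigger logic implied by the buffer hierarchy above:
# drawdowns inside the Cyclical Buffer are pre-positioned as "expected",
# while a breach of the buffer is the natural review trigger. Thresholds
# are the illustrative figures used in this post, not prescriptions.
def classify_cet1(ratio, target=0.105, ccb_top=0.08):
    if ratio >= target:
        return "at or above target - no action required"
    if ratio >= ccb_top:
        return "within Cyclical Buffer - expected downturn drawdown"
    return ("Cyclical Buffer breached - review whether the shock was larger "
            "than expected or the risk profile was misread")

print(classify_cet1(0.110))
print(classify_cet1(0.090))
print(classify_cet1(0.075))
```

The value of the middle zone is that it gives stakeholders a pre-agreed vocabulary for a capital decline before it happens, which is the transparency benefit argued for above.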
I may not always agree with everything they have to say, but there are a few people who reliably produce content and ideas worth reading; Andy Haldane is one and Claudio Borio is another (see previous posts on Haldane here and Borio here for examples of their work). So I was interested to read what Borio had to say about the introduction of Expected Credit Loss (ECL) provisioning. ECL is one of those topics that only interest die-hard bank capital and credit tragics but I believe it has the potential to create some problems in the real world some way down the track.
Borio’s position is that:
Relative to the “incurred loss” approach to credit risk that preceded it, the new standard is likely to mitigate pro cyclicality to some extent;
But it will not be sufficient on its own to eliminate the risk of adverse pro cyclical impacts on the real economy;
So there is a need to develop what he calls “capital filters” (a generic term encompassing capital buffers and other tools that help mitigate the risk of pro cyclicality) that will work in conjunction with, and complement, the operation of the loan loss provisions in managing credit risk.
There are two ways to respond to Claudio Borio’s observations on this topic:
One is to take issue with his view that Expected Credit Loss provisioning will do anything at all to mitigate pro cyclicality;
The second is to focus on his conclusion that ECL provisioning by itself is not enough and that a truly resilient financial system requires an approach that complements loan loss provisions.
Will ECL reduce the risk of pro cyclicality?
It is true that, relative to the incurred loss model, the ECL approach allows loan loss provisions to be put in place sooner (all other things being equal). In scenarios where banks have a good handle on deteriorating economic conditions, it does give them more freedom to increase provisions without this being seen as a cynical device to “smooth” profits.
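The timing difference at the heart of this argument can be illustrated with a toy contrast between the two provisioning models; the loan parameters and PD below are hypothetical:

```python
# A toy contrast between incurred-loss and ECL provisioning for a single
# hypothetical loan, to show why ECL recognises losses sooner.
ead, lgd = 1_000_000, 0.4  # hypothetical exposure and loss given default

def incurred_provision(loss_event_identified):
    # Incurred-loss model: nothing is provisioned until a loss event is identified
    return ead * lgd if loss_event_identified else 0.0

def ecl_provision(pd_12m):
    # IFRS 9 Stage 1 style: a 12-month expected credit loss is booked up
    # front, and rises as the assessed PD rises with deteriorating conditions
    return pd_12m * lgd * ead

print(f"Incurred-loss provision, no event yet: {incurred_provision(False):,.0f}")
print(f"ECL provision at 2% assessed PD:       {ecl_provision(0.02):,.0f}")
```

The catch, as the next paragraph argues, is that the ECL figure is only as good as the assessed PD, and that assessment is precisely what fails when banks are surprised by a downturn.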
The problem I see in this assessment is that the real problems with the adequacy of loan provisioning occur when banks (and markets) are surprised by the speed, severity and duration of an economic downturn. In these scenarios, the banks may well have more ECL provisions than they would otherwise have had, but they will probably still be under provisioned.
This will be accentuated to the extent that the severity of the downturn is compounded by any systematic weakness in the quality of loans originated by the banks (or other risk management failures) because bank management will probably be blind to these failures and hence slow to respond. I don’t think any form of Expected Loss can deal with this because we have moved from expected loss to the domain of uncertainty.
The solution to pro cyclicality lies in capital not expected loss
So the real issue is what to do about that. Borio argues that ECL helps, but that you really need to address the problem via what he refers to as “capital filters” (what we might label counter cyclical capital buffers, though that term is tainted by the failure of the existing system to do much of practical value thus far). On this part of his assessment, I find myself in violent agreement with him:
let accounting standards do what they do, don’t try to make them solve prudential problems;
construct a capital adequacy solution that complements the accounting based measurement of capital and profits.
Borio does not offer any detail on exactly what these capital solutions might look like, but the Bank of England and OSFI are working on two options that I think are definitely worth considering.
In the interim, the main takeaway for me is that ECL is not enough on its own to address the problem of pro cyclicality and, more importantly, it is dangerous to think it can be.