Small banks …

This post by Cetier on the RBNZ Financial Stability Report poses an interesting question about the future of small banks. He notes that the big banks seem to be doing fine but that small NZ banks are struggling to cover their cost of capital. This disparity between big and small banks also seems to be a feature of the Australian banking system. It also looks like big banks in the USA are getting bigger at the expense of the small banks.

There is a perennial question of whether small banks need some support (possibly in the form of less onerous regulation) so that they can offer a source of competition to the larger banks. This is a policy choice that the USA has very deliberately made but it has been argued that this is one of the factors that contributed to the recent spate of bank failures.

This is part of a larger conversation about the tension between competition and financial stability. Marc Rubinstein did a good post on this question which I covered here.

I don’t have any answers but the question is one that I think will get more focus as the US considers its response to the most recent case studies in why banks fail. I don’t have enough expertise on the US banking system to offer an informed opinion but the Bank Policy Institute does offer an alternative perspective that argues that the failures were more a question of bad management and lax supervision than of regulation per se. I can say that the risks these US banks were running did seem to clearly violate the principles of Banking 101.

Let me know what I am missing …

Tony – From the Outside

Predicting phase transitions

I am not sure the modelling methodology described in this article is quite as good as the title suggests…

“Chaos Researchers Can Now Predict Perilous Points of No Return”

… but it would be very interesting if it lived up to the claims made in the article. It is a quick read and the subject matter seems worth keeping an eye on.

Here are two short extracts to give you a flavour of the claims made

A custom-built machine learning algorithm can predict when a complex system is about to switch to a wildly different mode of behavior.

In a series of recent papers, researchers have shown that machine learning algorithms can predict tipping-point transitions in archetypal examples of such “nonstationary” systems, as well as features of their behavior after they’ve tipped. The surprisingly powerful new techniques could one day find applications in climate science, ecology, epidemiology and many other fields.

Tony – From the Outside

What Michael Lewis loves about experts

This link takes you to the last episode of a seven-part podcast series Michael Lewis has done on the role of experts

podcasts.apple.com/au/podcast/against-the-rules-with-michael-lewis/id1455379351

The whole series is highly recommended but I especially like this quote in which he attempts to sum up the series

“Life eventually humbles us all. What I love about experts, the best of them anyway, is that they get to their humility early, they have to, it’s part of who they are, it’s necessary for what they are doing. They set out to get to the bottom of something that has no bottom, and so they are reminded, constantly, of what they don’t know. They move through the world focussed not on what they know but on what they might find out.”

In that spirit, let me know what I am missing

Tony – From the Outside

The Paradox of DeFi

Nic Carter and Linda Jeng have produced a useful paper titled “DeFi Protocol Risks: the Paradox of DeFi” that explores the risks that DeFi will need to address and navigate if it is to deliver on the promise they believe it holds. There is of course plenty of scepticism about the potential for blockchain and DeFi to change the future of finance (including from me). What makes this paper interesting is that it is written by two people involved in trying to make the systems work as opposed to simply throwing rocks from the sidelines.

Linda Jeng has a regulatory background but is currently the Global Head of Policy at Transparent Financial Systems. Nic is a General Partner at a seed-stage venture capital firm that invests in blockchain related businesses. The paper they have written will contribute a chapter to a book being edited by Bill Coen (former Secretary General of the Basel Committee on Banking Supervision) and Diane Maurice to be titled “Regtech, Suptech and Beyond: Innovation and Technology in Financial Services” (RiskBooks).

Linda and Nic conceptually bucket DeFi risks into five categories: 

  1. interconnections with the traditional financial system, 
  2. operational risks stemming from underlying blockchains, 
  3. smart contract-based vulnerabilities, 
  4. other governance and regulatory risks, and 
  5. scalability challenges.

… and map out the relationships in this schematic

Conclusion: “No Free Lunch”

The paper’s conclusion rests on a long-standing principle firmly entrenched in the traditional financial world – there is “no free lunch”. Risk can be transformed but it is very hard to eliminate completely. Expressed another way, there is an inherent trade-off in any system between efficiency and resilience.

Many of the things that make DeFi low cost and innovative also create operational risk and other challenges. Smart contracts sound cool, but when you frame them as “automated, hard-to-intervene contracts” it is easy to see how they can also amplify risks. Scalability is identified as an especially hard problem if you are not willing to compromise on the principles that underpinned the original DeFi vision.

The paper is worth a read but if you are time poor then you can also read a short version via this post on Linda Jeng’s blog. Izabella Kaminska (FT Alphaville) also wrote about the paper here.

Tony – From the Outside

What Can We Learn from a Big Boat Stuck in a Canal?

Interesting post by Matt Stoller on the broader policy issues associated with the current problem in the Suez Canal.

Here is a short extract capturing the main idea …

“Industrial crashes, in other words, are happening in unpredictable ways throughout the economy, shutting down important production systems in semi-random fashion. Such collapses were relatively rare prior to the 1990s. But industrial crashes were built into the nature of our post-1990s production system, which prioritizes efficiency over resiliency. Just as ships like the Ever Given are bigger and more efficient, they are also far riskier. And this tolerance for risk is a pattern reproducing itself far beyond the shipping industry; we’ve off shored production and then consolidated that production in lots of industries, like semiconductors, pharmaceutical precursors, vitamin C, and even book printing.

What is new isn’t the vulnerability of the Suez Canal as a chokepoint, it’s that we’ve intentionally created lots of other artificial chokepoints. And since our production systems have little fat, these systems are tightly coupled, meaning a shortage in one area cascades throughout the global economy, costing us time, money, and lives.”

Irrespective of whether you agree with the solutions he proposes, I think the point he makes (i.e. the tension between efficiency and resilience and the systemic problem with systems that are “tightly coupled”) is a very real issue. We saw this play out in the financial system in 2008 and we saw it play out in global supply chains in 2020. There are differing views on whether the measures have gone far enough but the financial system has been substantially re-engineered to make it more resilient. It remains to be seen how global supply chains will evolve in response to the problems experienced.

Link to the post here

https://mattstoller.substack.com/p/what-we-can-learn-from-a-big-boat

Tony – From the Outside

APRA reflects on “… a subtle but important shift in regulatory thinking”

Wayne Byres’ speech to the Risk Management Association covered a range of developments but, for me, the important part was the discussion of the distinction between strength and resilience referenced in the title of this post.

This extract from the speech sets out how Mr Byres frames the distinction …

… in the post-GFC period, the emphasis of the international reforms was on strengthening the global financial system. Now, the narrative is how to improve its resilience. A perusal of APRA speeches and announcements over time shows a much greater emphasis on resilience in more recent times as well.

What is behind this shift? Put simply, it is possible to be strong, but not resilient. Your car windscreen is a great example – without doubt it is a very strong piece of glass, but one small crack and it is irreparably damaged and ultimately needs to be replaced. That is obviously not the way we want the financial system to be. We want a system that is able to absorb shocks, even from so-called “black swan” events, and have the means to restore itself to full health.

In saying that, financially strong balance sheets undoubtedly help provide resilience, and safeguarding financial strength will certainly remain the cornerstone of prudential regulation and supervision. But it is not the full story. So with that in mind, let me offer some quick reflections on the past year, and what it has revealed about opportunities for the resilience of the financial system to be further improved.

APRA Chair Wayne Byres – Speech to the 2020 Forum of the Risk Management Association – 3 December 2020

To my mind, the introduction of an increased emphasis on resilience is absolutely the right way to go. We saw some indications of the direction APRA intend to pursue in the speech that Mr Byres gave to the AFR Banking and Wealth Summit last month and will get more detail next week (hopefully) when APRA releases a consultation paper setting out a package of bank capital reforms that is likely to include a redesign of the capital buffer framework.

This package of reforms is one to watch. To the extent that it delivers on the promise of increasing the resilience of the Australian banking system, it is potentially as significant as the introduction of the “unquestionably strong” benchmark in response to the Australian Financial System Inquiry.

Tony – From the Outside

Climate change – a central banking perspective

A BIS paper titled “Green Swan 2 – Climate change and Covid-19: reflections on efficiency versus resilience” initially caught my attention because of the reference to the tension between efficiency and resilience. This tension is, for me at least, one of the issues that has tended to be ignored in the pursuit of growth and optimised solutions. The paper mainly deals with the challenges that climate change creates for central banks but I think there are also some insights to be drawn on what it means for bank capital management.

A core argument in the paper is that challenges like climate change and pandemics ….

“… require us to rethink the trade-offs between efficiency and resilience of our socio-economic systems … one way to address this issue is to think about buffers or some necessary degree of redundancy for absorbing such large shocks. Countries build FX reserves, banks maintain capital buffers as required by regulators, and so on. Perhaps similar “buffers” could be used in other areas of our societies. For example, could it be time to reassess our production systems, which are meant to be lean and less costly for maximum efficiency?”

The paper draws on a (much longer and more technical) BIS research paper titled “The green swan: Central banking and financial stability in the age of climate change”. Both papers contain the usual caveat that the views expressed do not necessarily reflect those of their respective institutions. With that warning noted, this post draws on both papers to make some observations about what the papers say, and what this means for bank capital management.

There is a lot of content in the combined papers but the points that resonated the most with me were

  1. Climate change shares some of the features of a Black Swan event but is better thought of as a distinct type of risk which the authors label a “Green Swan”.
  2. Green swan problems are created in part by choices we have made regarding the value of efficiency over resilience – part of the solution lies in rethinking these choices but this will not be easy.
  3. Climate change is a “collective action” problem which cannot be addressed by individual actors (including banks) operating independently – market based solutions like a carbon price may also be insufficient to bring about a solution that does not involve an unacceptable level of financial disruption.
  4. Scenario analysis (including stress testing) appears to be one of the better tools for dealing with climate change and similar types of risk – but it needs to be used differently (by both the supervised and the supervisors) from the way it is applied to conventional risks.

I am not an expert on climate change modelling, but Chapter 3 of the second paper also has what looks to be a useful overview of the models used to analyse climate change and how the outputs of these models are used to generate economic impacts.

Black, white and green swans

Climate change clearly operates in the domain of radical uncertainty. As such it shares some common elements with “black swan” events; in particular the fact that conventional risk models and analysis are not well suited to measuring and managing the potential adverse impacts. It is equally important however to understand the ways in which climate change differs from a classic black swan event. There is a longer list but the ones that I found most relevant were:

  1. Predictability – Black Swans are, by definition, not predictable whereas the potential for adverse Climate Change outcomes is well understood even if not universally accepted. The point is that understanding the potential for adverse impact means we have a choice about what to do about it.
  2. Impact – Black Swan events can have substantial impacts but the system can recover (e.g. the GFC has left a lasting impact but economic activity did recover once the losses were absorbed). The impacts of climate change, in contrast, may be irreversible and have the potential to result in people dying in large numbers.

Given the conceptual differences, the authors classify Climate Change as a distinct form which they label a “Green Swan”. To the best of my knowledge, this may be the first time the term has been used in this way. That said, the general point they are making seems to be quite similar to what other authors have labelled as “Grey Rhinos” or “Black Elephants” (the latter an obvious allusion to the “elephant in the room”, a large risk that is visible to everyone but no one wants to address).

A typology of swans

Categorising climate risk

The papers distinguish two main channels through which climate change can affect financial stability – physical risks and transition risks.

Physical risks are defined as

… “those risks that arise from the interaction of climate-related hazards […] with the vulnerability of exposure to human and natural systems” (Batten et al (2016)). They represent the economic costs and financial losses due to increasing frequency and severity of climate-related weather events (eg storms, floods or heat waves) and the effects of long-term changes in climate patterns (eg ocean acidification, rising sea levels or changes in precipitation). The losses incurred by firms across different financial portfolios (eg loans, equities, bonds) can make them more fragile.

Transition risks are defined as those

“… associated with the uncertain financial impacts that could result from a rapid low-carbon transition, including policy changes, reputational impacts, technological breakthroughs or limitations, and shifts in market preferences and social norms.

A rapid and ambitious transition to lower emissions, for example, would obviously be desirable from the perspective of addressing climate change but might also mean that a large fraction of proven reserves of fossil fuel cannot be extracted, becoming “stranded assets”. The write-down of the value of these assets may have potentially systemic consequences for the financial system. This transition might occur in response to policy changes or by virtue of some technological breakthrough (e.g. the problem of generating cheap energy from nuclear fusion is solved).

Efficiency versus resilience

I started this post with a quote from the first (shorter) paper regarding the way in which COVID 19 has drawn attention to the extent to which the pursuit of efficiency had made our economies more fragile. The paper explores the ways in which the COVID 19 pandemic exhibits many of the same features that we see in the climate change problem and how the global response to the pandemic might offer some insights into how we should respond to climate change.

The paper is a useful reminder of the nature of the problem but I am less confident that it offers a solution that will work without some form of regulation or public sector investment in the desired level of redundancy. The paper cites bank capital buffers introduced post GFC as an example of what to do but this was a regulated outcome that would most likely not be acceptable for non-financial companies in countries that remain committed to free market ideology.

The Economist published an article on this question that offered numerous examples of similar problems that illustrate the propensity of “humanity, at least as represented by the world’s governments … to ignore them until forced to react” .

Thomas Friedman’s article (“How we broke the world”) is also worth reading on this question …

If recent weeks have shown us anything, it’s that the world is not just flat. It’s fragile.

And we’re the ones who made it that way with our own hands. Just look around. Over the past 20 years, we’ve been steadily removing man-made and natural buffers, redundancies, regulations and norms that provide resilience and protection when big systems — be they ecological, geopolitical or financial — get stressed. We’ve been recklessly removing these buffers out of an obsession with short-term efficiency and growth, or without thinking at all.

The New York Times, 30 May 2020
Managing collective action problems

The second paper, in particular, argues that it is important to improve our understanding of the costs of climate change and to ensure that these costs are incorporated into the prices that drive the resources we allocate to dealing with the challenge (e.g. via a carbon price or tax). However one of its key conclusions is that relying on markets to solve the problem is unlikely to be sufficient even with the help of some form of carbon price that reflects a more complete account of the costs of our current carbon based economy.

In short, the development and improvement of forward-looking risk assessment and climate- related regulation will be essential, but they will not suffice to preserve financial stability in the age of climate change: the deep uncertainty involved and the need for structural transformation of the global socioeconomic system mean that no single model or scenario can provide sufficient information to private and public decision-makers. A corollary is that the integration of climate-related risks into prudential regulation and (to the extent possible) into monetary policy would not suffice to trigger a shift capable of hedging the whole system against green swan events.

The green swan: Central banking and financial stability in the age of climate change; Chapter 5 (page 66)
Using scenario based methodologies to assess climate related risks

Both papers highlight the limitations of trying to measure and understand climate change using conventional probability based risk management tools. The one area they do see as worth pursuing is using scenario based approaches. This makes sense to me but it is also important to distinguish this kind of analysis from the standard stress testing used to help calibrate capital buffers.

The standard application of stress testing takes a severe but plausible macro economic scenario, such as a severe recession, and determines the likely impacts on capital adequacy ratios. This offers a disciplined way of deciding how much capital surplus is required to support the risk appetite choices a bank has made in pursuit of its business objectives.
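To make the mechanics concrete, here is a minimal sketch (with purely hypothetical numbers, not any bank’s or regulator’s actual methodology) of how a scenario projection translates into a buffer requirement:

```python
# Minimal sketch of stress-test based buffer calibration.
# All figures are hypothetical and purely illustrative.

def cet1_ratio(capital, rwa):
    return capital / rwa

# Starting position (in $bn): CET1 capital and risk weighted assets
capital, rwa = 45.0, 400.0
start_ratio = cet1_ratio(capital, rwa)

# Three year severe recession scenario: credit losses net of pre-provision
# earnings, plus RWA growth as credit quality migrates downwards
net_losses = [6.0, 9.0, 4.0]        # annual hit to capital
rwa_growth = [0.05, 0.08, 0.02]     # annual growth in RWA under stress

worst_ratio = start_ratio
for loss, growth in zip(net_losses, rwa_growth):
    capital -= loss
    rwa *= 1 + growth
    worst_ratio = min(worst_ratio, cet1_ratio(capital, rwa))

# The buffer needs to cover the peak-to-trough drawdown so the bank stays
# above its regulatory minimum (say 8 per cent) throughout the scenario
regulatory_minimum = 0.08
drawdown = start_ratio - worst_ratio
print(f"Peak-to-trough drawdown: {drawdown:.2%}")
print(f"Implied target CET1 ratio: {regulatory_minimum + drawdown:.2%}")
```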

A simplistic application of climate based stress testing scenarios might take the same approach; i.e. work out how much the scenario impacts the capital and ensure that the buffer is sufficient to absorb the impact. That I think is not the right conclusion and my read of the BIS papers is that they are not advocating that either. The value of the scenario based modelling is to first get a handle on the size of the problem and how exposed the bank is to it. A capital response may be required but the answer may also be to change the nature of your exposure to the risk. That may involve reduced risk limits but it may also involve active participation in collective action to address the underlying problem. A capital management response may be part of the solution but it is far from the first step.

Conclusion

I have only scratched the surface of this topic in this post but the two papers it references are worth reading if you are interested in the question of what climate change, and related Green Swan or Black Elephant problems, mean for the banking system and for central banking. There is a bit more technical detail in the appendix below but it is likely only of interest for people working at the sharp end of trying to measure and manage the problem.

I want to dig deeper into the question of how you use stress testing to assess climate change and related types of risk but that is a topic best left for another post.

Tony – From the outside

Appendix – Modelling the impacts of climate change

Section 3 of the longer paper (“Measuring climate-related risks with scenario-based approaches”) discusses the limitations of the models that are typically used to generate estimates of the ecological and financial impacts of climate change scenarios. There is plenty of material there for climate sceptics, but it also helps true believers understand the limits of what they can actually know, and how coming to terms with the radical uncertainty of how climate change plays out shapes the nature of our response.

I have copied some extracts from the chapter below that will give you a flavour of what it has to say. It is pretty technical so be warned …

“… the standard approach to modelling financial risk consisting in extrapolating historical values (eg PD, market prices) is no longer valid in a world that is fundamentally reshaped by climate change (Weitzman (2011), Kunreuther et al (2013)). In other words, green swan events cannot be captured by traditional risk management.

The current situation can be characterised as an “epistemological obstacle” (Bachelard (1938)). The latter refers to how scientific methods and “intellectual habits that were useful and healthy” under certain circumstances, can progressively become problematic and hamper scientific research. Epistemological obstacles do not refer to the difficulty or complexity inherent to the object studied (eg measuring climate-related risks) but to the difficulty related to the need of redefining the problem”

Page 21

nothing less than an epistemological break (Bachelard, 1938) or a “paradigm shift” (Kuhn (1962)) is needed today to overcome this obstacle and more adequately approach climate-relate risks (Pereira da Silva (2019a)).

In fact, precisely an epistemological break may be taking place in the financial sector: recently emerged methodologies aim to assess climate-related risks while relying on the fundamental hypothesis that, given the lack of historical financial data related to climate change and the deep uncertainty involved, new approaches based on the analysis of prospective scenarios are needed. Unlike probabilistic approaches to financial risk management, they seek to set up plausible hypotheses for the future. This can help financial institutions integrate climate-related risks into their strategic and operational procedures (eg for the purpose of asset allocation, credit rating or insurance underwriting) and financial supervisors assess the vulnerability of specific institutions or the financial system as a whole

Climate-economic models and forward-looking risk analysis are important and can still be improved, but they will not suffice to provide all the information required to hedge against “green swan” events.

As a result of these limitations, two main avenues of action have been proposed. We argue that they should be pursued in parallel rather than in an exclusive manner. First, central banks and supervisors could explore different approaches that can better account for the uncertain and nonlinear features of climate-related risks. Three particular research avenues (see Box 5 below) consist in: (i) working with non- equilibrium models; (ii) conducting sensitivity analyses; and (iii) conducting case studies focusing on specific risks and/or transmission channels. Nevertheless, the descriptive and normative power of these alternative approaches remain limited by the sources of deep and radical uncertainty related to climate change discussed above. That is, the catalytic power of scenario-based analysis, even when grounded in approaches such as non-equilibrium models, will not be sufficient to guide decision-making towards a low-carbon transition.

As a result of this, the second avenue from the perspective of maintaining system stability consists in “going beyond models” and in developing more holistic approaches that can better embrace the deep or radical uncertainty of climate change as well as the need for system-wide action (Aglietta and Espagne (2016), Barmes (2019), Chenet et al (2019a), Ryan-Collins (2019), Svartzman et al (2019)). 

Pages 42 – 43

Embracing deep or radical uncertainty therefore calls for a second “epistemological break” to shift from a management of risks approach to one that seeks to assure the resilience of complex adaptive systems in the face of such uncertainty (Fath et al (2015), Schoon and van der Leeuw (2015)).38 In this view, the current efforts aimed at measuring, managing and supervising climate-related risks will only make sense if they take place within a much broader evolution involving coordination with monetary and fiscal authorities, as well as broader societal changes such as a better integration of sustainability into financial and economic decision-making.

Page 48

Why we fail to prepare for disasters

Tim Harford (The Undercover Economist) offers a short and readable account here of some of the reasons why, faced with clear risks, we still fail to act. We can see the problem, typically one of many, but don’t do enough to manage or mitigate the risk. New Orleans’ experience with severe weather events features prominently as does (not surprisingly) COVID 19.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

Why we fail to prepare for disasters; Tim Harford (The Undercover Economist)

Another big part of the problem is that the cost of being fully prepared can be more than we are willing to pay, especially when there is continuous pressure to find cost economies in the here and now.

Serious scenarios are useful, but … no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

So the key points I took from his post:

  • Sometimes it can be something genuinely new and unexpected (i.e. Black Swan events) but risks we are well aware of can be equally damaging
  • Part of the problem is that we are social animals and take our cues from what the rest of the herd is doing (“normalcy bias” or “negative panic”)
  • Even where we understand the statistics and know that someone will be impacted, we tend to assume it will be someone else or someone else’s family (“optimism bias”)
  • We are especially bad at understanding risks that have an exponential driver (“exponential myopia”)
  • We are also quite good at finding reasons to justify ignoring risks we want to ignore or otherwise find inconvenient (“wishful thinking”)
  • Last, but far from least, efficiency is the enemy of resilience.

We need to remember that most of the factors listed above can also be useful in many other contexts (arguably most of the time). A tendency not to panic can be pretty useful and optimism has helped dreamers and ordinary people achieve many great things that have benefited the herd. Efficiency as a rule seems like a good thing to strive for.

Harford does not offer any easy answers but his post touches on issues that I have also been exploring in Kay and King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. I have done a couple of posts on that book already (here and here) and am working on a final one that focuses on Chapters 8-16, which set out their ideas for how we navigate a world prone to radical uncertainty.

Tony – From the Outside

The Bankers’ New Clothes: Arguments for simpler capital and much reduced leverage

It always pays to make sure you expose yourself to the opposite view. This post looks at some of the arguments for simpler and higher bank capital requirements put forward by Professors Admati and Hellwig. They have published a number of papers and a book on the topic but this post refers chiefly to their book “The Bankers’ New Clothes” and to a paper “The Parade of the Bankers’ New Clothes Continues: 31 Flawed Claims Debunked”. As I understand it, the key elements of their argument are that:

  • Banks are inherently risky businesses,
  • Excessive borrowing by banks increases their inherent riskiness, but
  • Banks are only able to maintain this excessive level of borrowing because
    • Flawed risk based capital models underestimate the true capital requirements of the business
    • Market discipline also allows excessive borrowing because it is assumed that the government will bail out banks if the situation turns out badly

They identify a variety of ways of dealing with the problem of excessive leverage (controls on bank lending, liquidity requirements and capital requirements) but argue that substantially more common equity is the best solution because:

  • It directly reduces the probability that a bank will fail (i.e. all other things being equal, more common equity reduces the risk of insolvency),
  • A higher level of solvency protection has the added benefit of also reducing the risk of illiquidity, and
  • Contrary to claims by the banking industry, there is no net cost to society in holding more common equity because the dilution in ROE will be offset by a decline in the required return on equity

They concede that there will be some cost associated with unwinding the Too Big To Fail (TBTF) benefit that large banks currently enjoy on both the amount banks can borrow and on the cost of that funding but argue there is still no net cost to society in unwinding this undeserved subsidy. The book, in particular, gets glowing reviews for offering a compelling case for requiring banks to operate with much lower levels of leverage and for pointing out the folly of risk based capital requirements.

There are a number of areas where I find myself in agreement with the points they argue but I can’t make the leap to accept their conclusion that a much higher capital requirement based on a simple leverage ratio calculation is the best solution. I have written this post to help me think through the challenges they offer my beliefs about how banks should be capitalised.

It is useful, I think, to first set out the areas where we (well me at least) might agree in principle with what they say; i.e.

  • Financial crises clearly do impose significant costs on society and excessive borrowing does tend to make a financial system fragile (the trick is to agree what is “excessive”)
  • Better regulation and supervision have a role to play in minimising the risk of bank failure (i.e. market discipline alone is probably not enough)
  • Public policy should consider all costs, not just those of the banking industry
  • All balance sheets embody a trade-off between enterprise risk, return and leverage (i.e. increasing leverage does increase risk)

It is less clear however that:

  • The economics of bank financing are subject to exactly the same rules as those which apply to non-financial companies (i.e. rather than asserting that banks should be compared with non-financial companies, it is important to understand how banks are different)
  • A policy of zero failure for banks is necessarily the right one, or indeed even achievable (i.e. would it be better to engineer ways in which banks can fail without dragging the economy down with them)
  • Fail safe mechanisms, such as the bail in of pre-positioned liabilities, have no prospect of working as intended
  • The assertion that “most” of the new regulation intended to make banks safer and easier to resolve has been “rejected, diluted or delayed” is a valid assessment of what has actually happened under Basel III
  • That liquidity events requiring lender of last resort support from the central bank are always a solvency problem

Drawing on some previous posts dealing with these issues (see here, here and here), I propose to focus on the following questions:

  • How does the cost of bank financing respond to changes in leverage?
  • Are the risk based capital requirements as fundamentally flawed as the authors claim?
  • Are risk management incentives for bankers always better when they are required to hold increasing levels of common equity?
  • Do the increased loss absorption features of Basel III compliant hybrids (in particular, the power to trigger conversion or bail in of the instruments) offer a way to impose losses on failed banks without disrupting the economy or requiring public support?

How does leverage affect the cost of bank financing?

Increasing the proportion of equity funding, the authors argue, reduces the risk that shareholders are exposed to because each dollar of equity they have invested

“ will be affected less intensely by the uncertainty associated with the investments”

“when shareholders bear less risk per dollar invested, the rate of return they require is lower”

“Therefore, taking the costs of equity as fixed and independent of the mix of equity and debt involves a fundamental fallacy”.

Bankers’ New Clothes (p101)

The basic facts they set out are not really contentious; the mix of debt and equity does impact required returns. The authors focus on what happens to common equity but changing leverage impacts both debt and equity. This is very clear in the way that rating agencies consider all of the points nominated by the authors when assigning a debt rating. Reduced equity funding will likely lead to a decline in the senior and subordinated debt ratings and higher costs (plus reduced access to funding in absolute dollar terms) while higher equity will be a positive rating factor.
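The underlying arithmetic is just Modigliani-Miller Proposition II. A minimal sketch (hypothetical numbers, ignoring taxes, deposit insurance and any implied government support) shows the required return on equity rising with leverage while the overall funding cost stays flat, which is the “no free lunch” point in a frictionless world:

```python
# Sketch of Modigliani-Miller Proposition II: r_E = r_A + (D/E) * (r_A - r_D).
# Simplified illustration only: ignores taxes, deposit insurance and any
# implied government support, all of which matter for real banks.

r_assets = 0.06   # required return on the asset portfolio (hypothetical)
r_debt = 0.04     # cost of debt, assumed insensitive to leverage here

for equity_share in (0.20, 0.10, 0.05):   # equity as a share of total assets
    debt_share = 1 - equity_share
    r_equity = r_assets + (debt_share / equity_share) * (r_assets - r_debt)
    wacc = equity_share * r_equity + debt_share * r_debt
    print(f"Equity {equity_share:.0%}: required ROE {r_equity:.1%}, "
          f"overall funding cost {wacc:.1%}")
```

The overall funding cost comes out the same in each case; the real debate is about how far this simple version of the result carries over to banks whose deposit funding is deliberately insulated from leverage, which is the point picked up below.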

Banks are not immune to these fundamental laws but it is still useful to understand how the outcomes are shaped by the special features of a bank balance sheet. My views here incorporate two of the claims they “debunk” in their paper; specifically

Flawed Claim #4: The key insights from corporate finance about the economics of funding, including those of Modigliani and Miller, are not relevant for banks because banks are different from other companies

Flawed Claim #5: Banks are special because they create money

One of the features that defines a bank is the ability to take deposits. The cost of deposits however tends to be insulated from the effects of leverage. This is a design feature. Bank deposits are a major component of the money supply but need to be insensitive to adverse information about the issuing bank to function as money.

Wanting bank deposits to be information insensitive does not make them so. That is a function of their super senior position in the liability loss hierarchy, supplemented in many, if not most, banking systems by some form of limited deposit insurance (1). I credit a paper by Gary Gorton and George Pennacchi titled “Financial Intermediaries and Liquidity Creation” for crystallising this insight (an earlier post offers a short summary of that paper). Another paper titled “Why Bail-In? And How?” by Joseph Sommer proposes a different rationale for deposits having a super senior position insulated from the risk of insolvency but the implications for the impact of leverage on bank financing costs are much the same.

A large bank also relies on senior unsecured financing. This class of funding is more risky than deposits but still typically investment grade. This again is a design feature. Large banks target an investment grade rating in order to deliver, not only competitive financing costs, but equally (and perhaps more importantly) access to a larger pool of potential funding over a wider range of tenors. The investment grade rating depends of course on there being sufficient loss absorbing capital underwriting that outcome. There is no escaping this law of corporate finance. 

The debt rating of large banks is of course also tied up with the issue of banks being treated as Too Big To Fail (TBTF). That is a distortion in the market that needs to be addressed and the answer broadly is more capital though the rating agencies are reasonably agnostic on the form this capital should take in so far as the senior debt rating is concerned. Subject to having enough common equity anchoring the capital structure, more Tier 2 subordinated debt (or Tier 3 bail-in) will work just as well as more common equity for the purposes of reducing the value of implied government support currently embedded in the long term senior debt rating.

Admati and Hellwig are right – there is no free lunch in corporate finance

At this stage, all of this risk has to go somewhere. On that point I completely agree with Admati and Hellwig. There is no free lunch, the rating/risk of the senior tranches of financing depend on having enough of the right kinds of loss absorbing capital standing before them in the loss hierarchy. Where I part company is on the questions of how much capital is enough and what form it should take.

How much capital is (more than) enough?

Admati and Hellwig’s argument for more bank capital has two legs. Firstly, they note that banks are typically much more leveraged than industrial companies and question how this can be given the fundamental law of capital irrelevancy defined by Modigliani and Miller. Secondly, they argue that risk based capital requirements are fundamentally flawed and systematically underestimate how much capital is required.

Why are banks different?

Admati and Hellwig note that banks have less capital than industrial companies and conclude that this must be a result of the market relying on the assumption that banks will be bailed out. The existence of a government support uplift in the senior debt ratings of large banks is I think beyond debate. There is also broad support (even amongst many bankers) that this is not sound public policy and should ideally be unwound.

It is not obvious however that this wholly explains the difference in observed leverage. Rating agency models are relatively transparent in this regard (S&P in particular) and the additional capital required to achieve a rating uplift equivalent to the existing government support factor would still see banks more leveraged than the typical industrial company. Bank balance sheets do seem to be different from those of industrial companies.

Flawed risk models

The other leg of their argument is that risk based capital fundamentally underestimates capital requirements. I am broadly sympathetic to the sceptical view on how to use the outputs of risk models and have been for some time. An article I wrote in 2008, for example, challenged the convention of using a probability of default associated with the target debt rating to precisely calibrate the amount of capital a bank required.

The same basic concept of highly precise, high confidence level capital requirements is embedded in the Internal Ratings Based formula and was part of the reason the model results were misinterpreted and misused. Too many people assigned a degree of precision to the models that was not warranted. That does not mean however that risk models are totally useless.
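For readers who have not seen it, the flavour of the approach being debated can be captured in a few lines. This is a simplified sketch of the Basel IRB corporate risk weight function (my own illustration, omitting the maturity adjustment and using illustrative inputs), which shows how the 99.9% confidence level drives the capital requirement:

```python
# Simplified sketch of the Basel IRB corporate risk weight function.
# The maturity adjustment and other refinements are omitted and the
# inputs are illustrative only.
from math import exp, sqrt
from scipy.stats import norm

def irb_capital(pd, lgd, confidence=0.999):
    """Unexpected loss capital per dollar of exposure."""
    # Supervisory asset correlation declines as PD increases
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Default rate conditional on a 1-in-1000 year systematic downturn
    stressed_pd = norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(confidence)) / sqrt(1 - r))
    # Capital covers unexpected loss: stressed loss less expected loss
    return lgd * (stressed_pd - pd)

for pd in (0.001, 0.01, 0.05):
    k = irb_capital(pd, lgd=0.45)
    print(f"PD {pd:.2%}: capital {k:.2%} of exposure, risk weight {k * 12.5:.0%}")
```

The 99.9% confidence level hard-wired into the formula is exactly the kind of precision that invites the misinterpretation and misuse described above.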

Professors Admati and Hellwig use simple examples (e.g. how does the risk of loss increase if a personal borrower increases leverage on a home loan) to argue that banks need to hold more capital. While the basic principle is correct (all other things equal, leverage does increase risk), the authors’ discussion does not draw much (or possibly any?) attention to the way that requiring a borrower to have equity to support their borrowing reduces a bank’s exposure to movements in the value of the loan collateral.

In the examples presented, any decline in the value of the assets being financed flows through directly to the value of equity, with the inference that this would be true of a bank also. In practice, low risk weights assigned by banks to certain (low default – well secured) pools of lending reflect the existence of borrower’s equity that will absorb the first loss before the value of the loan itself is called into question.

A capital requirement for residential mortgages (typically one of the lowest risk weights and also most significant asset classes) that looks way too low when you note that house prices can easily decline by 10 or 20%, starts to make more sense when you recognise that there is (or should be) a substantial pool of borrower equity taking the brunt of the initial decline in the value of collateral. The diversity of borrowers is also an important factor in reducing the credit risk of the exposures (though not necessarily the systemic risk of an overall meltdown in the economy). Where that is not the case (and hence the renewed focus on credit origination standards and macro prudential policy in general), then low risk weights are not justified.
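A stylised example (hypothetical numbers, my own illustration rather than anything from the book) of how borrower equity takes the first loss on a residential mortgage:

```python
# Stylised illustration of how borrower equity absorbs the first loss
# on a residential mortgage. Numbers are hypothetical.

house_price = 500_000
ltv = 0.80                      # loan-to-valuation ratio at origination
loan = house_price * ltv        # $400,000 loan, $100,000 of borrower equity

for price_fall in (0.10, 0.20, 0.30):
    collateral = house_price * (1 - price_fall)
    # The bank only faces a loss if the fall exceeds the borrower's equity,
    # and then only if the borrower actually defaults
    shortfall = max(loan - collateral, 0)
    print(f"Price fall {price_fall:.0%}: collateral {collateral:,.0f}, "
          f"shortfall on default {shortfall:,.0f}")
```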

I recognise that this argument (incorporating the value of the borrower’s equity) does not work for traded assets where the mark to market change in the value of the asset flows directly to the bank’s equity. It does however work for the kinds of assets on bank balance sheets that typically have very low risk weights (i.e. the primary concern of the leverage ratio advocates). It also does not preclude erring on the side of caution when calculating risk weights so long as the model respects the relative riskiness of the various assets impacting the value of equity.

How much also depends on the quality of risk management (and supervision)

The discussion of how much capital a bank requires should also recognise the distinction between how much a well managed bank needs and how much a poorly managed bank needs. In a sense, the authors are proposing that all banks, good and bad, should be made to hold the capital required by bad banks. Their focus on highlighting the risks of banking obscures the fact that prudent banking mitigates the downside and that well managed banks are not necessarily consigned to the extremes of risk the authors present as the norm of banking.

While not expressed in exactly that way, the distinction I am drawing is implicit in Basel III’s Total Loss Absorbing Capital (TLAC) requirements now being put in place. TLAC adds a substantial layer of additional loss absorption on top of already substantially strengthened common equity requirements. The base layer of capital can be thought of as what is required for a well managed, well supervised bank with a sound balance sheet and business model. APRA’s “Unquestionably Strong” benchmark for CET1 is a practical example of what this requirement looks like. The problem of course is that all banks argue they are good banks but the risk remains that they are in fact bad banks and we usually don’t find out the difference until it is too late. The higher TLAC requirement provides for this contingency.

What should count as capital?

I looked at this question in a recent post on the RBNZ’s proposal that virtually all of their TLAC requirement should be comprised of common equity. Admati and Hellwig side with the RBNZ but I believe that a mix of common equity and bail-in capital (along the lines proposed by APRA) is the better solution.

Read my earlier post for the long version, but the essence of my argument is that bail-in capital introduces a better discipline over bank management risk appetite than does holding more common equity. Calibrating common equity requirements to very high standards should always be the foundation of a bank capital structure. Capital buffers in particular should be calibrated to withstand very severe external shocks and to be resilient against some slippage in risk management.

The argument that shareholders need to have more “skin in the game” is very valid where the company is undercapitalised. Bail-in capital is not a substitute for getting the basics right. A bank that holds too little common equity, calibrated to an idealised view of both its own capabilities and of the capacity of the external environment to surprise the modellers, will likely find itself suppressing information that does not fit the model. Loss aversion then kicks in and management start taking more risk to win back that which was lost, just as Admati and Hellwig argue.

However, once you have achieved a position that is unquestionably strong, holding more common equity does not necessarily enhance risk management discipline. My experience in banking is that it may in fact be more likely to breed an undesirable sense of complacency or even to create pressure to improve returns. I know that the latter is not a winning strategy in the long run but in the short run the market frequently does not care.

What is the minimum return an equity investor requires?

One of the problems I find with a simplistic application of Modigliani & Miller’s (M&M) capital irrelevancy argument is that it does not seem to consider whether there is a minimum threshold return for an equity investment below which the investment is no longer sufficiently attractive to investors who are being asked to take a first loss position in a company; i.e. where is the line between debt and equity at which a return is simply not high enough to be attractive to equity investors?

Reframing the question in this way suggests that the debate between the authors and the bankers may be more about whether risk based capital adequacy models (including stress testing) can be trusted than it is about the limitations of M&M in the real world.

Summary

The authors’ solution to prudential supervision of banks is a shock and awe approach to capital that seeks to make the risk of insolvency de minimis for good banks and bad. I have done my best to be open to their arguments and indeed do agree with a number of them. My primary concern with the path they advocate is that I do not believe the extra “skin in the game” generates the risk management benefits they claim.

I see more potential in pursuing a capital structure based on

  • a level of common equity that is robustly calibrated to the needs of a well managed (and well supervised) bank
  • incorporating a well designed counter cyclical capital buffer,
  • supplemented with another robust layer of bail-in capital that imposes real costs (and accountability) on the shareholders and management of banks for whom this level of common equity proves insufficient.

The authors argue that the authorities would never use these bail-in powers for fear of further destabilising funding markets. This is a valid area of debate but I believe they conflate the risks of imposing losses on bank depositors with the kinds of risks that professional bond investors have traditionally absorbed over many centuries of banking. The golden era in which the TBTF factor shielded bank bondholders from this risk is coming to an end, but this broader class of bond investors has dealt with defaults by all kinds of borrowers. I am not sure why banks would be special in this regard if countries can default. The key issue is that the investors enter into the contract with the knowledge that they are at risk and are being paid a risk premium commensurate with the downside (which may not be that large if investors judge the banks to be well managed).

This is a complex topic so please let me know if I have missed something fundamental or have otherwise mis-represented Admati and Hellwig’s thesis. In the interim, I remain mostly unconvinced …

Tony

  1. It is worth noting that NZ has adopted a different path with respect to deposit protection, rejecting both deposit preference and deposit insurance. They also have a unique policy tool (Open Bank Resolution) that allows the RBNZ to impose losses on deposits as part of the resolution process. They are reviewing the case for deposit insurance and I believe should also reconsider deposit preference.

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.

“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”

Haldane and Nelson highlight a semantic shift in the 1870’s where the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe.

“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”

“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”

Notwithstanding its widespread use today, in Haldane and Nelson’s account, economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised and Haldane cites the rise of probability based quantum physics over old school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”

“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”

“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.

“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken”

This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does however raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.

We can of course estimate the mean loss under the benign part of the credit cycle but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn is hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can for example talk about moderate downturns say every 7-10 years with more severe recessions every 25-30 years and a 75 year cycle for financial crises. However the data is obviously sparse so it does not allow the kind of precision that is part and parcel of normally distributed events.
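The practical point about means and variances being a broken window on the world can be illustrated with a short simulation (a sketch of my own, not taken from the paper): sample statistics settle down quickly for normally distributed losses but remain unstable for a heavy-tailed power law.

```python
# Sketch: sample statistics are stable for a normal distribution but
# much less reliable for a heavy tailed power law (Pareto, tail index 1.5,
# which has a finite mean but infinite variance). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

for n in (1_000, 10_000, 100_000):
    normal_losses = rng.normal(loc=1.0, scale=0.5, size=n)
    pareto_losses = rng.pareto(1.5, size=n) + 1.0
    print(f"n={n:>7}: normal mean {normal_losses.mean():.3f}, "
          f"pareto mean {pareto_losses.mean():.3f}, "
          f"largest pareto loss {pareto_losses.max():,.0f}")
```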

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:

  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance

The account of why systems do not conform to the normal distribution does not offer much new but I found reading it useful for reflecting on the practical implications. One of the items they called out is competition which is typically assumed by economists to be a wholly benign force. This is generally true but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.

Competition in finance and banking can of course lead to beneficial innovation and efficiency gains but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk) thereby setting the system up to be prone to a self-organised critical state. Risk based capital requirements can also contribute to self-organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and they note the distinguished company of economists who have also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming the uncertainty paradigm were embraced, Haldane and Nelson consider what the practical implications would be. They make a number of proposals but I will focus on these three:
  • Agent-based modelling
  • Simple rather than complex
  • Don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents.
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts.
Haldane and Nelson note that these types of systems are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with agent-based modelling. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models, and he has also written a book ("The End of Theory") promoting this path. There is also a Bank of England paper on agent-based modelling (ABM) worth looking at.
I think there is a lot of value in agent-based modelling, but a few things impede its wider use. One is that the models do not offer the kind of precision that makes DSGE and VaR models so attractive. Another is that they require a large investment of time to build, and most practitioners are fully committed just keeping the existing models going; finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo, but they do reflect the realities of the world in which people work.
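For anyone who has not seen one, here is a deliberately minimal agent-based sketch, loosely in the spirit of the herding models associated with Kirman (my toy version, not the model in any of the papers cited). Agents switch between optimism and pessimism either at random or by copying an agent they meet, and the optimist share never settles at a single equilibrium; it drifts between extremes, consistent with the multiple, non-stationary equilibria described above.

```python
# A minimal agent-based herding sketch (toy illustration only). Agents hold
# one of two views and change view either independently (with small
# probability epsilon) or by copying a randomly met agent.
import numpy as np

rng = np.random.default_rng(seed=3)
n_agents = 100
steps = 50_000
epsilon = 0.002                     # assumed rate of independent switching
optimists = n_agents // 2
path = np.empty(steps)

for t in range(steps):
    is_optimist = rng.random() < optimists / n_agents   # pick one agent
    if rng.random() < epsilon:
        optimists += -1 if is_optimist else 1            # independent switch
    else:
        other_optimist = rng.random() < optimists / n_agents
        if is_optimist and not other_optimist:
            optimists -= 1                               # copy the pessimist
        elif not is_optimist and other_optimist:
            optimists += 1                               # copy the optimist
    optimists = min(max(optimists, 0), n_agents)
    path[t] = optimists / n_agents

print("share of time with more than 80% of agents on one side:",
      round(float(np.mean((path > 0.8) | (path < 0.2))), 2))
```

Even this stripped-down example hints at the resourcing issue: the behaviour comes from simulation rather than a closed-form solution, so calibrating and validating a production-scale version is a substantial undertaking.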

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
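A minimal sketch of the fragility point (my illustration, not Markowitz’s own calculation or anything in the paper): estimate “optimal” mean-variance weights from two different samples drawn from the same made-up return distribution and compare them with the equally-weighted portfolio. The estimated weights swing around from sample to sample even though nothing about the underlying assets has changed, while the 1/N rule is immune to estimation error by construction.

```python
# A minimal sketch of estimation fragility in mean-variance optimisation.
# Return assumptions are invented; all five assets are statistically identical.
import numpy as np

rng = np.random.default_rng(seed=4)
n_assets, n_obs = 5, 60                      # five assets, five years of monthly data
true_mu = np.full(n_assets, 0.01)            # identical true expected returns
true_cov = 0.001 * (np.eye(n_assets) + 0.5)  # identical variances, common correlation

def mean_variance_weights(returns):
    """Unconstrained weights proportional to inv(cov_hat) @ mu_hat, scaled to sum to 1."""
    mu_hat = returns.mean(axis=0)
    cov_hat = np.cov(returns, rowvar=False)
    raw = np.linalg.solve(cov_hat, mu_hat)
    return raw / raw.sum()

for trial in range(2):
    sample = rng.multivariate_normal(true_mu, true_cov, size=n_obs)
    print(f"sample {trial + 1} estimated weights:",
          np.round(mean_variance_weights(sample), 2))

print("equally-weighted portfolio:         ",
      np.round(np.full(n_assets, 1 / n_assets), 2))
```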
Structural separation is another simple approach to the problem of making the system more resilient.
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
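The minimax idea itself can be shown in a few lines. In the sketch below the policy options, scenarios and loss numbers are entirely invented for illustration; the point is only that the rule ranks options by their worst-case outcome rather than their average outcome.

```python
# A toy minimax ("worst-case") decision rule. The options, scenarios and loss
# numbers are invented purely for illustration.
losses = {
    # policy option: losses under [benign, downturn, crisis] scenarios
    "fine-tuned, complex rule": [1, 2, 18],
    "simple firebreak rule":    [6, 8, 10],
}

best_on_average = min(losses, key=lambda k: sum(losses[k]) / len(losses[k]))
minimax_choice = min(losses, key=lambda k: max(losses[k]))

print("lowest average loss:", best_on_average)   # the complex rule wins on average
print("minimax choice:     ", minimax_choice)    # the firebreak wins on the worst case
```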
And a reminder about the dangers of over-engineering …
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does, in my view, have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system, and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained when its capital adequacy ratio comes under stress.
In theory the calculation should be simple and intuitive, but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive it actually is. The reasons why are probably too much detail for today, but I will try to pick the topic up in a future post.
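For what it is worth, the static part of the rule really is simple. Below is a sketch of the baseline Basel III mapping from a bank’s CET1 ratio to the minimum share of earnings it must retain, assuming the standard 4.5% minimum plus a 2.5% conservation buffer divided into quartiles. The complexity the post refers to comes from everything this sketch ignores: countercyclical and D-SIB buffers, national variations, and the way earnings, risk weighted assets and the buffer all move at once under a stress scenario.

```python
# A simplified sketch of the baseline Basel III capital conservation logic.
# It ignores countercyclical and D-SIB buffers, national variations and the
# interaction with other distribution restrictions, which is where much of
# the real-world complexity comes from.
def minimum_conservation_ratio(cet1_ratio: float) -> float:
    """Minimum share of earnings to be retained for a given CET1 ratio (%),
    assuming a 4.5% minimum plus a 2.5% capital conservation buffer."""
    quartiles = [
        (5.125, 1.00),   # first quartile of the buffer: retain 100% of earnings
        (5.750, 0.80),   # second quartile: retain 80%
        (6.375, 0.60),   # third quartile: retain 60%
        (7.000, 0.40),   # fourth quartile: retain 40%
    ]
    for upper_bound, retention in quartiles:
        if cet1_ratio <= upper_bound:
            return retention
    return 0.0           # buffer fully rebuilt: no constraint on distributions

for ratio in (5.0, 5.5, 6.0, 6.8, 7.5):
    print(f"CET1 {ratio:.1f}% -> retain at least "
          f"{minimum_conservation_ratio(ratio):.0%} of earnings")
```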

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book "Antifragile", as does Greg Ip in "Foolproof". It also echoes Nietzsche's somewhat more poetic dictum that "that which does not kill us makes us stronger".

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.