The financial cycle and macroeconomics: What have we learnt? BIS Working Paper

Claudio Borio at the BIS wrote an interesting paper exploring the “financial cycle”. This post seeks to summarise the key points of the paper and draw out some implications for bank stress testing (the original paper can be found here).  The paper was published in December 2012, so its discussion of the implications for macroeconomic modelling may be dated but I believe it continues to have some useful insights for the challenges banks face in dealing with adverse economic conditions and the boundary between risk and uncertainty.

Key observations Borio makes regarding the Financial Cycle

The concept of a “business cycle”, in the sense of there being a regular occurrence of peaks and troughs in business activity, is widely known but the concept of a “financial cycle” is a distinct variation on this theme that is possibly less well understood. Borio states that there is no consensus definition but he uses the term to

“denote self-reinforcing interactions between perceptions of value and risk, attitudes towards risk and financing constraints, which translate into booms followed by busts. These interactions can amplify economic fluctuations and possibly lead to serious financial distress and economic disruption”.

This definition is closely related to the concept of “procyclicality” in the financial system and should not be confused with a generic description of cycles in economic activity and asset prices. Borio does not use these words but I have seen the term “balance sheet recession” employed to describe much the same phenomenon as Borio’s financial cycle.

Borio identifies five features that describe the Financial Cycle

  1. It is best captured by the joint behaviour of credit and property prices – these variables tend to closely co-vary, especially at low frequencies, reflecting the importance of credit in the financing of construction and the purchase of property.
  2. It is much longer, and has a much larger amplitude, than the traditional business cycle – the business cycle involves frequencies from 1 to 8 years whereas the average length of the financial cycle is longer; Borio cites a cycle length of 16 years in a study of seven industrialised economies and I have seen other studies indicating a longer cycle (with more severe impacts).
  3. It is closely associated with systemic banking crises which tend to occur close to its peak.
  4. It permits the identification of the risks of future financial crises in real time and with a good lead – Borio states that the most promising leading indicators of financial crises are based on simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices, especially property prices, from historical norms.
  5. It is highly dependent on the financial, monetary and real-economy policy regimes in place (e.g. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy).

Macroeconomic modelling

Borio also argues that the conventional models used to analyse the economy are deficient because they do not capture the dynamics of the financial cycle. These extracts capture the main points of his critique:

“The notion… of financial booms followed by busts, actually predates the much more common and influential one of the business cycle …. But for most of the postwar period it fell out of favour. It featured, more or less prominently, only in the accounts of economists outside the mainstream (eg, Minsky (1982) and Kindleberger (2000)). Indeed, financial factors in general progressively disappeared from macroeconomists’ radar screen. Finance came to be seen effectively as a veil – a factor that, as a first approximation, could be ignored when seeking to understand business fluctuations … And when included at all, it would at most enhance the persistence of the impact of economic shocks that buffet the economy, delaying slightly its natural return to the steady state …”

“Economists are now trying hard to incorporate financial factors into standard macroeconomic models. However, the prevailing, in fact almost exclusive, strategy is a conservative one. It is to graft additional so-called financial “frictions” on otherwise fully well behaved equilibrium macroeconomic models, built on real-business-cycle foundations and augmented with nominal rigidities. The approach is firmly anchored in the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) paradigm.”

“The purpose of this essay is to summarise what we think we have learnt about the financial cycle over the last ten years or so in order to identify the most promising way forward…. The main thesis is that …it is simply not possible to understand business fluctuations and their policy challenges without understanding the financial cycle”

There is an interesting discussion of the public policy implications (i.e. prudential, fiscal and monetary) of recognising the role of the financial cycle, but I will focus on what implications this may have for bank management in general and stress testing in particular.

Insights and questions we can derive from the paper

The observation that financial crises are signalled by simultaneous positive deviations of the ratio of private sector credit-to-GDP and asset prices, especially property prices, from historical norms covers much the same ground as the Basel Committee’s Countercyclical Capital Buffer (CCyB) and is something banks would already monitor as part of the ICAAP. The interesting question the paper poses for me is the extent to which stress testing (and the ICAAP) should focus on a “financial cycle” style disruption as opposed to a business cycle event. Even more interesting is the question of whether the higher severity of the financial cycle is simply an exogenous random variable or an endogenous factor that can be attributed to excessive credit growth.
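To make the indicator concrete, a sketch of the credit-to-GDP gap and its CCyB mapping might look like the following. To be clear about what is assumed: the BCBS guidance extracts the trend with a one-sided HP filter (smoothing parameter 400,000), which I have replaced here with a simple trailing average to keep the illustration self-contained; the 2 and 10 percentage point thresholds and the 0-2.5% buffer range do follow the BCBS calibration.

```python
# Illustrative sketch only, not the official BCBS methodology: flag periods
# where the credit-to-GDP ratio sits well above its recent trend. The BCBS
# guide uses a one-sided HP filter (lambda = 400,000); a trailing moving
# average stands in for the trend here to keep the example self-contained.

def credit_to_gdp_gap(ratios, window=20):
    """Deviation of each credit-to-GDP observation from its trailing mean."""
    gaps = []
    for i, r in enumerate(ratios):
        history = ratios[max(0, i - window + 1): i + 1]
        trend = sum(history) / len(history)
        gaps.append(r - trend)
    return gaps

def ccyb_signal(gap, lower=2.0, upper=10.0):
    """Map a gap (percentage points) to an indicative buffer add-on.

    The 2-10 percentage point thresholds and the linear mapping to a
    0-2.5% buffer follow the BCBS reference guidance."""
    if gap <= lower:
        return 0.0
    if gap >= upper:
        return 2.5
    return 2.5 * (gap - lower) / (upper - lower)

# Stylised series: credit-to-GDP ratio (%) drifting up sharply in a boom
ratios = [140 + 0.5 * t for t in range(10)] + [145 + 3.0 * t for t in range(10)]
gaps = credit_to_gdp_gap(ratios)
print(ccyb_signal(gaps[-1]))
```

The same inputs could obviously feed an internal ICAAP dashboard rather than a regulatory buffer.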

I think this matters because it has implications for how banks calibrate their overall risk appetite. The severity of the downturns employed in stress testing has in my experience gradually increased over successive iterations. My recollection is that this has partly been a response to prudential stress tests which were more severe in some respects than might have been determined internally. In the absence of any objective absolute measure of severity, it probably made sense to turn up the dial in places to align the internal benchmark scenarios as far as possible with prudential benchmarks such as the “Common Scenario” APRA employs.

At the risk of gross over-simplification, I think that banks started the stress testing process looking at both moderate downturns (e.g. 7-10 year frequency and relatively short duration) and severe recessions (say a 25 year cycle, though still a relatively short duration downturn). Bank supervisors, in contrast, have tended to focus more on severe recession and financial cycle style scenarios with more extended durations. Banks have progressively shifted their attention to scenarios more closely aligned with the severe recessions assumed by supervisors, in part because moderate recessions tend to be fairly manageable from a capital management perspective.

Why does the distinction between the business cycle and the financial cycle matter?

Business cycle fluctuations (in stress testing terms a “moderate recession”) are arguably an inherent feature of the economy that occur largely independently of the business strategy and risk appetite choices that banks make. However, Borio’s analysis suggests that the decisions that banks make (in particular the rate of growth in credit relative to growth in GDP and the extent to which the extension of bank credit contributes to inflated asset values) do contribute to the risk (i.e. probability, severity and duration) of a severe financial cycle style recession. 

Borio’s analysis also offers a way of thinking about the nature of the recovery from a recession. A moderate business cycle style recession is typically assumed to be short with a relatively quick recovery whereas financial cycle style recessions typically persist for some time. The more drawn out recovery from a financial cycle style recession can be explained by the need for borrowers to deleverage and repair their balance sheets as part of the process of addressing the structural imbalances that caused the downturn.

If the observations above are true, then they suggest a few things to consider:

  • should banks explore a more dynamic approach to risk appetite limits that incorporates the metrics identified by Borio (and also used in the calibration of the CCyB), so that the level of risk they are willing to take adjusts for where they believe they are in the cycle (and which kind of cycle they are in)?
  • how should banks think about these more severe financial cycle losses? Their measure of Expected Loss should clearly incorporate the losses expected from business cycle style moderate recessions occurring once every 7-10 years but it is less clear that the kinds of more severe and drawn out losses expected under a Severe Recession or Financial Cycle downturn should be part of Expected Loss.
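One way to see why the second question matters is a toy frequency-weighted calculation. The return periods echo the cycle frequencies discussed in this post; the loss rates per event are entirely hypothetical numbers chosen for illustration.

```python
# Toy through-the-cycle expected loss: annualise losses from downturns of
# different frequency and severity. The return periods echo the cycle
# frequencies discussed in the post; the loss rates are purely hypothetical.

scenarios = {
    # name: (average years between events, portfolio loss rate per event)
    "moderate recession": (8, 0.010),    # ~7-10 year business cycle
    "severe recession":   (25, 0.040),
    "financial crisis":   (75, 0.120),   # financial cycle style event
}

def annualised_el(scenarios):
    """Sum of (1 / return period) * loss per event, across scenarios."""
    return sum(loss / years for years, loss in scenarios.values())

el = annualised_el(scenarios)
print(f"{el:.4%}")  # blended annual expected loss across the cycle
```

On these made-up numbers the rarer financial cycle events contribute as much annualised loss as the business cycle does, yet they are precisely the events whose frequency and severity the historical data pins down least well.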

A more dynamic approach to risk appetite gets us into some interesting game theory puzzles, because a decision by one bank to pull back on risk appetite potentially allows competitors to benefit by writing more business, and potentially to benefit doubly to the extent that the decision to pull back makes it safer for competitors to write the business without fear of a severe recession (in technical economist speak we have a “collective action” problem). This is similar to the problem APRA faced when it decided to impose “speed limits” on certain types of lending in 2017. The Royal Commission was not especially sympathetic to the strategic bind banks face but I suspect that APRA understands the problem.

How do shareholders think about these business and financial cycle losses? Some investors will adopt a “risk on-risk off” approach in which they attempt to predict the downturn and trade in and out based on that view, while other “buy and hold” investors (especially retail) may be unable or unwilling to adopt a trading approach.

The dependence of the financial cycle on the fiscal and monetary policy regimes in place, and on changes in the real economy, also has potential implications for how banks think about the risk of adverse scenarios playing out. Many of the factors that Borio argues have contributed to the financial cycle (i.e. financial liberalisation under Basel II, monetary policy focussed primarily on inflation targeting and globalisation in the real economy) are reversing: regulation of banks is much more restrictive, monetary policy appears to have recognised the limitations of a narrow inflation target focus and the pace of globalisation appears to be slowing in response to growing concern that its benefits are not shared equitably. I am not sure exactly what these changes mean other than to recognise that they should in principle have some impact. At a minimum, it seems the pace of credit expansion might be slower in the coming decades than it has been in the past 30 years.

All in all, I find myself regularly revisiting this paper, referring to it or employing the distinction between the business and financial cycle. I would recommend it to anyone interested in bank capital management. 

The rise of the normal distribution

“We were all Gaussians now”

This post focuses on a joint paper written in 2012 by Andrew Haldane and Benjamin Nelson titled “Tails of the unexpected”. The topic is the normal distribution which is obviously a bit technical but the paper is still readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from the paper.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
Haldane and Nelson highlight a semantic shift in the 1870s where the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome that one should expect to observe.
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, ‘thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.’”

Notwithstanding its widespread use today, in Haldane and Nelson’s account economics and finance were not early adopters of the statistical approach to analysis but eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised, and the paper cites the rise of probability-based quantum physics over old school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as a key tool.

” … in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”
“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken”
This section of the paper probably does not introduce anything new to people who have spent any time looking at financial models. It does, however, raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss that are employed in banking and prudential capital requirements; i.e. how should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of the cycle downturn is hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can for example talk about moderate downturns say every 7-10 years with more severe recessions every 25-30 years and a 75 year cycle for financial crises. However the data is obviously sparse so it does not allow the kind of precision that is part and parcel of normally distributed events.
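The fragility of the sample mean under fat tails is easy to demonstrate. A small simulation, using a Pareto distribution as a stand-in for power law behaviour (tail index 1.5, so the variance is infinite), shows the point:

```python
import random

# Sketch: why sample means are a fragile guide under fat tails. We draw
# "losses" from a thin-tailed (folded normal) and a fat-tailed (Pareto)
# distribution and watch the running sample mean. A tail index <= 2 implies
# infinite variance, so the usual law-of-large-numbers intuition weakens.

random.seed(42)

def running_means(draw, n=10_000):
    total, means = 0.0, []
    for i in range(1, n + 1):
        total += draw()
        means.append(total / i)
    return means

normal_means = running_means(lambda: abs(random.gauss(1.0, 0.5)))
pareto_means = running_means(lambda: random.paretovariate(1.5))  # tail index 1.5

def spread(means, tail=1000):
    """How much the running mean still moves over the last `tail` steps."""
    recent = means[-tail:]
    return max(recent) - min(recent)

print(round(spread(normal_means), 6), round(spread(pareto_means), 6))
```

The running mean of the thin-tailed series settles down quickly; the fat-tailed one keeps being revised by rare large draws, which is exactly the sense in which “means and variances … tell us rather little about the statistical future”.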

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new but I found reading it useful for reflecting on the practical implications. One of the items they called out is competition, which is typically assumed by economists to be a wholly benign force. This is generally true but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk) thereby setting the system up to be prone to a self organised critical state. Risk based capital requirements can also contribute to self organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and note the distinguished company that has also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge.”
Assuming that the uncertainty paradigm was embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts.
Haldane and Nelson note that these types of models are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with “agent based modelling”. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling but a few things impede its wider use. One is that the models don’t offer the kind of precision that makes the DSGE and VaR models so attractive. The other is that they require a large investment of time to build and most practitioners are fully committed just to keeping the existing models going. Finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo but they do reflect certain realities of the world in which people work.
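For readers who have not seen one, a deliberately minimal sketch may help convey the flavour. This is a toy of my own construction, not any published model: agents with heterogeneous thresholds copy the risk stance of randomly chosen peers, and identical small shocks can produce very different aggregate outcomes.

```python
import random

# Toy agent-based cascade model (illustrative only, not a published model):
# each agent holds a binary risk stance and flips to "risk-off" once the
# share of its randomly chosen peers that are risk-off crosses its own
# threshold. Heterogeneous thresholds plus local interaction can turn
# identical small shocks into cascades of very different sizes.

random.seed(7)

def run_cascade(n_agents=500, n_peers=10, shock=10):
    thresholds = [random.uniform(0.0, 0.5) for _ in range(n_agents)]
    peers = [random.sample(range(n_agents), n_peers) for _ in range(n_agents)]
    risk_off = [False] * n_agents
    for i in random.sample(range(n_agents), shock):  # initial small shock
        risk_off[i] = True
    changed = True
    while changed:  # iterate to a fixed point (monotone, so it terminates)
        changed = False
        for i in range(n_agents):
            if not risk_off[i]:
                frac = sum(risk_off[j] for j in peers[i]) / n_peers
                if frac >= thresholds[i]:
                    risk_off[i] = True
                    changed = True
    return sum(risk_off)  # final number of risk-off agents

sizes = [run_cascade() for _ in range(20)]
print(min(sizes), max(sizes))  # same shock size, dispersed outcomes
```

Even a toy like this exhibits the multiple equilibria and tipping point behaviour that the representative-agent models abstract away.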

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
Structural separation is another simple approach to the problem of making the system more resilient:
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over-engineering:
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio is under stress.
In theory the calculation should be simple and intuitive but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive the calculation actually is. The reasons why this is so are probably a bit too much detail for today but I will try to pick this topic up in a future post.
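For what it is worth, the retention schedule itself can be written down in a few lines, assuming the standard Basel III calibration (4.5% CET1 minimum, 2.5% buffer, quartile retention rates of 100/80/60/40 per cent); national implementations such as APRA’s differ in detail.

```python
# Sketch of the Basel III capital conservation buffer earnings-retention
# schedule (BCBS quartile table). Assumes the standard 4.5% CET1 minimum
# and 2.5% buffer; national implementations differ in detail.

def min_earnings_retention(cet1_ratio, minimum=4.5, buffer=2.5):
    """Minimum share of earnings a bank must retain, by buffer quartile."""
    if cet1_ratio <= minimum:
        return 1.0                        # at/below minimum: retain everything
    if cet1_ratio >= minimum + buffer:
        return 0.0                        # buffer intact: no constraint
    quartile = int((cet1_ratio - minimum) / (buffer / 4))  # 0..3
    return [1.0, 0.8, 0.6, 0.4][quartile]

for ratio in (4.8, 5.5, 6.0, 6.8, 7.5):
    print(ratio, min_earnings_retention(ratio))
```

The mapping in isolation is simple; the complexity arises when it is applied period by period in a stress scenario, with the retention decision feeding back into the capital ratio it depends on.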

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile”, as does Greg Ip in “Foolproof”. It also echoes Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

Swiss money experiment

Last month I posted a review of Mervyn King’s book “The End of Alchemy”. One of the central ideas in King’s book was that all deposits must be backed 100% by liquid, safe assets. It appears that the Swiss are being asked to vote on a proposal labelled the “Sovereign Money Initiative” which may not be exactly the same as King’s idea but comes from the same school of money philosophy.

It is not clear that there is any popular support for the proposal but it would be a fascinating money experiment if it did get support. Thanks to Brian Reid for flagging this one to me.

Tony


Looking under the hood – The IRB formula

This post is irredeemably technical so stop here if that is not your interest. If you need to understand some of the mechanics of the formula used to calculate credit risk weighted assets under the advanced Internal Ratings Based (IRB) approach then the BCBS published a paper in 2005 which offers an explanation:

  • the economic foundations of the IRB approach
  • the underlying mathematical model and its input parameters.

While a lot has changed as a result of Basel III, the models underlying the calculation of IRB capital requirements are still based on the core principles agreed under Basel II that are explained in this BCBS paper.

The notes in the linked page below mostly summarise the July 2005 paper with some emphasis (bolded text) and comments (in italics) that I have added. The paper is a bit technical but worth reading if you want to understand the original thinking behind the Basel II risk weights for credit risk.

I initially found the paper useful for revisiting the foundation assumptions of the IRB framework as background to considering the regulatory treatment of Expected Loss as banks transition to IFRS 9. The background on how the risk weight was initially intended to cover both Expected and Unexpected Loss, but was revised so that capital was only required to cover Unexpected Loss, is especially useful when considering the interaction of loan loss provisioning with capital requirements.
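For readers who want to see the mechanics, the corporate risk weight function described in the BCBS paper can be sketched as follows. Note the simplifications: this is the corporate asset class only, it omits the SME firm-size adjustment, and the PD/LGD/EAD values in the example are purely illustrative.

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Sketch of the Basel II advanced IRB capital formula for corporate
# exposures, per the 2005 BCBS explanatory note. K is the unexpected-loss
# capital charge: the 99.9th percentile conditional loss minus expected
# loss. Omits the SME firm-size adjustment for brevity.

N = NormalDist().cdf       # standard normal CDF
G = NormalDist().inv_cdf   # its inverse

def corporate_capital_requirement(pd, lgd, m=2.5):
    """Capital charge K per unit of EAD (corporate asset class)."""
    # Supervisory asset correlation, decreasing in PD
    r = (0.12 * (1 - exp(-50 * pd)) / (1 - exp(-50))
         + 0.24 * (1 - (1 - exp(-50 * pd)) / (1 - exp(-50))))
    # Conditional (99.9th percentile) PD from the single-factor Vasicek model
    cond_pd = N((G(pd) + sqrt(r) * G(0.999)) / sqrt(1 - r))
    # UL-only calibration: subtract expected loss (LGD * PD)
    k = lgd * cond_pd - lgd * pd
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    return k * (1 + (m - 2.5) * b) / (1 - 1.5 * b)

pd, lgd, ead = 0.01, 0.45, 1_000_000   # illustrative inputs only
k = corporate_capital_requirement(pd, lgd)
rwa = k * 12.5 * ead                   # RWA = K * 12.5 * EAD
print(round(k, 4), round(rwa))
```

At PD 1%, LGD 45% and 2.5 years maturity this reproduces the familiar corporate risk weight of roughly 92%, which is a useful sanity check against published Basel II examples.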

Reading the BCBS paper has also been useful for thinking through a range of related issues including:

  • The rationale for, and impact of, prudential conservatism in setting the risk parameters used in the IRB formula
  • The cyclicality of a risk sensitive capital requirement (and the potential for procyclicality) and what might be done to mitigate the risk of procyclical impacts on the economy

If you have read this far then my summary of the BCBS paper and my comments /observations can be found here (and thank you).

I am not a credit risk model expert, so the summary of the paper and my comments must be read with that in mind. I did this to help me think through some of the issues with bank capital adequacy. Hopefully others will find the notes useful. If you see something wrong or something you disagree with then let me know.

“The Great Divide” by Andrew Haldane

This speech by Andrew Haldane (Chief Economist at the Bank of England) was given in 2016 but is still worth reading for anyone interested in the question of what role banks play in society and why their reputation is not what it once was. Some of my long term correspondents will be familiar with the paper and may have seen an earlier draft of this post.

“The Great Divide” refers to a gap between how banks perceive themselves and how they are perceived by the community. Haldane references a survey the BOE conducted in which the most common word used by banks to describe themselves was “regulated” while “corrupt” was the community choice closely followed by “manipulated”, “self-serving”, “destructive” and “greedy”. There is an interesting “word cloud” chart in the paper representing this gap in perception.

While the focus is on banks, Haldane makes the point that the gap in perceptions reflects a broader tension between the “elites” and the common people. He does not make this explicit connection but it seemed to me that the “great divide” he was referencing could also be argued to be manifesting itself in the increasing support for populist political figures purporting to represent the interests of the common people against career politicians. This broader “great divide” idea seemed to me to offer a useful framework for thinking about the challenges the banking industry is facing in rebuilding trust.

Haldane uses this “great divide” as a reference for discussing

  • The crucial role finance plays in society
  • The progress made so far in restoring trust in finance
  • What more needs to be done

The crucial role finance plays in society

Haldane argues that closing the trust deficit between banks and society matters for two reasons

  • because a well functioning financial system is an essential foundation for a growing and well functioning economy – to quote Haldane “that is not an ideological assertion from the financial elite; it is an empirical fact”
  • but also because the downside of a poorly functioning financial system is so large

Haldane uses the GFC to illustrate the downside in terms of the destruction of the value of financial capital and physical capital but he introduces a third form of capital, “social capital” that he argues may matter every bit as much to the wealth and well being of society. He defines social capital as the “relationships, trust and co-operation forged between different groups of people over time. It is the sociological glue that binds diverse societies into a cohesive whole”. The concept of “trust” is at the heart of Haldane’s definition of social capital.

Haldane cites evidence that trust plays an important role at both the micro and macro level in value creation and growth and concludes that “… a lack of trust jeopardises one of finance’s key societal functions – higher growth”.

In discussing these trends, Haldane distinguishes “personalised trust” and “generalised trust”. The former refers to mutual co-operation built up through repeated personal interactions (Haldane cites examples like visits to the doctor or hairdresser) while the latter is attached to an identifiable but anonymous group (Haldane cites trust in the rule of law, or government or Father Christmas).

He uses this distinction to explore why banks have lost the trust of the community:

He notes that banking was for most of its history a relationship based business. The business model was not perfect but it did deliver repeated interactions with customers that imbued banking with personalised trust. At the same time its “mystique” (Haldane’s term) meant that banking maintained a high degree of generalised trust as well.

He cites the reduction in local branches, a common strategy pre GFC, as one of the changes that delivered lower costs but reduced personal connections thereby contributing to reducing personalised trust. For a while, the banking system could reap the efficiency gains while still relying on generalised trust but the GFC subsequently undermined the generalised trust in the banking system. This generalised trust has been further eroded by the continued run of banking scandals that convey the sense that banks do not care about their customers.

What can be done to restore trust in finance

He notes the role that higher capital and liquidity have played but argues that this is not enough. He proposes three paths:

  1. Enhanced public education
  2. Creating “Purpose” in banking
  3. Communicating “Purpose” in banking

Regarding public education, there is a telling personal anecdote he offers on his experience with pensions. He describes himself as “moderately financially literate” but follows with “Yet I confess to not being able to make the remotest sense of pensions. Conversations with countless experts and independent financial advisors have confirmed for me only one thing – that they have no clue either”. This may be dismissed as hyperbole but it does highlight that most people will be less financially literate than Haldane and are probably poorly equipped to deal with the financial choices they are required to make in modern society. I am not sure that education is the whole solution.

Regarding “purpose”, Haldane’s main point seems to be that there is too much emphasis on shareholder value maximisation and not enough balance. This seems to be an issue that is amplified by the UK Companies Act that requires that directors place shareholder interests as their primary objective. To the best of my knowledge, the Australian law does not have an equivalent explicit requirement to put shareholders first but we do grapple with the same underlying problem. Two of my recent posts (“The World’s Dumbest Idea” and “The Moral Economy”) touch on this issue.

Regarding communicating purpose, Haldane cites some interesting evidence that the volume of information provided by companies is working at cross purposes with actual communication with stakeholders. Haldane does not make the explicit link but Pillar 3 clearly increases the volume of information provided by banks. The points raised by Haldane imply (to me at least) that Pillar 3 might actually be getting in the way of communicating clearly with stakeholders.

This is a longish post but I think there is quite a lot of useful content in the speech so I would recommend it.

“Between Debt and the Devil: Money, Credit and Fixing Global Finance” by Adair Turner (2015)

This book is worth reading, if only because it challenges a number of preconceptions that bankers may have about the value of what they do. The book also benefits from the fact that the author was the head of the UK Financial Services Authority during the GFC and thus had a unique inside perspective from which to observe what was wrong with the system. Since leaving the FSA, Turner has reflected deeply on the relationship between money, credit and the real economy and argues that, notwithstanding the scale of change flowing from Basel III, more fundamental change is required to avoid a repeat of the cycle of financial crises.

Overview of the book’s main arguments and conclusions

Turner’s core argument is that increasing financial intensity, represented by credit growing faster than nominal GDP, is a recipe for recurring bouts of financial instability.

Turner builds his argument by first considering the conventional wisdom guiding much of bank prudential regulation prior to GFC, which he summarises as follows:

  • Increasing financial activity, innovation and “financial deepening” were beneficial forces to be encouraged
  • More complete and liquid markets were believed to ensure more efficient allocation of capital thereby fostering higher productivity
  • Financial innovations made it easier to provide credit to households and companies thereby enabling more rapid economic growth
  • More sophisticated risk measurement and control meanwhile ensured that the increased complexity of the financial system was not achieved at the expense of stability
  • New systems of originating and distributing credit, rather than holding it on bank balance sheets, were believed to disperse risks into the hands of those best placed to price and manage them

Some elements of Turner’s account of why this conventional wisdom was wrong do not add much to previous analysis of the GFC. He notes, for example, the conflation of the concepts of risk and uncertainty that weakened the risk measurement models the system relied on and concludes that risk based capital requirements should be forgone in favour of a very high leverage ratio requirement. However, in contrast to other commentators who attribute much of the blame to the moral failings of bankers, Turner argues that this is a distraction. While problems with the way that bankers are paid need to be addressed, Turner argues that the fundamental problem is that:

  • modern financial systems left to themselves inevitably create debt in excessive quantities,
  • in particular, the system tends to create debt that does not fund new capital investment but rather the purchase of already existing assets, above all real estate.

Turner argues that the expansion of debt funding the purchase or trading of existing assets drives financial booms and busts, while the debt overhang left by the boom explains why recovery from a financial crisis is typically anaemic and protracted. Much of this analysis seems to be similar to ideas developed by Hyman Minsky, while the slow pace of recovery in the aftermath of the GFC reflects a theme that Reinhart and Rogoff have observed in their book “This Time Is Different”, which analyses financial crises over many centuries.

The answer, Turner argues, is to build a less credit intensive growth model. In pursuing this goal, Turner argues that we also need to understand and respond to the implications of three underlying drivers of increasing credit intensity;

  1. the increasing importance of real estate in modern economies,
  2. increasing inequality, and
  3. global current account imbalances.

Turner covers a lot of ground, and I do not necessarily agree with everything in his book, but I do believe his analysis of what is wrong with the system is worth reading.

Let me start with an argument I do not find compelling; i.e. that risk based capital requirements are unreliable because they are based on a fundamental misunderstanding of the difference between risk (which can be measured) and uncertainty (which cannot):

  • Distinguishing between risk and uncertainty is clearly a fundamental part of understanding risk and Turner is not alone in emphasising its importance
  • I believe that means that we should treat risk based capital requirements with a healthy degree of scepticism and a clear sense of their limitations but that does not render them entirely unreliable especially when we are using them to understand relative differences in risk and to calibrate capital buffers
  • The obvious problem with non-risk based capital requirements is that they create incentives for banks to take higher risk that may eventually offset the supposed increase in soundness attached to the higher capital
  • It may be that Turner discounts this concern because he envisages a lower credit growth/intensity economy delivering less overall systemic risk or because he envisages a more active role for the public sector in what kinds of assets banks lend against; i.e. his support for higher capital may stem mostly from the fact that this reduces the capacity of private banks to generate credit growth

While advocating much higher capital, Turner does seem to part company with M&M purists by expressing doubt that equity investors will be willing to accept deleveraged returns. His reasoning is that equity investments need to offer a certain threshold return to be “equity like”, while a massively deleveraged ROE still carries downside risks that debt investors would find unacceptable.
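Turner’s point can be illustrated with some back-of-the-envelope arithmetic. The figures below (a 1% return on assets and a 0.5% average cost of debt) are my own illustrative assumptions, not Turner’s: the point is simply that deleveraging compresses the spread a bank earns over its cost of debt, pulling ROE below the threshold equity investors typically expect.

```python
def roe(roa, leverage, cost_of_debt):
    """Return on equity for a stylised bank.

    roa: return on assets; leverage: assets / equity;
    cost_of_debt: average rate paid on debt funding.
    With equity normalised to 1, assets = leverage and debt = assets - 1,
    so ROE = roa * assets - cost_of_debt * debt.
    """
    assets = leverage
    debt = assets - 1.0
    return roa * assets - cost_of_debt * debt

# Illustrative numbers only (my assumptions, not Turner's):
high_leverage_roe = roe(roa=0.01, leverage=20, cost_of_debt=0.005)  # 10.5%
low_leverage_roe = roe(roa=0.01, leverage=10, cost_of_debt=0.005)   # 5.5%
```

On these made-up numbers, halving leverage from 20x to 10x takes ROE from 10.5% to 5.5%, which is consistent with Turner’s doubt that deleveraged returns would still look “equity like” to investors.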

Turning to the arguments which I think raise very valid concerns and deserve serious attention.

Notwithstanding my skepticism regarding a leverage ratio as the solution, the arguments he makes about the dangers of excessive credit growth resonate very strongly with what I learned during my banking career. Turner is particularly focussed on the downsides of applying excessive debt to the financing of existing assets, real estate in particular. The argument seems to be similar to (if not based on) the work of Hyman Minsky.

Turner’s description of the amount of money that banks can create as being “infinitely elastic” seems an overstatement to me (especially in the Australian context with the Net Stable Funding Ratio (NSFR) weighing on the capacity to grow the balance sheet) but the general point he is making about the way that credit fuelled demand for a relatively inelastic supply of desirable residential property tends to result in inflated property values with no real social value rings true.

What banks can do about this remains an open question given that resolving the problem with inelastic supply of property is outside their direct control but it is obviously important to understand the dynamics of the market underpinning their largest asset class and it may help them engage more constructively with public policy debates that seek to address the problem.

Turner’s analysis of the downsides of easy monetary policy (the standard response to economic instability) also rings true. He identifies the fact that lower interest rates tend to result in inflated asset values (residential property in particular given its perceived value as a safe asset) which do not address the fundamental problem of over-indebtedness and may serve to increase economic inequality. His discussion of the impact of monetary policy and easy credit on economic inequality is also interesting. The banks providing the credit in the easy money environment may not necessarily be taking undue risk and prudential supervisors have tools to ensure sound lending standards are maintained if they do believe there is a problem with asset quality. What may happen however is that the wealthier segments of society benefit the most under easy money because they have the surplus cash flow to buy property at inflated values while first homebuyers become squeezed out of the market. Again, the banks’ capacity to address the problem may be limited but Turner’s analysis prompted me to reflect on what increasing economic inequality might mean for bank business models.

In addition to much higher bank capital requirements, Turner’s specific recommendations for moving towards a less credit intensive economy include:

  • Government policies related to urban development and the taxation of real estate
  • Changing tax regimes to reduce the current bias in favour of debt over equity financing (note that Australia is one of the few countries with a dividend imputation system that does reduce the bias to debt over equity)
  • Broader macro prudential powers for central banks, including the power to impose much larger countercyclical capital requirements
  • Tough constraints on the ability of the shadow banking system to create credit and money equivalents
  • Using public policy to produce different allocations of capital than would result from purely market based decisions; in particular, deliberately leaning against the market signal based bias towards real estate and instead favouring other “potentially more socially valuable forms of credit allocation”
  • Recognising that the traditional easy monetary policy response to an economic downturn (or ultra-easy in the case of a financial crisis such as the GFC) is better than doing nothing but comes at a cost of reigniting the growth in private credit that generated the initial problem, creating incentives for risky financial engineering and exacerbating economic inequality via inflating asset prices.

For those who want to dig deeper, I have gone into a bit more detail here on what Turner has to say about the following topics:

  • The way in which inefficient and irrational markets leave the financial system prone to booms and busts
  • The dangers of debt contracts sets out how certain features of these contracts increase the risk of instability and hamper the recovery
  • Too much of the wrong sort of debt describes features of the real estate market that make it different from other asset classes
  • Liberalisation, innovation and the credit cycle on steroids recaps the philosophy that drove the deregulation of financial markets and what Turner believes to be the fundamental flaws with that approach. In particular his conclusion that the amount of credit created and its allocation is “… too important to be left to bankers…”
  • Private credit and money creation offers an outline of how bank deposits evolved to play an increasing role (the key point being that it was a process of evolution rather than overt public policy design choices)
  • Credit financed speculation discusses the ways in which credit in modern economies tends to be used to finance the purchase of existing assets, in particular real estate, and the issues that flow from this.
  • Inequality, credit and more inequality sets out some ways in which the extension of credit can contribute to increasing economic inequality
  • Capital requirements sets out why Turner believes capital requirements should be significantly increased and why capital requirements (i.e. risk weights) for some asset classes (e.g. real estate) should be calibrated to reflect the social risk of the activity and not just private risks captured by bank risk models
  • Turner’s defence against the argument that his proposals are anti-markets and anti-growth.

“The World’s Dumbest Idea” by James Montier of GMO.

Anyone interested in the question of shareholder value will I think find this paper by James Montier interesting.

The focus of the paper is to explore problems with elevating Shareholder Value to be the primary objective of a firm. Many companies are trying to achieve a more balanced approach but the paper is still useful background given that some investors appear to believe that shareholder value maximisation is the only valid objective a company should pursue. The paper also touches on the question of how increasing inequality is impacting the environment in which we operate.

While conceding that the right incentives can prompt better performance, JM argues that there is a point where increasing the size of the reward actually leads to worse performance;

“From the collected evidence on the psychology of incentives, it appears that when incentives get too high people tend to obsess about them directly, rather than on the task in hand that leads to the payout. Effectively, high incentives divert attention away from where it should be”

The following extracts will give you a sense of the key points and whether you want to read the paper itself.

  • “Let’s now turn to the broader implications and damage done by the single-minded focus on SVM. In many ways the essence of the economic backdrop we find ourselves facing today can be characterized by three stylized facts: 1) declining and low rates of business investment; 2) rising inequality; and 3) a low labour share of GDP (evidenced by Exhibits 7 through 9).” — Page 7 —
  • “This preference for low investment tragically “makes sense” given the “alignment” of executives and shareholders. We should expect SVM to lead to increased payouts as both the shareholders have increased power (inherent within SVM) and the managers will acquiesce as they are paid in a similar fashion. As Lazonick and Sullivan note, this led to a switch in modus operandi from “retain and reinvest” during the era of managerialism to “downsize and distribute” under SVM.” — Page 9 —
  • “This diversion of cash flows to shareholders has played a role in reducing investment. A little known fact is that almost all investment carried out by firms is financed by internal sources (i.e., retained earnings). Exhibit 13 shows the breakdown of the financing of gross investment by source in five-year blocks since the 1960s. The dominance of internal financing is clear to see (a fact first noted by Corbett and Jenkinson in 1997)” — Page 10 —
  • “The obsession with returning cash to shareholders under the rubric of SVM has led to a squeeze on investment (and hence lower growth), and a potentially dangerous leveraging of the corporate sector” — Page 11 —
  • “The problem with this (apart from being an affront to any sense of fairness) is that the 90% have a much higher propensity to consume than the top 10%. Thus as income (and wealth) is concentrated in the hands of fewer and fewer, growth is likely to slow significantly. A new study by Saez and Zucman (2014) … shows that 90% have a savings rate of effectively 0%, whilst the top 1% have a savings rate of 40%…. ultimately creating a fallacy of composition where they are undermining demand for their own products by destroying income).” —Page 13 —
  • “Only by focusing on being a good business are you likely to end up delivering decent returns to shareholders. Focusing on the latter as an objective can easily undermine the former. Concentrate on the former, and the latter will take care of itself.” — Page 14 —
  • “… management guru Peter Drucker was right back in 1973 when he suggested “The only valid purpose of a firm is to create a customer.”” — Page 14 —

People want money

This post draws on a FT article titled “People want money” which led me to an interesting paper by Gary Gorton and George Pennacchi titled “Financial Intermediaries and Liquidity Creation”.  I took the following points away from the Gorton/Pennacchi paper:

  • The modern financial markets based economy relies on “money” to facilitate the bulk of its economic activity and bank deposits are the dominant form of money
  • There is however a continuous search for ways to expand the domain of what matches the liquidity of “money” while offering a better return
  • History has seen a variety of instruments and commodities operate as money but a critical issue is whether they retain their “moneyness” during adverse economic conditions (I think this is something that the cryptocurrency advocates don’t seem to fully grasp)
  • Gorton/Pennacchi argue that the liquidity of an instrument and hence its capacity to be accepted and used as money depends on the ability of uninformed agents to trade it without fear of loss; i.e. the extent to which the value of the instrument is insulated from any adverse information about the counterparty – This I think is their big idea
  • The role of a bank has traditionally been characterised as one of credit intermediation between savers and borrowers but Gorton/Pennacchi argue that the really critical role of banks is to provide a liquid asset in the form of bank deposits that serves as a form of money
  • Note that other functions offered by banks can be replicated by non-banks (e.g. non-banks are increasingly providing payment functions for customers and offering loans)  but the capacity to issue liabilities that serve as money is unique to banks
  • The challenge is that banks tend to hold risky assets and to be opaque which undermines the liquidity of bank deposits/money (as an aside, Gorton/Pennacchi offer some interesting historical context in which opacity was useful because people trusted banks and the opacity helped shield them from any information which might undermine this trust)
  • There are a variety of ways to make bank deposits liquid in the sense that Gorton/Pennacchi define it (i.e. insensitive to adverse information about the bank) but they argue for solutions where depositors have a sufficiently deep and senior claim on the assets of the bank that any volatility in their value is of no concern to them
  • This of course is what deposit insurance and giving deposits a preferred claim in the bank loss hierarchy does (note that giving insured deposits a preferred claim on a bank’s assets also means the government can underwrite deposit insurance with very little risk of loss)
  • A lot of the regulatory change we have seen to date (more equity, less short term funding) contributes to that outcome without necessarily being expressed in terms of improving the liquidity of bank deposits in the way Gorton/Pennacchi frame the desired outcome

A lot of the above is not necessarily new but I do see some interesting connections with the role of banks in the money creation process and how this influences the debate about what is the optimum capital structure for a bank

  • It has been argued that more (and more) equity is a costless solution to the problem of how much is enough because the cost of equity will decline as the percentage of equity in the balance sheet increases
  • This conclusion depends in turn on the Modigliani and Miller (M&M) thesis that the value of a firm is independent of its financing structure
  • The Money Creation analysis however shows that banks are in fact unique (amongst private companies) in that one of the things they produce is money (or bank deposits to be more precise) – Gorton/Pennacchi explicitly call this out as a factor that means that M&M does not apply to banks in the simplistic way proponents of very high capital assert (most other critiques of higher bank capital just focus on the general limitations of M&M)
  • If you accept Gorton/Pennacchi’s argument that bank deposits need to be risk free in the minds of the users if they are to serve as money (the argument makes sense to me) then it follows that the cost of deposits does not change incrementally with changes in the financing structure in the way that M&M assume
  • In practice, bank deposits are either assumed to be risk free or they lose that risk free status – the risk trade-off is binary, one or the other, not the smooth continuum assumed by M&M
  • That implies that all the real risk in a bank balance sheet has to reside in other parts of the loss hierarchy (i.e. equity, other loss absorbing capital and senior instruments)
  • And this will be even more so under Basel III because the government is developing the capacity to impose losses on all these stakeholders without having to resort to a formal bankruptcy and liquidation process (i.e. via bail-in and TLAC)
  • Critics of bail-in argue that you can’t impose losses on liabilities but here I think they are conflating what you can’t do to depositors (where I would very much agree) with what can and does happen to bondholders relatively frequently
  • Bondholders have faced losses of principal lending to a range of counterparties (including sovereigns) so I don’t see why banks should be special in this regard – what matters is that bondholders understand the risk and price it appropriately (including not lending as much as they might otherwise have done)
  • I would also argue that imposing the risk of bail in onto bondholders is likely to be a much more effective risk discipline than requiring banks to hold arbitrarily large equity holdings that mean they struggle to earn an adequate equity like return
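The loss hierarchy argument above can be sketched as a simple waterfall. The balance sheet numbers below are hypothetical, purely for illustration; the point is that deposits sit at the bottom of the hierarchy and are only touched once everything ahead of them is exhausted.

```python
def allocate_loss(loss, layers):
    """Allocate an asset loss down a stylised creditor hierarchy.

    layers: ordered list of (name, amount) pairs, most junior first
    (equity absorbs losses before other loss-absorbing capital, which
    absorbs losses before senior creditors, and so on).
    Returns a dict of the loss borne by each layer.
    """
    borne = {}
    remaining = loss
    for name, amount in layers:
        hit = min(remaining, amount)
        borne[name] = hit
        remaining -= hit
    return borne

# Hypothetical balance sheet (not from the paper): 100 of assets funded by
# 5 equity, 3 other loss-absorbing capital (e.g. bail-in debt),
# 12 senior unsecured and 80 deposits.
hierarchy = [("equity", 5), ("loss_absorbing", 3), ("senior", 12), ("deposits", 80)]
outcome = allocate_loss(9, hierarchy)
# A loss of 9 wipes out equity and the loss-absorbing layer, senior
# bondholders bear the residual 1, and deposits bear nothing.
```

This is the sense in which all the real risk resides in the upper layers of the loss hierarchy: for depositors to lose anything on these numbers, the bank would have to lose a fifth of its assets.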

Gorton and Pennacchi’s paper did not explicitly raise this point but I also see an interesting connection with the Basel III Net Stable Funding Ratio (NSFR) requirement that does not get much attention;

  • The NSFR places great value on having a high level of depositor funding but, the greater the share of deposits in the liability stack, the more exposed those deposits are to any volatility in the value of the bank’s assets
  • So holding too many deposits might in fact be counterproductive and less resilient than an alternative structure with a slightly larger share of long term wholesale funding and a smaller share of retail deposits
  • This line of analysis also calls into question the logic underpinning the Open Bank Resolution regime in NZ where deposits can be bailed in pro rata with senior unsecured liabilities
  • The NZ regime allows some de minimis value of deposits to be excluded from bail in but there is no depositor preference such as Australia has under the Banking Act
  • The RBNZ seems to assume that applying market discipline to deposits is desirable on Moral Hazard grounds but Gorton/Pennacchi’s thesis seems to me to imply the exact opposite

Tell me what I am missing …

The Countercyclical Capital Buffer

This post uses a recent BCBS working paper as a stepping off point for a broader examination of how the countercyclical capital buffer (CCyB) can help make the banking system more resilient.

The BCBS paper is titled “Towards a sectoral application of the countercyclical capital buffer: A literature review – March 2018” (BCBS Review) and its stated aim is to draw relevant insights from the existing literature and use these to shed light on whether a sectoral application of the CCyB would be a useful extension of the existing Basel III framework under which the CCyB is applied at an aggregate country level credit measure. The views expressed in Working Papers like this one are those of their authors and do not represent the official views of the Basel Committee but they do still offer some useful insights into what prudential supervisors are thinking about.

Key points

  1. I very much agree with the observation in the BCBS Review that the standard form of the CCyB is a blunt instrument by virtue of being tied to an aggregate measure of credit growth
  2. And that a sectoral application of the CCyB (operating in conjunction with other sector focussed macro prudential tools) would be an improvement
  3. But the CCyB strategy that has been developed by the Bank of England looks to be a much better path to pursue
  4. Firstly, because it directly addresses the problem of failing to detect/predict when the CCyB should be deployed and secondly because I believe that it results in a much more “usable” capital buffer
  5. The CCyB would be 1% if APRA adopted the Bank of England strategy (the CCyB required by APRA is currently 0%) but adopting this strategy does not necessarily require Australian banks to hold more capital at this stage of the financial cycle
  6. One option would be to align one or more elements of APRA’s approach with the internationally harmonised measure of capital adequacy and to “reinvest” the increased capital in a 1% CCyB.

First a recap on the Countercyclical Capital Buffer (aka CCyB).

The CCyB became part of the international macro prudential toolkit in 2016 and is intended to ensure that, under adverse conditions, the banking sector in aggregate has sufficient surplus capital on hand required to maintain the flow of credit in the economy without compromising its compliance with prudential requirements.

A key feature in the original BCBS design specification is that the buffer is intended to be deployed in response to high levels of aggregate credit growth (i.e. high relative to sustainable long term trend rates) which their research has identified as an indicator of heightened systemic risk. That does not preclude bank supervisors from deploying the buffer at other times as they see fit, but responding to excess credit growth has been a core part of the rationale underpinning its development.
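For reference, the BCBS’s 2010 guidance translates this into a “buffer guide” based on the credit-to-GDP gap (the ratio’s deviation from its long-term trend, in percentage points): zero below a gap of 2 percentage points, rising linearly to the 2.5% maximum at a gap of 10 points, with national authorities expected to apply judgment around the guide rather than follow it mechanically. A minimal sketch of that mapping:

```python
def ccyb_buffer_guide(credit_to_gdp_gap, lower=2.0, upper=10.0, max_buffer=2.5):
    """BCBS (2010) buffer guide for the CCyB add-on, as a % of RWA.

    credit_to_gdp_gap: credit-to-GDP ratio minus its long-term trend,
    in percentage points. The guide is 0 at or below `lower`, rises
    linearly to `max_buffer` at `upper`, and is capped there.
    """
    if credit_to_gdp_gap <= lower:
        return 0.0
    if credit_to_gdp_gap >= upper:
        return max_buffer
    return max_buffer * (credit_to_gdp_gap - lower) / (upper - lower)

# A gap of 6 percentage points maps to a guide of 1.25% of RWA.
guide = ccyb_buffer_guide(6.0)
```

The linear ramp makes the bluntness discussed below easy to see: the guide responds only to the aggregate gap, with no visibility of which sector is generating the excess credit.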

The BCBS Review

The BCBS Review notes that the CCyB works in theory but concedes there is, as yet, virtually no empirical evidence that it will work in practice. This is not surprising given that it has only been in place for a very short period of time, but it is still important to remember. The BCBS Review also repeatedly emphasises the point that the CCyB may help to mitigate the credit cycle but that is a potential side benefit, not the main objective. Its primary objective is to ensure that banks have sufficient surplus capital to be able to continue lending during adverse economic conditions where losses will be consuming capital.

The Review argues that the CCyB is a useful addition to the supervisor’s tool kit but is a blunt instrument that impacts all sectors of the economy indiscriminately rather than just targeting the sectors which are the source of systemic concern. It concludes that applying the CCyB at a sectoral level might be more effective for three reasons

  • more direct impact on the area of concern,
  • stronger signalling power, and
  • smaller effects on the wider economy than the Basel III CCyB.

The Review also discusses the potential to combine a sectoral CCyB with other macro prudential instruments; in particular the capacity for the two approaches to complement each other;

To quote the Review: “Generally, macroprudential instruments that operate through different channels are likely to complement each other. The literature reviewed indicates that a sectoral CCyB could indeed be a useful complement to alternative sectoral macroprudential measures, including borrower-based measures such as LTV, LTI and D(S)TI limits. To the extent that a sectoral CCyB is more effective in increasing banks’ resilience and borrower-based measures are more successful in leaning against the sectoral credit cycle, both objectives could be attained more effectively and efficiently by combining the two types of instruments. Furthermore, there is some evidence that suggests that a sectoral CCyB could have important signalling effects and may therefore act as a substitute for borrower-based measures.”

A Sectoral CCyB makes sense

Notwithstanding repeated emphasis that the main point of the CCyB is to ensure banks can and will continue to support credit growth under adverse conditions, the Review notes that there is not much, if any, hard empirical evidence on how effective a release of the CCyB might be in achieving this. The policy instrument’s place in the macro prudential tool kit seems to depend on the intuition that it should help, backed by some modelling that demonstrates how it would work and a pinch of hope. The details of the modelling are not covered in the Review but I am guessing it adopts a “homo economicus” approach in which the agents act rationally. The relatively thin conceptual foundations underpinning the BCBS version of the CCyB are worth keeping in mind.

The idea of applying the CCyB at a sectoral level seems to make sense. The more targeted approach advocated in the Review should in theory allow regulators to respond to sectoral areas of concern more quickly and precisely than would be the case when the activation trigger is tied to aggregate credit growth. That said, I think the narrow focus of the Review (i.e. should we substitute a sectoral CCyB for the current approach) means that it misses the broader question of how the CCyB might be improved. One alternative approach that I believe has a lot of promise is the CCyB strategy adopted by the Bank of England’s Financial Policy Committee (FPC).

The Bank of England Approach to the CCyB (is better)

The FPC published a policy statement in April 2016 explaining that its approach to setting the countercyclical capital buffer is based on five core principles. Many of these are pretty much the same as the standard BCBS policy rationale discussed above but the distinguishing feature is that it “… intends to set the CCyB above zero before the level of risk becomes elevated. In particular, it expects to set a CCyB in the region of 1% when risks are judged to be neither subdued nor elevated.”

This contrasts with the generic CCyB, as originally designed by the BCBS, which sets the default position of the buffer at 0% and only increases it in response to evidence that aggregate credit growth is excessive. This might seem like a small point but I think it is a material improvement on the BCBS’s original concept for two reasons.

Firstly, it directly addresses the problem of failing to detect/predict when systemic risk in the banking system requires prudential intervention. A lot of progress has been made in dealing with this challenge, not the least of which has been to dispense with the idea that central banks had tamed the business cycle. The financial system however retains its capacity to surprise even its most expert and informed observers so I believe it is better to have the foundations of a usable countercyclical capital buffer in place as soon as possible after the post crisis repair phase is concluded rather than trying to predict when it might be required.

The FPC still monitors a range of core indicators for the CCyB grouped into three categories.

  • The first category includes measures of ‘non-bank balance sheet stretch’, capturing leverage in the broader economy and in the private non-financial (ie household and corporate) sector specifically.
  • The second category includes measures of ‘conditions and terms in markets’, which capture borrowing terms on new lending and investor risk appetite more broadly.
  • The third category includes measures of ‘bank balance sheet stretch’, which capture leverage and maturity/liquidity transformation in the banking system.

However the FPC implicitly accepts that it can’t predict the future, so it substitutes a simple, pragmatic and error-resilient strategy (put the default CCyB buffer in place ASAP) for the harder problem of trying to predict when it will be needed. This strategy retains the option of increasing the CCyB, and is both simpler to administer and less prone to error than the BCBS approach. The FPC might still miss the turning point but it has a head start on the problem if it does.

The FPC also integrates its CCyB strategy with its approach to stress testing. Each year the stress tests include a scenario:

“intended to assess the risks to the banking system emanating from the financial cycle – the “annual cyclical scenario”

The severity of this scenario will increase as risks build and decrease after those risks crystallise or abate. The scenario might therefore be most severe during a period of exuberance — for example, when credit and asset prices are growing rapidly and risk premia are compressed. That might well be the point when markets and financial institutions consider risks to be lowest. And severity will be lower when exuberance has corrected — often the time at which markets assess risks to be largest. In leaning against these tendencies, the stress-testing framework will lean against the cyclicality of risk taking: it will be countercyclical.”

The Bank of England’s approach to stress testing the UK banking system – October 2015 (page 5)

The second reason I favour the FPC strategy is because I believe it is likely to result in a more “usable” buffer once risk crystallises (not just systemic risk) and losses start to escalate. I must admit I have struggled to clearly articulate why this would be so but I think the answer lies partly in the way that the FPC links the CCyB to a four stage model that can be interpreted as a stylised description of the business cycle. The attraction for me in the FPC’s four stage model is that it offers a coherent narrative that helps all the stakeholders understand what is happening, why it is happening, what will happen next and when it will happen.

The BCBS Review talks about the importance of communication and the FPC strategy offers a good model of how the communication strategy can be anchored to a coherent and intuitive narrative that reflects the essentially cyclical nature of the banking industry. The four stages are summarised below together with some extracts setting out the FPC rationale.

Stage 1: The post-crisis repair phase in which risks are subdued – the FPC would expect to set a CCyB rate of 0%

FPC rationale: “Risks facing the financial system will normally be subdued in a post-crisis repair and recovery phase when the financial system and borrowers are repairing balance sheets. As such, balance sheets are not overextended. Asset and property prices tend to be low relative to assessed equilibrium levels. Credit supply is generally tight and the risk appetite of borrowers and lenders tends to be low. The probability of banks coming under renewed stress is lower than average.”

Stage 2: Risks in the financial system re-emerge but are not elevated – the FPC intends to set a positive CCyB rate in the region of 1% after the economy moves into this phase.

FPC rationale: “In this risk environment, borrowers will not tend to be unusually extended or fragile, asset prices are unlikely to show consistent signs of over, or under, valuation, and measures of risk appetite are likely to be in line with historical averages”. As such, it could be argued that no buffer is required but the FPC view is that a pre-emptive strategy is more “robust to the inherent uncertainty associated with measuring risks to financial stability”. It also allows subsequent adjustments to be more graduated than would be possible if the CCyB was zero.

Stage 3: Risks in the financial system become elevated: stressed conditions become more likely – the FPC would expect to increase the CCyB rate beyond the region of 1%. There is no upper bound on the rate that can be set by the FPC.

FPC rationale: “As risks in the financial system become elevated, borrowers are likely to be stretching their ability to repay loans, underwriting standards will generally be lax, and asset prices and risk appetite tend to be high. Often risks are assumed by investors to be low at the very point they are actually high. The distribution of risks to banks’ capital at this stage of the financial cycle might have a ‘fatter tail’ [and] stressed outcomes are more likely.”

Stage 4: Risks in the financial system crystallise – the FPC may cut the CCyB rate, including where appropriate to 0%.

FPC rationale: “Reducing the CCyB rate pre-emptively before losses have crystallised may reduce banks’ perceived need to hoard capital and restrict lending, with consequent negative impacts for the real economy. And if losses have crystallised, reducing the CCyB allows banks to recognise those losses without having to restrict lending to meet capital requirements. This will help to ensure that capital accumulated when risks were building up can be used, thus enhancing the ability of the banking system to continue to support the economy in times of stress.”

The March 2018 meeting of the FPC advised that the CCyB applying to UK exposures would remain unchanged at the 1% default level reflecting its judgement that the UK banking system was operating under Stage 2 conditions.

Calibrating the size of the CCyB

The FPC’s approach to calibrating the size of the CCyB also offers some interesting insights. The FPC’s initial (April 2016) policy statement explained that a “CCyB rate in the region of 1%, combined with other elements of the capital framework, provides UK banks with sufficient capital to withstand a severe stress. Given current balance sheets, the FPC judges that, at this level of the CCyB, banks would have sufficient loss-absorbing capacity to weather a macroeconomic downturn of greater magnitude than those observed on average in post-war recessions in the United Kingdom — although such estimates are inherently uncertain.”
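To put the FPC’s 1% default in context, it may help to sketch where the CCyB sits in the CET1 capital stack. The 4.5% Pillar 1 minimum and 2.5% capital conservation buffer are standard Basel III figures; bank-specific add-ons (Pillar 2, D-SIB buffers and the like) are deliberately omitted, so this is an illustrative simplification rather than any particular bank’s actual requirement:

```python
# Illustrative CET1 capital stack under Basel III with an FPC-style CCyB.
# 4.5% minimum and 2.5% conservation buffer are the standard Basel III
# figures; other add-ons (Pillar 2, D-SIB buffers) are omitted.
MINIMUM_CET1 = 4.5          # Pillar 1 minimum CET1 ratio (% of RWA)
CONSERVATION_BUFFER = 2.5   # capital conservation buffer (% of RWA)

def required_cet1(ccyb_rate: float) -> float:
    """Total CET1 requirement (% of risk-weighted assets) for a given CCyB."""
    return MINIMUM_CET1 + CONSERVATION_BUFFER + ccyb_rate

# Stage 2 (risks neither subdued nor elevated): FPC default CCyB of 1%
print(required_cet1(1.0))  # 8.0

# Stage 4 (risks crystallise): CCyB cut to 0%, releasing 1% of RWA
# that banks can consume as losses without breaching their requirement
print(required_cet1(0.0))  # 7.0
```

The point of the sketch is that releasing the CCyB lowers the requirement itself, which is what makes that slice of capital “usable” in a stress.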

The first point to note is that the FPC has chosen to anchor their 1% default setting to a severity greater than the typical post war UK recession but not necessarily a GFC style event. There is a school of thought that maintains that more capital is always better but the FPC seems to be charting a different course. This is a subtle area in bank capital management but I like the FPC’s implied defence of subtlety.

What is sometimes lost in the quest for a failure proof banking system is a recognition of the potential for unintended consequences. All other things being equal, more capital makes a bank less at risk of insolvency but all other things are almost never equal in the real world. Banks come under pressure to find ways to offset the ROE dilution associated with more capital. I know that theory says that a bank’s cost of equity should decline as a result of holding more capital so there is no need to offset the dilution but I disagree (see this post for the first in a proposed series where I have started to set out my reasons why). Attempts to offset ROE dilution also have a tendency to result in banks taking more risk in ways that are not immediately obvious. Supervisors can of course intervene to stop this happening but their already difficult job is made harder when banks come under pressure to lift returns. This is not to challenge the “unquestionably strong” benchmark adopted by APRA but simply to note that more is not always better.

Another problem with just adding more capital is that the capital has to be usable in the sense that the capital ratio needs to be able to decline as capital is consumed by elevated losses without the bank coming under pressure to immediately restore the level of capital it is expected to hold. The FPC strategy of setting out how it expects capital ratios to increase or decrease depending on the state of the financial cycle helps create an environment in which this can happen.

Mapping the BOE approach to Australia

APRA has set the CCyB at 0% whereas the BOE approach would suggest a value of at least 1% and possibly more given that APRA has felt the need to step in to cool the market down. It is important to note that transitioning to an FPC-style CCyB does not necessarily require that Australian banks hold more capital. One option would be to harmonise one or more elements of APRA’s approach to capital measurement (thereby increasing the reported capital ratio) and to “reinvest” the surplus capital in a CCyB. The overall quantum of capital required to be unquestionably strong would not change but the form of the capital would be more usable to the extent that it could temporarily decline and banks had more time to rebuild the buffer during the recovery phase.
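A stylised example may make the “reinvestment” idea concrete. All the figures below are invented for illustration; the mechanics, not the numbers, are the point:

```python
# Hypothetical illustration (all numbers invented): harmonising APRA's
# conservative capital measurement lifts the *reported* CET1 ratio without
# the bank raising a dollar of new capital, and the uplift is "reinvested"
# as a CCyB that is explicitly designed to be released in a stress.
apra_basis_ratio = 10.5      # CET1 ratio on APRA's conservative basis (%)
harmonisation_uplift = 1.0   # assumed uplift from harmonised measurement (%)

harmonised_ratio = apra_basis_ratio + harmonisation_uplift  # 11.5% reported
ccyb = harmonisation_uplift  # same capital, now held as a releasable buffer

# Total capital is unchanged, but 1% of RWA can now decline temporarily
# in a downturn without the bank breaching its ongoing requirement.
print(harmonised_ratio, ccyb)  # 11.5 1.0
```

The design choice being illustrated is that usability comes from the form of the requirement, not the quantum of capital.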

Summing up

A capital adequacy framework that includes a CCyB that is varied in a semi-predictable manner over the course of the financial cycle would be far more resilient than the one we currently have, which offers less flexibility and is more exposed to the risk of being too late or missing the escalation of systemic risk altogether.

Tell me what I am missing …

Are banks a special kind of company (or at least different)?

This is a big topic, and somewhat irredeemably technical, but I have come to believe that there are some unique features of banks that make them quite different from other companies. Notwithstanding the technical challenges, I think it is important to understand these distinguishing features if we are to have a sensible debate about the optimum financing structure for a bank and the kinds of returns that shareholders should expect on the capital they contribute to that structure.

You could be forgiven for thinking that the Australian debate about optimum capital has been resolved by the “unquestionably strong” benchmark that APRA has set and which all of the major banks have committed to meet. However, agreeing what kind of return is acceptable on unquestionably strong capital remains contentious and we have only just begun to consider how the introduction of a Total Loss Absorbing Capital (TLAC) requirement will impact these considerations.

The three distinctive features of banks I want to explore are:

  • The way in which net new lending by banks can create new bank deposits which in turn are treated as a form of money in the financial system (i.e. one of the unique things banks do is create a form of money);
  • The reality that a large bank cannot be allowed to fail in the conventional way (i.e. bankruptcy followed by reorganisation or liquidation) that other companies and even countries can (and frequently do); and
  • The extent to which bank losses seem to follow a power law distribution and what this means for measuring the expected loss of a bank across the credit cycle.

It should be noted at the outset that Anat Admati and Martin Hellwig (who are frequently cited as authorities on the issues of bank capital discussed in this post) disagree with most if not all of the arguments I intend to lay out. So, if they are right, then I am wrong. Consequently, I intend to first lay out my understanding of why they disagree and hopefully address the objections they raise. They have published a number of papers and a book on the topic but I will refer to one titled “The Parade of the Bankers’ New Clothes Continues: 31 Flawed Claims Debunked” as the primary source of the counter arguments that I will be attempting to rebut. They are of course professors whereas I bring a lowly master’s degree and some practical experience to the debate. Each reader will need to decide for themselves which analysis and arguments they find more compelling.

Given the size of the topic and the technical nature of the issues, I also propose to approach this over a series of posts starting with the relationship between bank lending and deposit creation. Subsequent posts will build on this foundation and consider the other distinctive features I have identified before drawing all of the pieces together by exploring some practical implications.

Do banks create “money”? If so, how does that impact the economics of bank funding?

The Bank of England (BoE) released a good paper on the first part of this question titled “Money creation in the modern economy”. The BoE paper does require some banking knowledge but I think it demonstrates reasonably clearly that the majority of bank deposits are created by the act of a bank making a new loan, while the repayment of bank loans conversely reduces the pool of deposits. The related but more important question for the purposes of this discussion is whether you believe that bank deposits are a form of money.
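The balance sheet mechanics the BoE paper describes can be sketched in a few lines. This is a minimal illustration of the double-entry point, not a model of any real bank:

```python
# Minimal sketch of the BoE paper's point: when a bank makes a loan it
# credits the borrower's account, so the loan (asset) and the deposit
# (liability) are created together; repayment destroys both.
class Bank:
    def __init__(self):
        self.loans = 0.0     # assets
        self.deposits = 0.0  # liabilities

    def make_loan(self, amount):
        # Both sides of the balance sheet grow by the same amount --
        # no pre-existing deposit needs to be "collected" first.
        self.loans += amount
        self.deposits += amount

    def repay_loan(self, amount):
        # Repayment extinguishes the loan and the matching deposit,
        # shrinking the pool of deposits in the system.
        self.loans -= amount
        self.deposits -= amount

bank = Bank()
bank.make_loan(100)
print(bank.loans, bank.deposits)   # 100.0 100.0
bank.repay_loan(40)
print(bank.loans, bank.deposits)   # 60.0 60.0
```

Of course the deposit can then move to another bank when the borrower spends it, which is where funding management comes in, but the creation step itself works as above.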

Admati and Hellwig identify the argument that “banks are special because they create money” as Flawed Claim #5 on the grounds that treating deposits as money is an abuse of the word “money”. They are not disputing the fact that monetary economists combine cash with demand deposits in one of the definitions of money. As I understand it, the essence of their argument is that deposits are still a debt of the issuing bank while “real” money does not need to be repaid to anyone.

It is true that deposits are a bank debt and that some deposits are repayable on demand. However, I believe the bigger issues bearing on the economics of bank financing stem from the arguments Admati and Hellwig advance to debunk what they label as Flawed Claim #4 that “The key insights from corporate finance about the economics of funding, including those of Modigliani and Miller, are not relevant for banks because banks are different from other companies“.

Their argument appears to focus on using Modigliani and Miller (“M&M”) as an “analytical approach” in which the cost (contractual or expected) of the various forms of financing are connected by a universal law of risk and reward. Their argument is that this universal law (analogous to the fundamental laws of physics) demands that using more or less equity (relative to debt) must translate to a lower or higher risk of insolvency and that rational debt investors will respond by adjusting the risk premium they demand.
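The “universal law” being invoked is Modigliani and Miller’s Proposition II, which can be stated and checked in a few lines. The figures below are illustrative, not calibrated to any actual bank, and the frictionless assumptions (no taxes, no deposit insurance, fully risk-sensitive debt pricing) are exactly the ones the rest of this post questions:

```python
# Sketch of M&M Proposition II under frictionless assumptions:
# r_E = r_A + (D/E) * (r_A - r_D), so the required return on equity
# adjusts with leverage and the blended funding cost stays equal to
# the return required on the assets, whatever the capital structure.
def cost_of_equity(r_assets, r_debt, debt, equity):
    return r_assets + (debt / equity) * (r_assets - r_debt)

def blended_funding_cost(r_assets, r_debt, debt, equity):
    r_e = cost_of_equity(r_assets, r_debt, debt, equity)
    total = debt + equity
    return (debt / total) * r_debt + (equity / total) * r_e

# Thinly capitalised vs better capitalised balance sheet: the blended
# cost is 6% in both cases because the equity return does the adjusting.
print(round(blended_funding_cost(0.06, 0.04, debt=95, equity=5), 4))   # 0.06
print(round(blended_funding_cost(0.06, 0.04, debt=90, equity=10), 4))  # 0.06
```

The question the rest of this post pursues is whether insured, preferred deposits ever priced that insolvency risk in the first place, because if they did not then there is no deposit risk premium to decline as equity increases.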

I have no issue with the analytical approach or the premise that funding costs should be related to risk. What happens however when one of the primary forms of debt funding is largely protected from the risk of insolvency? In the case of the major Australian banks, deposits account for over half of a bank’s total funding but are largely isolated from the risk of insolvency by a number of features. One is the Banking Act that confers a preferred claim in favour of Australian depositors over the Australian assets of the bank. The other is government guaranteed deposit insurance coverage capped at $250,000 per person per bank. The rationale for these acts of apparent government generosity is a contentious subject in itself but, for the purposes of this post, my working hypothesis is that the preferred claim and deposit insurance are a consequence of the fact that the community treats bank demand deposits as a form of money.
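The effect of the A$250,000 guarantee cap on a depositor’s actual exposure is easy to illustrate. The balances below are invented; the cap per account holder per bank is as described above:

```python
# Illustrative: an Australian depositor's exposure to bank insolvency is
# capped by the A$250,000 government guarantee, applied per account
# holder per bank. Balances are hypothetical.
FCS_CAP = 250_000

def uninsured_exposure(balances_by_bank):
    """Amount at risk beyond the guarantee, summed across banks."""
    return sum(max(0, bal - FCS_CAP) for bal in balances_by_bank.values())

# A depositor with A$300k at one bank and A$200k at another is exposed
# only to the A$50k above the cap at the first bank -- and even that
# slice ranks ahead of other creditors under depositor preference.
deposits = {"bank_a": 300_000, "bank_b": 200_000}
print(uninsured_exposure(deposits))  # 50000
```

This is why, for the bulk of the deposit base, the priced risk of principal loss is arguably close to zero regardless of how much equity the bank holds.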

Consequently, the risk that an Australian depositor will face a loss of principal in the already remote event of insolvency is arguably de minimis and the way that demand deposits are priced and the way they are used as a substitute for cash reflects this risk analysis. There remains a related, though separate, risk that a bank may face a liquidity problem but depositors (to the extent they even think about this) will assume that central bank Lender of Last Resort liquidity support covers this.

Admati and Hellwig do not, to the best of my knowledge, consider the implications of these features of bank funding. In their defence, I don’t imagine that the Australian banking system was front of mind when they wrote their papers but depositor preference and deposit insurance are not unique Australian innovations. However, once you consider these factors, the conclusion I draw is that the cost of a substantial share of a bank’s debt financing is relatively (if not completely) insensitive to changes in the amount of equity the bank employs in its financing structure.

One consequence is that the higher levels of common equity that Australian banks employ now, compared to the position prior to the GFC, have not resulted in any decline in the cost of deposit funding in the way that M&M say they should. In fact, the more conservative funding and liquidity requirements introduced under Basel III have required all banks to compete more aggressively for the forms of deposit funding that are deemed by the prudential requirements to be most stable, thereby driving up the cost.

The point here is not whether these changes were desirable or not (for the record I have no fundamental issue with the Unquestionably Strong capital benchmark nor with more conservative funding and liquidity requirements). The point is that the cost of deposit funding, in Australian banking at least, has not declined in the way that Admati and Hellwig’s analytical approach and universal law demands that it should.

Summing up, it is possible that other forms of funding have declined in cost as Admati and Hellwig claim should happen, but there is both an analytical rationale and hard evidence indicating that this has not been the case, for Australian bank deposits at least.

The next post will consider the other main (non equity) components of a bank funding structure and explore how their risk/cost has evolved in response both to the lessons that investors and rating agencies took away from the GFC and to the changes in bank regulation introduced by Basel III. A subsequent post will review issues associated with measuring the Expected Loss and hence the true “Through the Cycle” profitability of a bank before I attempt to bring all of the pieces together.

There is a lot of ground to cover yet. At this stage, I have simply attempted to lay out a case for why the cost of bank deposits in Australia has not obeyed the universal analytical law posited by Admati and Hellwig as the logical consequence of a bank holding more equity in its financing structure but if you disagree tell me what I am missing …

Tony

Post script: The arguments I have laid out above could be paraphrased as “bank deposits differ from other kinds of debt because banks themselves create deposits by lending”, which Admati and Hellwig specifically enumerate as Flawed Claim #6. I don’t think their rebuttal of this argument adds much to what is discussed above but for the sake of completeness I have copied below the relevant extract from their paper where they set out why they believe this specific claim is flawed. Read on if you want more detail or have a particular interest in this topic but I think the main elements of the debate are already covered above. If you think there is something here that is not covered above then let me know.

Flawed Claim 6: Bank deposits differ from other kinds of debt because banks create deposits by lending.

What is wrong with this claim? This claim is often made in opposition to a “loanable funds” view of banks as intermediaries that collect deposits in order to fund their loans. Moreover, this “money creation through lending” is said to be the way money from the central bank gets into the economy.19 The claim rests on a confusion between stocks and flows. Indeed, if a commercial bank makes a loan to a nonfinancial firm or to a private household it provides its borrowers with a claim on a deposit account. Whereas this fact provides a link between the flow of new lending and the flow of new deposits, it is hardly relevant for the bank’s funding policy, which concerns the stocks of different kinds of debt and equity that it has outstanding, which must cover the stocks of claims on borrowers and other assets that the bank holds.

A nonfinancial firm or household that receives a loan from a bank will typically use the associated claim on a deposit account for payments to third parties. The recipients of these payments may want to put some of the money they get into deposits, but they may instead prefer to move the money out of the banking system altogether, e.g., to a money market fund or a stock investment fund. 20

From the perspective of the individual bank, the fact that lending goes along with deposit creation does not change the fact that the bank owes its depositors the full amount they deposited. The key difference between deposits and other kinds of debt is not that deposits are “like money” or that deposits may be created by lending, but rather that the bank provides depositors with services such as payments through checks and credit cards or ATM machines that make funds available continuously. The demand for deposits depends on these services, as well as the interest that the bank may offer, and it may also depend on the risk of the bank becoming insolvent or defaulting.21

The suggestion that bank lending is the only source of deposit creation is plainly false.22 Deposits are created when people bring cash to the bank, and they are destroyed when people withdraw cash. In this case, the reduction in deposits – like any reduction in funding – goes along with a reduction in the bank’s assets, i.e., a shortening of its balance sheet, but this reduction affects the bank’s cash reserves rather than its lending. The impact of such withdrawals on banks and entire banking systems are well known from the Great Depression or from the recent experience of Greece. In Greece in the spring and summer of 2015, depositors also were worried about the prospect that in the event of the country’s exit from the euro, the denomination of their deposits would be changed, whereas a stack of bills under a mattress would not be affected.