The rise of the normal distribution

“We were all Gaussians now”

This post focuses on “Tails of the unexpected”, a 2012 paper by Andrew Haldane and Benjamin Nelson. The topic, the normal distribution, is obviously a bit technical, but the paper is readable even if you are not deeply versed in statistics and financial modelling. The condensed quote below captures the central idea I took away from it.

“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show-off, the distribution is Gaussian.”

The idea that the normal distribution should be used with care, and sometimes not at all, when seeking to analyse economic and financial systems is not news. The paper’s discussion of why this is so is useful if you have not considered the issues before but probably does not offer much new insight if you have.

What I found most interesting was the back story behind the development of the normal distribution. In particular, the factors that Haldane and Nelson believe help explain why it came to be so widely used and misused. Reading the history reminds us of what a cool idea it must have been when it was first discovered and developed.

“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
Haldane and Nelson highlight a semantic shift in the 1870s when the term “normal” began to be independently applied to this statistical distribution. They argue that adopting this label helped embed the idea that the “normal distribution” was the “usual” outcome one should expect to observe.
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith. As Hacking put it, ‘thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.’”

Notwithstanding its widespread use today, in Haldane and Nelson’s account economics and finance were not early adopters of the statistical approach to analysis, but they eventually became enthusiastic converts. The influence of physics on the analytical approaches employed in economics is widely recognised, and Haldane and Nelson cite the rise of probability-based quantum physics over old-school deterministic Newtonian physics as one of the factors that prompted economists to embrace probability and the normal distribution as key tools.

“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness.”
“Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”

Assessing the Evidence

Having outlined the story behind its development and increasingly widespread use, Haldane and Nelson then turn to the weight of evidence suggesting that normality is not a good statistical description of real-world behaviour. In its place, natural and social scientists have often unearthed behaviour consistent with an alternative distribution, the so-called power law distribution.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken”
This section of the paper probably does not introduce anything new to people who have spent time looking at financial models. It does, however, raise some interesting questions. For example, to what extent are bank loan losses better described by a power law and, if so, what does this mean for the measures of expected loss employed in banking and prudential capital requirements? How should banks and regulators respond if “…the means and variances … tell us rather little about the statistical future”? This is particularly relevant as banks transition to Expected Loss accounting for loan losses.
We can of course estimate the mean loss under the benign part of the credit cycle, but it is much harder to estimate a “through the cycle” average (or “expected” loss) because the frequency, duration and severity of cycle downturns are hard to pin down with any precision. We can use historical evidence to get a sense of the problem; we can, for example, talk about moderate downturns every 7-10 years, more severe recessions every 25-30 years and a 75-year cycle for financial crises. However, the data is sparse, so it does not allow the kind of precision that is part and parcel of normally distributed events.
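The point about ill-defined means is easy to see in a small simulation. Below is a minimal sketch (Python, purely illustrative, not from the paper) comparing the running sample mean of normal draws with draws from a Pareto distribution whose tail index is below one, so its theoretical mean is infinite: the normal mean settles down quickly, while the Pareto mean never does.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Normal draws: the sample mean converges quickly (the CLT applies).
normal_draws = rng.normal(loc=0.0, scale=1.0, size=n)

# Pareto draws with tail index a < 1: the theoretical mean is infinite,
# so the sample mean is dominated by occasional extreme observations.
a = 0.9
pareto_draws = 1 + rng.pareto(a, size=n)  # classical Pareto, support from 1

for k in (100, 1_000, 10_000, 100_000):
    print(f"n={k:>7}  normal mean={normal_draws[:k].mean():8.4f}  "
          f"pareto mean={pareto_draws[:k].mean():12.2f}")
```

Run it a few times with different seeds and the normal column barely moves while the Pareto column jumps around by orders of magnitude; that is what “the mean tells us rather little about the statistical future” looks like in practice.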

Explaining Fat Tails

The paper identifies the following drivers behind non-normal outcomes:
  • Non-linear dynamics
  • Self-organised criticality
  • Preferential attachment
  • Highly optimised tolerance
The account of why systems do not conform to the normal distribution does not offer much new, but I found it useful for reflecting on the practical implications. One of the items they call out is competition, which economists typically assume to be a wholly benign force. This is generally true, but Haldane and Nelson note the capacity for competition to contribute to self-organised criticality.
Competition in finance and banking can of course lead to beneficial innovation and efficiency gains, but it can also contribute to progressively increased risk taking (e.g. more lax lending standards, lower margins for tail risk), thereby setting the system up to be prone to a self-organised critical state. Risk based capital requirements can also contribute to self-organised criticality to the extent they facilitate increased leverage and create incentives to take on tail risk.

Where Next?

Haldane and Nelson add their voice to the idea that Knight’s distinction between risk and uncertainty is a good foundation for developing better ways of dealing with a world that does not conform to the normal distribution, and they note the distinguished company of those who have also chosen to emphasise the importance of uncertainty and the limitations of risk.
“Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a ‘pretence of knowledge’.”
Assuming that the uncertainty paradigm was embraced, Haldane and Nelson consider what the practical implications would be. They have a number of proposals, but I will focus on these:
  • agent based modelling
  • simple rather than complex
  • don’t aim to smooth out all volatility

Agent based modelling

Haldane and Nelson note that …

In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011))
In these frameworks, many of the core features of existing models need to be abandoned.
  • The “representative agents” conforming to simple economic laws are replaced by more complex interactions among a larger range of agents
  • The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
  • Linear deterministic models are usurped by non-linear tipping points and phase shifts
Haldane and Nelson note that these types of models are already being employed by physicists, sociologists, ecologists and the like. Since the paper was written (2012) we have seen some evidence that economists are experimenting with agent based modelling. A paper by Richard Bookstaber offers a useful outline of his efforts to apply these models, and he has also written a book (“The End of Theory”) promoting this path. There is also a Bank of England paper on ABM worth looking at.
I think there is a lot of value in agent based modelling, but a few things impede its wider use. One is that the models don’t offer the kinds of precision that make the DSGE and VaR models so attractive. The other is that they require a large investment of time to build, and most practitioners are fully committed just keeping the existing models going; finding the budget to pioneer an alternative path is not easy. These are not great arguments in defence of the status quo, but they do reflect certain realities of the world in which people work.
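To make the contrast with representative-agent models concrete, here is a toy sketch of the genre (Python; entirely my own illustration, not taken from Haldane and Nelson or Bookstaber): a market where demand switches between a “fundamentalist” rule and a trend-chasing “chartist” rule, with herding towards whichever style the market currently rewards. Even though the fundamental news is Gaussian, the interaction between the rules typically produces returns with positive excess kurtosis, i.e. fat tails.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20_000
price, fundamental = 0.0, 0.0
chartist_share = 0.5            # fraction of demand following the trend
prices = [0.0, 0.0]

for t in range(T):
    fundamental += rng.normal(0, 0.01)     # thin-tailed fundamental news
    momentum = prices[-1] - prices[-2]

    # Two rules: fundamentalists bet on reversion to value,
    # chartists extrapolate the recent trend.
    fund_demand = fundamental - price
    chart_demand = 5.0 * momentum
    excess = (1 - chartist_share) * fund_demand + chartist_share * chart_demand

    price += 0.1 * excess                  # simple linear price impact
    prices.append(price)

    # Herding: strong trends recruit chartists, quiet markets favour
    # fundamentalists -- this feedback is the only "complex" ingredient.
    target = 0.05 + 0.9 * np.tanh(200.0 * abs(momentum))
    chartist_share += 0.1 * (target - chartist_share)

returns = np.diff(prices[2:])
z = (returns - returns.mean()) / returns.std()
print("excess kurtosis of simulated returns:", np.mean(z**4) - 3.0)
```

The parameters are arbitrary and the model is deliberately crude, but it illustrates the point in the quote above: no single equation for a representative agent describes this system, and the “equilibrium” shifts as the population of strategies shifts.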

Simple can be more robust than complex

Haldane and Nelson also advocate simplicity in lieu of complexity as a general rule of thumb for dealing with an uncertain world.
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
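The fragility Haldane and Nelson describe is easy to demonstrate. The toy sketch below (Python; my own illustration, not from the paper) gives a mean-variance optimiser two estimation samples drawn from the same underlying return distribution: the “optimal” weights swing around dramatically with estimation noise, while the equally-weighted alternative is immune to that noise by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

n_assets = 5
true_mu = np.full(n_assets, 0.06)                  # identical true means
true_cov = 0.04 * (0.3 * np.ones((n_assets, n_assets))
                   + 0.7 * np.eye(n_assets))       # identical true risk

def mv_weights(mu, cov):
    """Unconstrained mean-variance weights, normalised to sum to 1."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

# Two different 5-year (60-month) estimation windows drawn from the
# SAME distribution; only sampling noise differs between them.
for trial in range(2):
    sample = rng.multivariate_normal(true_mu, true_cov, size=60)
    print(f"sample {trial + 1} MV weights:",
          np.round(mv_weights(sample.mean(axis=0), np.cov(sample.T)), 2))

print("equal weights:      ", np.full(n_assets, 1 / n_assets))
```

Since every asset is statistically identical here, the right answer is equal weights; the optimiser instead takes large, unstable bets on whatever the estimation window happened to show.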
I am not a big fan of the Leverage Ratio, which they cite as one example of regulators beginning to adopt simpler approaches, but the broader principle that simple is more robust than complex does ring true.
The mainstay of regulation for the past 30 years has been more complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance. In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
Structural separation is another simple approach to the problem of making the system more resilient:
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
And a reminder about the dangers of over-engineering:
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
While we can debate the precise way to achieve simplicity, the basic idea does in my view have a lot of potential to improve the management of risk in general and bank capital in particular. Complex intervention rules may simply add to existing uncertainties in the system, and the current formulation of how the Capital Conservation Ratio interacts with the Capital Conservation Buffer is a case in point. These two elements of the capital adequacy framework define what percentage of a bank’s earnings must be retained if the capital adequacy ratio comes under stress.
In theory the calculation should be simple and intuitive, but anyone who has had to model how these rules work under a stress scenario will know how complex and unintuitive it actually is. The reasons why are probably too much detail for today, but I will try to pick this topic up in a future post.
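For context, the headline rule itself is simple enough to state: the deeper a bank’s CET1 ratio sits inside the 2.5% conservation buffer, the larger the share of earnings it must retain, in quartile steps. The sketch below (Python) captures that stylised version; note that it deliberately ignores the countercyclical and D-SIB buffer extensions, which is where much of the real-world complexity creeps in.

```python
def min_earnings_retention(cet1_ratio: float) -> float:
    """Stylised Basel III capital conservation rule.

    Minimum CET1 of 4.5% plus a 2.5% conservation buffer; the deeper a
    bank sits inside the buffer, the larger the share of earnings it
    must retain. Ignores countercyclical and D-SIB buffer add-ons.
    """
    minimum, buffer = 0.045, 0.025
    if cet1_ratio <= minimum:
        return 1.0                       # below the minimum: retain everything
    depth = (cet1_ratio - minimum) / buffer
    if depth >= 1.0:
        return 0.0                       # buffer fully rebuilt: no constraint
    # Quartile steps within the buffer: 100% / 80% / 60% / 40% retention.
    return [1.0, 0.8, 0.6, 0.4][int(depth * 4)]

for ratio in (0.046, 0.055, 0.060, 0.068, 0.075):
    print(f"CET1 {ratio:.1%}: retain at least {min_earnings_retention(ratio):.0%}")
```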

Don’t aim to eliminate volatility

Systems which are adapted to volatility will tend to be stronger than systems that are sheltered from it, or in the words of Haldane and Nelson …

“And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption.”

I am a big fan of this idea. Nassim Taleb makes a similar argument in his book “Antifragile”, as does Greg Ip in “Foolproof”. It also reflects Nietzsche’s somewhat more poetic dictum that “that which does not kill us makes us stronger”.

In conclusion

If you have read this far then thank you. I hope you found it useful and interesting. If you want to delve deeper then you can find my more detailed summary and comments on the paper here. If you think I have any of the above wrong then please let me know.

“Between Debt and the Devil: Money, Credit and Fixing Global Finance” by Adair Turner (2015)

This book is worth reading, if only because it challenges a number of preconceptions that bankers may have about the value of what they do. The book also benefits from the fact that the author headed the UK Financial Services Authority during the GFC and thus had a unique inside perspective from which to observe what was wrong with the system. Since leaving the FSA, Turner has reflected deeply on the relationship between money, credit and the real economy. He argues that, notwithstanding the scale of change flowing from Basel III, more fundamental change is required to avoid a repeat of the cycle of financial crises.

Overview of the book’s main arguments and conclusions

Turner’s core argument is that increasing financial intensity, represented by credit growing faster than nominal GDP, is a recipe for recurring bouts of financial instability.

Turner builds his argument by first considering the conventional wisdom that guided much of bank prudential regulation prior to the GFC, which he summarises as follows:

  • Increasing financial activity, innovation and “financial deepening” were beneficial forces to be encouraged
  • More complete and liquid markets were believed to ensure more efficient allocation of capital, thereby fostering higher productivity
  • Financial innovations made it easier to provide credit to households and companies thereby enabling more rapid economic growth
  • More sophisticated risk measurement and control meanwhile ensured that the increased complexity of the financial system was not achieved at the expense of stability
  • New systems of originating and distributing credit, rather than holding it on bank balance sheets, were believed to disperse risk into the hands of those best placed to price and manage it

Some elements of Turner’s account of why this conventional wisdom was wrong do not add much to previous analysis of the GFC. He notes, for example, the conflation of the concepts of risk and uncertainty that weakened the risk measurement models the system relied on, and concludes that risk based capital requirements should be forgone in favour of a very high leverage ratio requirement. However, in contrast to other commentators who attribute much of the blame to the moral failings of bankers, Turner argues that this is a distraction. While problems with the way that bankers are paid need to be addressed, Turner argues that the fundamental problems are that:

  • modern financial systems left to themselves inevitably create debt in excessive quantities,
  • in particular, the system tends to create debt that does not fund new capital investment but rather the purchase of already existing assets, above all real estate.

Turner argues that the expansion of debt funding the purchase or trading of existing assets drives financial booms and busts, while the debt overhang left behind by the boom explains why recovery from a financial crisis is typically anaemic and protracted. Much of this analysis is similar to ideas developed by Hyman Minsky, while the slow pace of recovery in the aftermath of the GFC reflects a theme that Reinhart and Rogoff observed in “This Time Is Different”, their analysis of financial crises over many centuries.

The answer, Turner argues, is to build a less credit intensive growth model. In pursuing this goal, he argues that we also need to understand and respond to the implications of three underlying drivers of increasing credit intensity:

  1. the increasing importance of real estate in modern economies,
  2. increasing inequality, and
  3. global current account imbalances.

Turner covers a lot of ground, and I do not necessarily agree with everything in his book, but I do believe his analysis of what is wrong with the system is worth reading.

Let me start with an argument I do not find compelling; i.e. that risk based capital requirements are unreliable because they are based on a fundamental misunderstanding of the difference between risk (which can be measured) and uncertainty (which cannot):

  • Distinguishing between risk and uncertainty is clearly a fundamental part of understanding risk and Turner is not alone in emphasising its importance
  • I believe that means we should treat risk based capital requirements with a healthy degree of scepticism and a clear sense of their limitations, but that does not render them entirely unreliable, especially when we are using them to understand relative differences in risk and to calibrate capital buffers
  • The obvious problem with non-risk based capital requirements is that they create incentives for banks to take higher risk that may eventually offset the supposed increase in soundness attached to the higher capital
  • It may be that Turner discounts this concern because he envisages a lower credit growth/intensity economy delivering less overall systemic risk, or because he envisages a more active role for the public sector in determining what kinds of assets banks lend against; i.e. his support for higher capital may stem mostly from the fact that it reduces the capacity of private banks to generate credit growth

While advocating much higher capital, Turner does seem to part company with M&M purists by expressing doubt that equity investors will be willing to accept deleveraged returns. His reasoning is that equity investments need a certain threshold return to be “equity like”, while even a massively deleveraged ROE still carries downside risks that are unacceptable to debt investors.
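A rough numeric illustration of the point (my own arithmetic, not Turner’s): if ROE is net income over equity, it equals the return on assets multiplied by leverage, so holding the return on assets fixed, halving leverage halves ROE.

```python
def roe(return_on_assets: float, leverage: float) -> float:
    """ROE = ROA x leverage (assets / equity), where ROA is net income
    over assets. Holding ROA fixed as leverage falls is the sceptical,
    non-M&M assumption Turner is describing. Illustrative only."""
    return return_on_assets * leverage

roa = 0.01  # hypothetical 1% net return on assets
for lev in (20, 10, 5):
    print(f"leverage {lev:>2}x -> ROE {roe(roa, lev):.0%}")
```

At 5x leverage the 5% return may no longer look “equity like” to an equity investor, yet it still carries more downside risk than a debt investor will tolerate; that is the gap Turner is pointing to.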

Let me turn now to the arguments that I think raise valid concerns and deserve serious attention.

Notwithstanding my scepticism regarding a leverage ratio as the solution, the arguments he makes about the dangers of excessive credit growth resonate strongly with what I learned during my banking career. Turner is particularly focussed on the downsides of applying excessive debt to the financing of existing assets, real estate in particular. The argument seems to be similar to (if not based on) the work of Hyman Minsky.

Turner’s description of the amount of money that banks can create as “infinitely elastic” seems an overstatement to me, especially in the Australian context where the Net Stable Funding Ratio (NSFR) weighs on the capacity to grow the balance sheet. However, his general point rings true: credit fuelled demand for a relatively inelastic supply of desirable residential property tends to result in inflated property values with no real social value.

What banks can do about this remains an open question, given that resolving the problem of inelastic supply of property is outside their direct control. It is obviously important, though, that they understand the dynamics of the market underpinning their largest asset class, and doing so may help them engage more constructively with public policy debates that seek to address the problem.

Turner’s analysis of the downsides of easy monetary policy (the standard response to economic instability) also rings true. He points out that lower interest rates tend to inflate asset values (residential property in particular, given its perceived value as a safe asset), which does not address the fundamental problem of over-indebtedness and may serve to increase economic inequality. His discussion of the impact of monetary policy and easy credit on economic inequality is also interesting. The banks providing credit in an easy money environment may not necessarily be taking undue risk, and prudential supervisors have tools to ensure sound lending standards are maintained if they do believe there is a problem with asset quality. What may happen, however, is that the wealthier segments of society benefit the most under easy money because they have the surplus cash flow to buy property at inflated values, while first homebuyers are squeezed out of the market. Again, banks’ capacity to address the problem may be limited, but Turner’s analysis prompted me to reflect on what increasing economic inequality might mean for bank business models.

In addition to much higher bank capital requirements, Turner’s specific recommendations for moving towards a less credit intensive economy include:

  • Government policies related to urban development and the taxation of real estate
  • Changing tax regimes to reduce the current bias in favour of debt over equity financing (note that Australia is one of the few countries with a dividend imputation system that does reduce the bias to debt over equity)
  • Broader macro prudential powers for central banks, including the power to impose much larger countercyclical capital requirements
  • Tough constraints on the ability of the shadow banking system to create credit and money equivalents
  • Using public policy to produce different allocations of capital than would result from purely market based decisions; in particular, deliberately leaning against the market signal based bias towards real estate and instead favouring other “potentially more socially valuable forms of credit allocation”
  • Recognising that the traditional easy monetary policy response to an economic downturn (or ultra-easy in the case of a financial crisis such as the GFC) is better than doing nothing, but comes at the cost of reigniting the growth in private credit that generated the initial problem, creating incentives for risky financial engineering and exacerbating economic inequality via inflated asset prices.

For those who want to dig deeper, I have gone into a bit more detail here on what Turner has to say about the following topics:

  • The way in which inefficient and irrational markets leave the financial system prone to booms and busts
  • The dangers of debt contracts sets out how certain features of these contracts increase the risk of instability and hamper the recovery
  • Too much of the wrong sort of debt describes features of the real estate market that make it different from other asset classes
  • Liberalisation, innovation and the credit cycle on steroids recaps on the philosophy that drove the deregulation of financial markets and what Turner believes to be the fundamental flaws with that approach. In particular his conclusion that the amount of credit created and its allocation is “… too important to be left to bankers…”
  • Private credit and money creation offers an outline of how bank deposits evolved to play an increasing role (the key point being that it was a process of evolution rather than overt public policy design choices)
  • Credit financed speculation discusses the ways in which credit in modern economies tends to be used to finance the purchase of existing assets, in particular real estate, and the issues that flow from this.
  • Inequality, credit and more inequality sets out some ways in which the extension of credit can contribute to increasing economic inequality
  • Capital requirements sets out why Turner believes capital requirements should be significantly increased and why capital requirements (i.e. risk weights) for some asset classes (e.g. real estate) should be calibrated to reflect the social risk of the activity and not just the private risks captured by bank risk models
  • Turner’s defence against the argument that his proposals are anti-market and anti-growth.

“The End of Alchemy” by Mervyn King

Anyone interested in the conceptual foundations of money and banking will I think find this book interesting. King argues that the significant enhancements to capital and liquidity requirements implemented since the GFC are not sufficient because of what he deems to be fundamental design flaws in the modern system of money and banking.

King is concerned with the process by which bank lending creates money in the form of bank deposits and with the process of maturity transformation in banking under which long term, illiquid assets are funded to varying degrees by short term liabilities including deposits. King applies the term “alchemy” to these processes to convey the sense that the value created is not real on a risk adjusted basis.

He concedes that there will be a price to pay in forgoing the “efficiency benefits of financial intermediation” but argues that these benefits come at the cost of a system that:

  • is inherently prone to banking crises because, even post Basel III, it is supported by too little equity and too little liquidity, and
  • can only be sustained in the long run by the willingness of the official sector to provide Lender of Last Resort liquidity support.

King’s radical solution is that all deposits must be 100% backed by liquid reserves, which would be limited to safe assets such as government securities or reserves held with the central bank. King argues that this removes the risk of (and incentive for) bank runs and, for those with an interest in economic history, acknowledges that the idea originated with “many of the most distinguished economists of the first half of the twentieth century”, who proposed an end to fractional reserve banking under what was known as the “Chicago Plan”. Since deposits are backed by safe assets, it follows that all other assets (i.e. loans to the private sector) must be financed by equity or long term debt.

The intended result is to separate

  • safe, liquid “narrow” banks issuing deposits and carrying out payment services
  • from risky, illiquid “wide” banks performing all other activities.

At this point, King notes that the government could in theory simply stand back and allow the risk of unexpected events to impact the value of the equity and liabilities of the banks, but he does not advocate this. This is partly because volatility of this nature can undermine consumer confidence, and partly because banks may be forced to reduce their lending in ways that have a negative impact on economic activity. So some form of central bank liquidity support remains necessary.

King’s proposed approach to central bank liquidity support is what he colloquially refers to as a “pawnbroker for all seasons”, under which the central bank agrees up front how much it will lend each bank against the collateral that bank can offer.

King argues that

“almost all existing prudential capital and liquidity regulation, other than a limit on leverage, could be replaced by this one simple rule”.

which “… would act as a form of mandatory insurance so that in the event of a crisis a central bank would be free to lend on terms already agreed and without the necessity of a penalty rate on its loans. The penalty, or price of the insurance, would be encapsulated by the haircuts required by the central bank on different forms of collateral”

leaving banks “… free to decide on the composition of their assets and liabilities… all subject to the constraint that alchemy in the private sector is eliminated”
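A stylised sketch of how such a rule might bite (Python; the asset classes, haircuts and balance-sheet figures below are invented for illustration, not King’s calibration): the central bank pre-agrees a haircut for each asset class, and the bank’s effective liquid assets after haircuts must cover its runnable short-term liabilities.

```python
# Stylised "pawnbroker for all seasons" check. Haircuts and balance-sheet
# figures are hypothetical, chosen only to illustrate the mechanics.

haircuts = {            # fraction of value the central bank will NOT lend against
    "central_bank_reserves": 0.00,
    "government_bonds": 0.05,
    "mortgages": 0.35,
    "corporate_loans": 0.50,
}

bank_assets = {         # book values, in billions
    "central_bank_reserves": 10,
    "government_bonds": 20,
    "mortgages": 50,
    "corporate_loans": 20,
}

runnable_liabilities = 60   # deposits and other short-term funding, in billions

effective_liquid_assets = sum(
    value * (1 - haircuts[asset]) for asset, value in bank_assets.items()
)

print(f"effective liquid assets: {effective_liquid_assets:.1f}bn")
print(f"runnable liabilities:    {runnable_liabilities:.1f}bn")
print("PFAS constraint satisfied:",
      effective_liquid_assets >= runnable_liabilities)
```

In this framing the haircut schedule does the work that risk weights and liquidity ratios do today, which is why King describes it as replacing almost all existing prudential regulation with one rule.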

Underpinning King’s thesis are four concepts that appear repeatedly:

  • Disequilibrium; King explores ways in which economic disequilibrium repeatedly builds up followed by disruptive change as the economy rebalances
  • Radical uncertainty; this is the term he applies to Knight’s concept of uncertainty, as distinct from risk. He uses it to argue that any risk based approach to capital adequacy is not built on sound foundations because it will not capture the uncertain dimension of unexpected loss that we should really be concerned with
  • The “prisoner’s dilemma” to illustrate the difficulty of achieving the best outcome when there are obstacles to cooperation
  • Trust; he sees trust as the key ingredient that makes a market economy work but also highlights how fragile that trust can be.

My thoughts on King’s observations and arguments

Given that King headed the Bank of England during the GFC, and was directly involved in the revised capital and liquidity rules (Basel III) that were created in response, his opinions should be taken seriously. It is particularly interesting that, notwithstanding his role in the creation of Basel III, he argues that a much more radical solution is required.

I think King is right in pointing out that the banking system ultimately relies on trust and that this reliance in part explains why the system is fragile. Trust can and does disappear, sometimes for valid reasons but sometimes because fear simply takes over even when there is no real foundation for doubting the solvency of the banking system. I think he is also correct in pointing out that a banking system based on maturity transformation is inherently illiquid, and that the only way to achieve 100% certainty of liquidity is to have one class of safe, liquid “narrow” banks issuing deposits and another class of risky, illiquid institutions (his “wide” banks) providing funding on a maturity-matched basis. This second class of funding institution would arguably not be a bank if we reserve that term for institutions that have the right to issue bank deposits.

King’s explanation of the way bank lending under the fractional reserve banking system creates money covers a very important aspect of how the modern banking and finance system operates. This is a bit technical but I think it is worth understanding because of the way it underpins and shapes so much of the operation of the economy. In particular, it challenges the conventional thinking that banks simply mobilise deposits. King explains how banks do more than just mobilise a fixed pool of deposits: the process of lending in fact creates new deposits, which add to the money supply. For those interested in understanding this in more depth, the Bank of England published a short article in its Quarterly Bulletin (Q1 2014) that you can find at the following link
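A stylised balance-sheet illustration of that point (Python; the figures are invented): when a bank makes a loan it credits the borrower’s deposit account, so a new loan and a brand-new deposit appear simultaneously, with no pre-existing deposit being “handed over”.

```python
# Toy balance sheet showing that lending creates a matching deposit.
# Figures are hypothetical, in billions.

bank = {
    "assets": {"reserves": 10, "loans": 90},
    "liabilities": {"deposits": 95, "equity": 5},
}

def make_loan(bank, amount):
    """Credit the borrower's deposit account: both sides of the balance
    sheet grow by the loan amount, expanding the money supply."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount

print("deposits before:", bank["liabilities"]["deposits"])
make_loan(bank, 20)
print("deposits after: ", bank["liabilities"]["deposits"])

# The balance sheet still balances; the new deposit was created by the
# act of lending, not transferred from some existing pool of savings.
assert sum(bank["assets"].values()) == sum(bank["liabilities"].values())
```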

He is also correct, I think, in highlighting the limits of what risk based capital can achieve in the face of “radical uncertainty”, but I don’t buy his proposal that the leverage ratio is the solution. He claims that his “pawnbroker for all seasons” approach is different from the standardised approach to capital adequacy, but I must confess I can’t see that the approaches are all that different. So even if you accept his argument that internal models are not a sound basis for regulatory capital, I would still argue that a revised and well calibrated standardised approach will always be better than a leverage ratio.

King’s treatment of the “Prisoner’s Dilemma” in money and banking is particularly interesting because it sets out a conceptual rationale for why markets will not always produce optimal outcomes when there are obstacles to cooperation. This brings to mind Chuck Prince’s infamous statement about being forced to “keep dancing while the music is playing” and offers a rationale for the role of regulation in helping institutions avoid situations in which competition impedes the ability of institutions to avoid taking excessive risk. This challenges the view that market discipline would be sufficient to keep risk taking in check. It also offers a different perspective on the role of competition in banking which is sometimes seen by economists as a panacea for all ills.

I have also attached a link to a review of King’s book by Paul Krugman.