“For almost a century, the world of economics and finance has been dominated by randomness … But as Nassim Taleb reminded us, it is possible to be Fooled by Randomness (Taleb (2001)). For Taleb, the origin of this mistake was the ubiquity in economics and finance of a particular way of describing the distribution of possible real world outcomes. For non-nerds, this distribution is often called the bell-curve. For nerds, it is the normal distribution. For nerds who like to show off, the distribution is Gaussian.”
The condensed quote above captures the central idea I took away from this joint paper by Haldane and Nelson. The idea that the normal distribution should be used with care, and perhaps not at all, when seeking to analyse economic and financial systems is not news and the paper does not offer much new insight on the reasons why the real world may not always conform to the bell curve.
What I found interesting was the back story behind the development of the normal distribution and the factors that Haldane and Nelson believe help explain why it came to be so ubiquitous.
A Short History of Normality
(a) Normality in Physical Systems
“By simply taking repeat samplings, the workings of an uncertain and mysterious world could seemingly be uncovered”.
- Gauss … observed that the distribution of estimates varied widely, but tended to cluster around a mean with symmetry either side. He also forged a link between the normal curve and a particular approach to statistical inference – the “least squares” method.
- In 1810, in parallel to Gauss’s work, Laplace … showed mathematically that the sum of a large number of mutually independent, identically distributed random variables is approximately normally distributed; the first appearance of what is now known as the central limit theorem.
“To scientists seeking to explain the world, the attraction of the normal curve was obvious. It provided a statistical map of a physical world which otherwise appeared un-navigable. It suggested regularities in random real-world data. Moreover, these patterns could be fully described by two simple metrics – mean and variance. A statistical window on the world had been opened.”
(b) Normality in Social Systems
“In the 18th century, normality had been formalised. In the 19th century, it was socialised.”
“Up until the late 19th century, no statistical tests of normality had been developed. Having become an article of faith, it was deemed inappropriate to question the faith.”
As Hacking put it, “thanks to superstition, laziness, equivocation, befuddlement with tables of numbers, dreams of social control, and propaganda from utilitarians, the law of large numbers became a synthetic a priori truth. We were all Gaussians now.”
(c) Normality in Economic and Financial Systems
“Early models of the economic system developed by Classical economists were qualitative and deterministic. This followed the tradition in Newtonian physics of explaining the world using Classical deterministic laws. Jevons, Walras, Edgeworth and Pareto “transmuted the physics of energy into the social mechanics of utility” (Mirowski (1989)).”
“… in the early part of the 20th century, physics was in the throes of its own intellectual revolution. The emergence of quantum physics suggested that even simple systems had an irreducible random element. In physical systems, Classical determinism was steadily replaced by statistical laws. The natural world was suddenly ruled by randomness. Economics followed in these footsteps, shifting from models of Classical determinism to statistical laws.”
“The Cowles Foundation … pioneered the development of large-scale macro-econometric models, including for policy analysis. At the core of the macro-economic models developed by the Cowles Foundation were two features: least-squares methods for estimation of the economy’s propagation mechanisms and normality of the random impulses to this system. These two elements were in principle separable – the least squares method does not in fact rely on any distributional assumption about errors. But in the years that followed, econometric estimation and normality became inseparable. As evidence of that, tests for normality began to be applied to econometrically-estimated models in the 1970s. Tellingly, these tests were used as a diagnostic check on the adequacy of the model. A finding of abnormality in the errors, in economics as in psychiatry, was assumed to imply a failure of the model, a disease requiring treatment. As in the natural sciences in the 19th century, far from being a convenient statistical assumption, normality had become an article of faith. Normality had been socialised.”
“The Arrow-Debreu model thereby allowed an explicit price to be put on risk, while ignoring uncertainty. Risky (Arrow) securities could now be priced with statistical precision. These contingent securities became the basic unit of today’s asset pricing models.”
“Whether by accident or design, finance theorists and practitioners had by the end of the 20th century evolved into fully paid-up members of the Gaussian sect.”
Assessing the Evidence
- In a Gaussian world, the probability of large events decays exponentially with their size, making large events increasingly rare at a rapid rate. Under power laws, these large events are much more likely. Many of the concepts familiar from a Gaussian world are turned on their head.
- Under the normal distribution means and variances are all that matter. For power laws with sufficiently fat tails, the mean and variance may not even exist.
“In consequence, Laplace’s central limit theorem may not apply to power law-distributed variables. There can be no “regression to the mean” if the mean is ill-defined and the variance unbounded. Indeed, means and variances may then tell us rather little about the statistical future. As a window on the world, they are broken.”
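The tail arithmetic behind this contrast can be made concrete with a short sketch (my own illustration, not from the paper): a Gaussian tail probability decays like exp(-k²/2), while a power-law tail decays only polynomially, so events that are effectively impossible under normality remain quite plausible under a power law.

```python
import math

# Tail probability P(Z > k) for a standard normal, via the complementary
# error function: P(Z > k) = 0.5 * erfc(k / sqrt(2)).
def normal_tail(k):
    return 0.5 * math.erfc(k / math.sqrt(2))

# Tail probability for a Pareto power law with x_min = 1: P(X > k) = k ** -alpha.
# (For alpha <= 2 the variance of this distribution is infinite, and for
# alpha <= 1 even the mean is -- the sense in which moments "may not exist".)
def pareto_tail(k, alpha=3.0):
    return k ** -alpha

for k in (2, 5, 10):
    print(f"k={k}: normal {normal_tail(k):.2e}, power law {pareto_tail(k):.2e}")
```

At k = 10 standard deviations the Gaussian tail is of order 1e-23 while the power-law tail is of order 1e-3: an event that normality labels "once in the age of the universe" is a routine occurrence under fat tails.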
Explaining Fat Tails
“So what explains these fat-tailed outcomes in natural and social systems?
One factor holds the key – interactions. The central limit theorem is predicated on the assumption of independence of observations. In complex systems, natural and social, that assumption is almost certain to be violated. Systems are systems precisely because they are interdependent. In a nutshell, that is why so few systems behave normally in the statistical sense. This interaction can take a variety of forms.”
Haldane identifies the following drivers behind non-normal outcomes:
- Non-linear dynamics
- Self-organised criticality
- Preferential attachment
- Highly-optimised tolerance
(a) Non-Linear Dynamics
(b) Self-Organised Criticality
“The build-up to the financial crisis provided another example of self-organised criticality. A competitive search for yield among financial firms caused them to increase risk to hit return targets. The sand pile of returns grew ever-higher as the system self-organised on a high-risk state. Having reached this critical state, a small addition of risk – a single sub-prime grain of sand – was sufficient to cause the whole edifice to collapse in an uncontrolled cascade (Haldane (2012a)).”
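The sand pile metaphor can be made literal with a toy Bak-Tang-Wiesenfeld sandpile (a standard textbook illustration of self-organised criticality, not code from the paper): drop grains one at a time, topple any site holding four grains, and the resulting avalanche sizes are wildly unequal, mostly nothing, occasionally a system-wide cascade.

```python
import random

random.seed(1)
N = 20                       # 20 x 20 grid of sand columns
THRESHOLD = 4                # a column topples once it holds 4 grains
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at random, topple to stability, return avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < THRESHOLD:
            continue
        grid[x][y] -= 4                  # topple: shed one grain per neighbour
        size += 1
        if grid[x][y] >= THRESHOLD:      # still over threshold: topples again
            unstable.append((x, y))
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:    # grains at the edge fall off
                grid[nx][ny] += 1
                if grid[nx][ny] >= THRESHOLD:
                    unstable.append((nx, ny))
    return size

avalanches = [drop_grain() for _ in range(50_000)]
print(max(avalanches), sorted(avalanches)[len(avalanches) // 2])
```

After an initial build-up the pile sits at its critical state: the median avalanche is zero or a handful of topplings, yet the largest can run to hundreds. Identical grains, wildly non-normal outcomes.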
(c) Preferential Attachment
Economists typically assume that choices are made rationally based on individual preferences and the wisdom of the crowd, but “preferential attachment” offers an alternative path in which choices follow social patterns that may or may not have intrinsic merit.
“Neither non-linear dynamics nor self-organised criticality rely on human behaviour or social interaction. Introducing human behaviour is likely to make for stronger interactions within the system, further fattening the tail. Social networks, unlike physical or biological networks, are in many ways defined by these interactions. Without them, the network simply would not exist.”
“Social networks, be it school classrooms, churches, pubs or the world wide web, have been extensively studied (Jackson (2010)) … exhibit some common topological features. For example, most have a large number of poorly-connected agents and a relatively small number of highly-connected agents. Why so? One explanation is so-called preferential attachment.” “Preferential attachment explains the distribution of web-links, academic citations and Facebook friends (Barabasi and Albert (1999)). It explains the distribution of city sizes (Zipf’s Law). It also explains the outcomes from strategic games, for both adults (nuclear proliferation) and children (paper/scissors/stone). All are well-known to be power law distributed.”
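Preferential attachment is easy to simulate (a minimal Barabasi-Albert-style sketch of my own): grow a network one node at a time, attach each newcomer to an existing node with probability proportional to its current degree, and a handful of heavily connected hubs emerges alongside a mass of poorly connected nodes.

```python
import random

random.seed(7)

# Each edge endpoint appears once in `targets`, so a uniform draw from it
# picks an existing node with probability proportional to its degree.
targets = [0, 1]                 # start from a single edge between nodes 0 and 1
degree = {0: 1, 1: 1}
for new_node in range(2, 20_000):
    partner = random.choice(targets)       # "rich get richer" attachment
    degree[new_node] = 1
    degree[partner] += 1
    targets.extend([new_node, partner])

ranked = sorted(degree.values(), reverse=True)
hub_share = sum(ranked[:5]) / sum(ranked)
print(ranked[:5], round(hub_share, 4))
```

The top five hubs, 0.025% of the nodes, end up holding a share of the links dozens of times larger than a uniform allocation would give them; the degree distribution this process generates is power law, not normal.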
“These types of preferential attachment have a history in economics too. Keynes viewed the process of forming expectations as more beauty pageant than super-computer (Keynes (1936)). Agents form their guess not on an objective evaluation of quality (Stephen Fry) but according to whom they think others might like (Kim Kardashian). Stephen Fry has just over 4 million Twitter followers. Kim Kardashian has 15 million. This sheep-like logic makes for multiple equilibria, as expectations herd themselves into different pens. Some of these equilibria may be bad ones. The classic example in finance is the Diamond and Dybvig (1983) bank run. If depositors believe others will run, so will they. Financial unpopularity then becomes infectious.”
(d) Highly-Optimised Tolerance
Man-made management and control systems can also create the potential for non-linear, chaotic outcomes. This is especially so where the system is built on a degree of control and precision that the underlying environment does not justify. Greg Ip’s book Foolproof focuses on these kinds of effects.
Some systems organise themselves into a critical state; others are nudged into doing so by the hand of man. In other words, in some systems criticality may be man-made. These are often described as systems of highly-optimised tolerance (Carlson and Doyle (2002)).
“… outcomes, where critical states are generated by human intervention, were also a feature of the financial crisis. Pre-crisis, regulators set capital ratios based on an assessment of the riskiness of banks’ assets. These assessments were imperfect. With hindsight, assets that were underweighted in risk terms (trading book assets, sovereign debt) encouraged banks to invest in them. The financial sector organised itself on a critical, high-risk state due to well-meaning, but imperfect, regulation.”
“… what can be done to better recognise and manage non-normalities?”
(a) Non-Normality in Economics and Finance
“Taking a step forward may require economics and finance to first take a step back. In 1921, Frank Knight drew an important distinction between risk on the one hand and uncertainty on the other (Knight (1921)). Risk arises when the statistical distribution of the future can be calculated or is known. Uncertainty arises when this distribution is incalculable, perhaps unknown.”
Many of the biggest intellectual figures in 20th century economics took this distinction seriously. Indeed, they placed uncertainty centre-stage in their policy prescriptions. Keynes in the 1930s, Hayek in the 1950s and Friedman in the 1960s all emphasised the role of uncertainty, as distinct from risk, when it came to understanding economic systems. Hayek criticised economics in general, and economic policymakers in particular, for labouring under a “pretence of knowledge” (Hayek (1974)).
Yet it is risk, rather than uncertainty, that has dominated the economics profession for much of the past 50 years. By assuming future states of the world were drawn from known distributions, Arrow and Debreu enabled risk to be priced with statistical precision and uncertainty to be conveniently side-stepped. Uncertainty was, quite literally, ruled out of the equation. But if economic and financial systems operate on the border between order and disorder, ignoring uncertainty is deeply unrealistic.
In response to the crisis, there has been a groundswell of recent interest in modelling economic and financial systems as complex, adaptive networks. For many years, work on agent-based modelling and complex systems has been a niche part of the economics and finance profession. The crisis has given these models a new lease of life in helping explain the discontinuities evident over recent years (for example, Kirman (2011), Haldane and May (2011)).
- Quetelet’s l’homme moyen is replaced by interactions among non-representative agents.
- The single, stationary equilibrium gives way to Lorenz-like multiple, non-stationary equilibria.
- Frisch-Slutsky linearity is usurped by sand pile-style non-linear tipping points.
(b) Non-Normality and Risk Management
“Taken together, non-normality suggests the need for a fairly fundamental rethink of the core risk management tools currently used by many financial firms. That includes, importantly, models used to set regulatory capital requirements. Even post-crisis, too many of these models remain at risk of being misled by normality, fooled by randomness. That was a key fault-line during the crisis and, as recent experience attests, remains a key fault-line today.”
(c) Non-Normality and Systemic Risk
Haldane and Nelson wrap up by considering what a “non-normal” world means for the bodies charged with regulating and supervising the financial system. One idea is to collect the data required to monitor and model the moving pieces of that system, with weather forecasting offered as an example of the kind of progress that might be achieved.
“Some of the tail risks facing financial firms are genuinely difficult to calibrate accurately. That is because they are created endogenously within the system as a result of the behaviour of other participants (Danielsson et al (2009)). Because those behaviours are unobservable, so too are the tail risks facing individual banks. That is a potentially serious risk management gap. This gap can most obviously be filled by some systemic oversight agency, able to monitor and potentially model the moving pieces of the financial system. Pre-crisis, there were few, if any, such systemic regulatory bodies in place. But over the past few years, a number have emerged charged with just this task – the Financial Stability Oversight Council (FSOC) in the US, the European Systemic Risk Board (ESRB) in Europe and the Financial Policy Committee (FPC) in the UK, to name three. One useful role these bodies can play is to provide a guide to the contours of systemic risk – a systemic risk map. This map could provide a basis for risk management planning by individual financial firms. As in weather forecasting, the systemic risk regulator could provide early risk warnings to enable defensive actions to be taken. Indeed, the evolution of weather forecasting may provide useful lessons on the directions finance might take – and some grounds for optimism.”
Finance could usefully follow in these footsteps. There are already steps underway internationally to widen and deepen the array of financial data available to systemic risk regulators, filling gaps in the global network map. As in weather forecasting, this would help improve estimates of the initial conditions of the financial system. And as in weather forecasting, it is important these data are captured in a common financial language to enable genuinely global maps to be drawn (Ali, Haldane and Nahai-Williamson (2012)). These data then need to be brought together using a set of behavioural models of the economic and financial system. Economics does not have the benefit of meteorologists’ well-defined physical laws. But by combining empirically-motivated behavioural rules of thumb and balance sheet constraints, it should be possible to begin constructing fledgling models of systemic risk.
Navigating the contours of systemic risk is one thing. Re-profiling these contours is quite another. International regulators have only recently begun the task of calibrating regulatory rules with an eye to systemic, as distinct from institution-specific, risk. There is a considerable distance to travel. Regulatory rules of the past sought to reflect risk. Regulatory rules of the future will need to seek to reflect uncertainty.
That calls for a quite different, and sometimes seemingly perverse, approach. Under uncertainty, many of our intuitive regulatory rules of thumb are turned on their head: slower can be faster, less can be more, slack can be tight. Instinctively a complex financial system seems likely to require complex control rules. And under risk, that intuition is roughly right. Under uncertainty, however, it is precisely wrong. Then, the optimal control rule is typically a simple one (DeMiguel et al (2009)). Less is more (Gigerenzer and Brighton (2008)).
The reason less can be more is that complex rules are less robust to mistakes in specification. They are inherently fragile. Harry Markowitz’s mean-variance optimal portfolio model has informed millions of investment decisions over the past 50 years – but not, interestingly, his own. In retirement, Markowitz instead used a much simpler equally-weighted asset approach. This, Markowitz believed, was a more robust way of navigating the fat-tailed uncertainties of investment returns (Benartzi and Thaler (2001)).
The mainstay of regulation for the past 30 years has been increasingly complex estimates of banks’ capital ratios. These are prone to problems of highly-optimised tolerance.
In part reflecting that, regulators will in future require banks to abide by a far simpler backstop measure of the leverage ratio. Like Markowitz’s retirement portfolio, this equally-weights the assets in a bank’s portfolio. Like that portfolio, it too will hopefully be more robust to fat-tailed uncertainties.
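The robustness of equal weighting can be sketched numerically (a toy experiment of my own, in the spirit of DeMiguel et al, with uncorrelated assets so that mean-variance weights reduce to mean-over-variance): when every asset has the same true mean and volatility, 1/N is the genuinely optimal portfolio, yet an optimiser fed noisy estimates confidently tilts away from it and underperforms out of sample.

```python
import random
import statistics

random.seed(3)
N_ASSETS, EST_WINDOW, OOS_WINDOW, TRIALS = 10, 60, 120, 200

def sharpe(returns):
    return statistics.fmean(returns) / statistics.pstdev(returns)

mv_scores, ew_scores = [], []
for _ in range(TRIALS):
    # All assets share the same true mean and volatility, so the truly
    # optimal portfolio IS 1/N; the optimiser only sees noisy estimates.
    est = [[random.gauss(0.05, 0.2) for _ in range(EST_WINDOW)] for _ in range(N_ASSETS)]
    oos = [[random.gauss(0.05, 0.2) for _ in range(OOS_WINDOW)] for _ in range(N_ASSETS)]

    # Mean-variance weights for uncorrelated assets: w_i proportional to
    # mean_i / var_i, normalised by gross exposure so shorts don't cancel it.
    raw = [statistics.fmean(r) / statistics.pvariance(r) for r in est]
    scale = sum(abs(w) for w in raw)
    mv_w = [w / scale for w in raw]
    ew_w = [1 / N_ASSETS] * N_ASSETS

    mv_scores.append(sharpe([sum(w * a[t] for w, a in zip(mv_w, oos)) for t in range(OOS_WINDOW)]))
    ew_scores.append(sharpe([sum(w * a[t] for w, a in zip(ew_w, oos)) for t in range(OOS_WINDOW)]))

print(round(statistics.fmean(mv_scores), 3), round(statistics.fmean(ew_scores), 3))
```

The estimation error in the means is enough to drag the "optimised" portfolio's out-of-sample Sharpe ratio below the naive 1/N rule's: the complex rule is fragile to mistakes in specification, exactly as the passage argues.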
A second type of simple, yet robust, regulatory rule is to impose structural safeguards on worst-case outcomes. Technically, this goes by the name of a “minimax” strategy (Hansen and Sargent (2011)). The firebreaks introduced into some physical systems can be thought to be playing just this role. They provide a fail-safe against the risk of critical states emerging in complex systems, either in a self-organised manner or because of man-made intervention. These firebreak-type approaches are beginning to find their way into the language and practice of regulation.
Finally, in an uncertain world, fine-tuned policy responses can sometimes come at a potentially considerable cost. Complex intervention rules may simply add to existing uncertainties in the system. This is in many ways an old Hayekian lesson about the pretence of knowledge, combined with an old Friedman lesson about the avoidance of policy harm. It has relevance to the (complex, fine-tuned) regulatory environment which has emerged over the past few years.
The rules we actually have are pretty complex; a simpler rule would make the system more transparent and intuitive.
And the argument can be taken one step further. Attempts to fine-tune risk control may add to the probability of fat-tailed catastrophes. Constraining small bumps in the road may make a system, in particular a social system, more prone to systemic collapse. Why? Because if instead of being released in small bursts pressures are constrained and accumulate beneath the surface, they risk an eventual volcanic eruption. Taleb and Blyth (2011) apply this line of logic to explain the build-up and after-effects of the Arab Spring.
Comment: Apart from Taleb, this is also one of the central arguments that Greg Ip makes in his book Foolproof.
“Normality has been an accepted wisdom in economics and finance for a century or more. Yet in real-world systems, nothing could be less normal than normality. Tails should not be unexpected, for they are the rule. As the world becomes increasingly integrated – financially, economically, socially – interactions among the moving parts may make for potentially fatter tails. Catastrophe risk may be on the rise. If public policy treats economic and financial systems as though they behave like a lottery – random, normal – then public policy risks itself becoming a lottery. Preventing public policy catastrophe requires that we better understand and plot the contours of systemic risk, fat tails and all. It also means putting in place robust fail-safes to stop chaos emerging, the sand pile collapsing, the forest fire spreading. Until then, normal service is unlikely to resume.”