“The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”, by Richard Bookstaber (2017)

This webpage is a tool for me to dig deeper into the book. It contains some extracts from the book coupled with linking comments and/or observations. In places I have highlighted text in the extracts that I thought captured key ideas.

Section I: Introduction

1: Crises and Sunspots

Bookstaber starts with a recap of some of the history of the models that economists have developed to help them understand what is going on in the economy.

“Our standard economic model, the neoclassical model, was forged in Victorian England during a time of industrial and economic revolutions—and the crises and the cruel social and economic disparities that came with them. This economic approach arose because the classical political economy of Adam Smith and David Ricardo failed in this new reality. The neoclassical model was championed by the Englishman William Stanley Jevons, who experienced the effects of these crises firsthand, and was prepared to bring new tools to the job. Jevons was the first modern economist, introducing mathematics into the analysis and initiating what became known as the marginalist revolution—a huge leap forward that reshaped our thinking about the values of investment and productivity. Nonetheless, despite all the areas in which Jevons’s approach improved our thinking, the economic model he originated still failed to predict or elucidate crises.”

He argues that an understanding of how we arrived at the standard economic approach applied today is useful for understanding its limitations. The economic revolution outlined above was driven, he argues, by the railroad. This was the disruptive technology of the time, analogous to the digital revolution of our time.

Railroads were also the source of repeated crises. Then as now, there was more capital chasing the dreams of the new technology than there were solid places to put it to work. And it was hard to find a deeper hole than the railroads. Many of the railroad schemes were imprudent, sometimes insane projects, the investments often disappearing without a trace.

In 1848, in the midst of this revolution and its cycle of crises, the great economist and intellectual John Stuart Mill published his Principles of Political Economy, a monument to the long and rich tradition of classical political economy of Adam Smith, Jean-Baptiste Say, Thomas Robert Malthus, and David Ricardo. With this publication, economics reached a highly respectable, congratulatory dead end.

But cracks began to emerge in the pillars of Mill’s theory over the next two decades. Mill’s economics failed to incorporate the changes introduced by the Industrial Revolution. Labour was at the heart of Mill’s economic theory, but labour was becoming less important in explaining value and production.

The Birth of Modern Economics

Bookstaber identifies economics as moving in two directions in the middle of the 19th century: one based on historical analysis, highlighting the importance of capital and human factors; the other more mathematical, treating the human side more abstractly.

“Industrial age economics moved away from Mill in two directions. The one traveled by Marx, based on historical analysis and with a focus on the human consequences of the dominance of capital, fomented revolution that would engulf the world. The other, based on mathematics, emulated the mechanics of the natural sciences while ignoring the human aspect completely, forming the foundation for today’s standard economic model, that of neoclassical economics. This was the way pushed forward by William Stanley Jevons.”

“Jevons not only brought mathematical rigour to the field but also was the first economist to focus on the sources of economic crises … He believed that if economics could not explain market crises … then it was not a complete theory”. From Bookstaber’s perspective, Jevons was correct to recognise the importance of being able to explain market crises but his emphasis on mathematical rigour took him, and much of the economic profession, down the wrong path.

“Economic theory asserts a level of consistency and rationality that not only leaves the cascades and propagation over the course of a crisis unexplained but also asserts that they are unexplainable. Everything’s rational, until it isn’t; economics works, until it doesn’t. … The dominant model postulates a world in which we are each rolled up into one representative individual who starts its productive life having mapped out a future path of investments and consumption with full knowledge of all future contingencies and their likelihood. In this fantasy world, each of us works to produce one good and conveniently—because who wants to worry about financial crises?—lives in a world with no financial system and no banks!”

Bookstaber’s view is that traditional economic theory has taken a path “bound by its own methods and structure” which renders it incapable of understanding financial crises.

“our path cannot be determined with mathematical shortcuts; we have to follow the path to see where it leads …This book explores what it would mean to follow the path to see where it leads. It provides a nontechnical introduction to agent-based modeling, an alternative to neoclassical economics that shows great promise in predicting crises, averting them, and helping us recover from them. This approach … draws on four concepts that have a technical ring but are eminently intuitive: emergent phenomena, ergodicity, radical uncertainty, and computational irreducibility.”

2: Being Human

“We’ve all learned from … crises. We change our strategies, throw out some financial instruments and cook up some new ones. So each crisis really is different. And as we dig out from one, we are sowing the seeds for the next.

Yet the regulators and academics always seem to be fighting the last war. After 2008, all we talked about was reducing bank leverage and coming up with new risk measures based on bank leverage and whatever else. But I doubt it will be bank leverage that hits us over the head the next time around.”

“Look at any business, talk to anyone you know, and you will see prudent, thoughtful actors. But look at the sum total of their actions, and sometimes it will seem without rhyme or reason, bordering on chaos. The sum of individually rational actions can be the genesis of a crisis.”

“Strange things happen during a crisis. Economics 101 tells you that when prices drop, more buyers will reveal themselves. What it doesn’t tell you is that in a crisis, as prices drop, you have more sellers. Not that everybody wants to sell; some are forced to. Others who would buy at the bargain price bide their time, staying on the sidelines.

Finance 101 tells you that you reduce risk by diversifying and hedging. But in crises the markets, usually rich and varied, governed by many factors, fuse, plasmalike, into a white-hot ball of risk.”

The Four Horsemen of the Econopalypse

“Social and economic interactions, coloured by experience, are parts of human nature that, when joined together, create complexity that exceeds the limits of our understanding. … [Bookstaber argues] these are the reasons for using agent-based models, models that allow for individuals who are each plotting their own course, making adjustments along the way, and affecting the world and others through their actions. Agent-based models do this by applying the simulation approach that is rooted in the analysis of complex and adaptive systems. These are models that respect our very human limits.”

Bookstaber identifies four broad phenomena he argues are endemic to financial crises:

  • Emergent phenomena.
    “When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence”.
  • Non-ergodicity.
    “An ergodic process … is one that does not vary with time or experience.
    Our world is not ergodic—yet economists treat it as though it is.”
  • Radical uncertainty.
    “Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions.”
  • Computational irreducibility.
    “There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived.”

Modeling Crises

“If we have a level of complexity that is computationally irreducible, then imposing simplifications and regularity assumptions that make it suitable for mathematical machinery risks assuming away the essence of the problem.

To deal with these limitations we need an approach that allows us to follow the path rather than rely on mathematical shortcuts, to extract from the individual to the system level the behavior that emerges, and to do so without relying on stable probabilities. The methods to do this, methods that are at the core of complexity science, are computer simulations. And the specific application of those simulations to problems like this is known as agent-based modeling.”

“The problem suggests the answer: We must start with models of individual, heterogeneous agents and allow them to interact. We must allow that interaction to alter their behavior and to alter their environment. We must follow the path of the individuals from the start without trying to find shortcuts in the process. We also must monitor the models for emergent phenomena. Agent-based modeling is the approach that meets these conditions.”

Bookstaber describes his book as “my manifesto for financial crises, a declaration that the neoclassical economic theory has failed and the new paradigm of agent-based economics may succeed.”

Section II: The Four Horsemen

Ch 3 – Social Interactions and Computational Irreducibility

“A computationally irreducible problem is one without mathematical shortcuts, where the only way to determine the outcome is to perform each step of the program…. By contrast, a computationally reducible system is one that can be described by mathematical formulas that give the outcome at any chosen instant of time without working through all the time steps”

Looking back into the centuries of scientific progress, a manifest characteristic of the great theoretical triumphs has been finding computational shortcuts that help understand how systems behave … The primary tool for executing these shortcuts … is mathematics, and mathematics deductively applies a generally axiomatic structure, a structure that begins with the statement of laws.
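The contrast can be made concrete with a toy example (my illustration, not the book's): compound growth is a computationally *reducible* system, because a closed-form formula gives the state at any instant without walking through the steps.

```python
# A computationally reducible system: compound growth.
def step_forward(balance, rate, steps):
    for _ in range(steps):              # work through every time step
        balance *= 1 + rate
    return balance

def closed_form(balance, rate, steps):
    return balance * (1 + rate) ** steps   # jump straight to any instant

# Both give the same answer; the closed-form shortcut is exactly what a
# computationally irreducible system denies us.
print(step_forward(100.0, 0.05, 30))
print(closed_form(100.0, 0.05, 30))
```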

Bookstaber then outlines two examples of seemingly simple problems that are computationally irreducible:

1) the three-body problem, and

2) Conway’s “Game of Life”
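The Game of Life is small enough to sketch in a few lines (a minimal sketch of the standard rules, not code from the book). The point is that there is no shortcut formula for generation N: the only way to learn the state is to compute every intermediate generation.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbours,
    # or 2 live neighbours and is itself currently live.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four generations the glider reappears shifted by (1, 1).
print(sorted(state))
```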

“In view of how even the simplest cases can lead to computational irreducibility, how can it fail to be the case that the path of human—and thus economic—interaction and the implications of human experience are computationally irreducible? If we cannot successfully deal with the computational irreducibility head on, we are missing a key aspect of humanity.”

“Economists push for simple and elegant models in a world that is admittedly not simple or elegant … because that is what is needed to get the math to work.”

“The idea of irreducible systems is very simple and powerful at the same time. If we cannot determine the outcome of so simple an artificial world as Life without actually running it, why do we think we can deal with so complex and interconnected a system as an economy in the throes of crisis without doing the same?”

4: The Individual and the Human Wave: Emergent Phenomena

Emergent phenomena occur when the overall effect of individuals’ actions is different from what the individuals are doing.

“A financial crisis is an emergent phenomenon that has broken through the market’s containment vehicles.”

“The emergent phenomena we are concerned about … are the ones where the rules, which seem sensible and generally work well at the individual level, lead to unexpected chaos at the system level.”

“Emergence explains why we can all be doing what we think makes sense and indeed be doing what manages our risk, and yet have the result be disastrous … what is locally prudent can be globally imprudent.”
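A toy sketch (my illustration, not a model from the book) shows how locally prudent rules can be globally imprudent: give each fund a stop-loss floor at which its risk rule says “sell”, and let each forced sale push the price down. Individually sensible rules, summed, produce a self-reinforcing crash.

```python
def simulate(price, floors, impact=0.02):
    """Each forced sale knocks the price down by `impact`; return the price path."""
    path, sold = [price], set()
    while True:
        # Funds whose stop-loss floor has been breached are forced to sell.
        sellers = [f for f in floors if f not in sold and price < f]
        if not sellers:
            return path
        sold.update(sellers)
        price -= impact * len(sellers)   # market impact of the forced selling
        path.append(price)

# A small shock (price falls to 0.95) breaches the highest floor, whose
# selling breaches the next floor, and so on down the chain.
path = simulate(0.95, floors=[0.96, 0.94, 0.92, 0.90])
print(path)
```

No individual fund intends a crash; the cascade is a property of the system, not of any agent's rule.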

5: Context and Ergodicity

To know if we are in an ergodic world, we can ask a simple question: Does history matter? Does it matter how we got to where we are? In most physical processes it doesn’t.

….The mechanical processes that are the drivers of our physical world are ergodic. … But if we view our experience and context as mattering, the product of human nature is not ergodic; the future is not an extrapolation of the past.

With an ergodic process, all samples are pulled from the same population, and a sufficiently long sample can be used to describe the underlying characteristics of the process, such as its mean or variance, and a sample drawn from any time period will do just as well and give the same results as any other.

“Economics assumes that the world is ergodic. This is a very helpful assumption because then you can take expected values that will stand over the long run. You don’t have to know anything about history because what happens next will be determined just by some fixed probability distribution”

Humans are not ergodic, however. We move through the world along a single path, and we get only that one path. Where we are on that path, our experiences, our interactions, the view we have of the world at that moment all determine the context for our actions. That path is not repeatable; we are not taking draws from a distribution. We cannot jump on board at any arbitrary point and see the same story unfold from that point onward.
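A hedged illustration of path dependence (mine, not Bookstaber's): the Pólya urn is a classic non-ergodic process. Start with one red and one blue ball; each drawn ball is returned along with another of the same colour, so early draws get locked in and every history settles on its own long-run mix.

```python
import random

def polya_run(draws, seed):
    """Run one history of a Polya urn; return the final fraction of red balls."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += 1    # drew red: return it with another red
        else:
            blue += 1   # drew blue: return it with another blue
    return red / (red + blue)

# Each seed is a different "life lived": the runs do not converge on a
# common red fraction, so no single path reveals the ensemble.
print([round(polya_run(10_000, seed), 3) for seed in range(5)])
```

Contrast a coin toss, where any sufficiently long sample recovers the same 50/50 distribution: there, history does not matter; here, it is everything.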

Logic and Context

“The point is that what we do is context-specific; it is not driven by logic. Logic would have us act consistently, but what we do depends on the world we see and the situation we are in. 

We are not computers and our preferences are not simple functions. A mathematician entering the world of economics begins with a set of axioms. That is just the way mathematics works. And one of those axioms—or one of the assumptions that is necessary to take an axiomatic approach—is that people think like mathematicians. In starting this way, neoclassical economists fail to consider how people actually think, much less how that thinking is intertwined with their environment and the context of their decisions.”

When we delve into how people actually think—work that naturally originated in psychology rather than economics—we find that people employ heuristics: rules of thumb that do not look at all like optimization.

Crises and Context

How has economics gotten so far if it ignores the essential nature of humanity, treating people as well-programmed robots? It might be that it has not gotten as far as it appears. Much of economic analysis is pegged to the time frame of our day-to-day world, where people do not change much. But with a crisis, the effects are manifest to a degree that these limits and failures come to the fore, are laid bare, and the failure of economics becomes evident, as opposed to other times, when it is cloaked by ordinariness. Or the failures can be cast off using behavioral economics and irrationality, chalking things up to market anomalies, as a foil.

The change in context is seen by what matters during a crisis compared with during mundane investing. During a crisis what matters is not relative value, the subtleties of relative expected earnings, or constrictions in the supply chain. What matters is liquidity and risk. People dump risky and illiquid assets, and there is a flight to quality (toward assets that are liquid and less risky). Other considerations fall by the wayside.

6: Human Experience and Radical Uncertainty

“unknown unknowns” … In financial circles, the concept goes by the term Knightian uncertainty to honor the economist who recognized it. More broadly, it is called radical uncertainty. It is meant to describe surprises—outcomes or events that are unanticipated, that cannot be put into a probability distribution because they are outside our list of things that might occur. And so they cannot be modeled.

The Limits to Knowledge

There are some things that we simply cannot know, that define limits to our knowledge … Some relate to logic: we cannot know because we cannot create a consistent and self-contained system of mathematics (the impossibility theorem); we cannot know because there are problems that can be posed algorithmically that are not decidable (the halting problem). Others relate to processes and the physical world: we cannot know because in interacting with the world we change the world (the uncertainty principle); we cannot know because depicting the evolution of the system requires being precise beyond any physical capability (chaos); we cannot know because our models cannot predict things we actually observe (emergent phenomena). And yet others deal with probability: we cannot know because we cannot establish a probability distribution (ergodicity); we cannot know because we cannot fathom the possible events that might lie ahead (radical uncertainty).

Bookstaber offers examples of the limits to knowledge that arise in self-referential systems: Gödel’s Impossibility Theorem, Turing’s Halting Problem and Heisenberg’s Uncertainty Principle.


Gödel proved that it would be impossible for any system of mathematics to solve every problem. The flavor of Gödel’s basic argument can be captured in the contradictions contained in a schoolboy’s brainteaser. A sheet of paper has the words “The statement on the other side of this paper is true” written on one side and “The statement on the other side of this paper is false” on the reverse. The conflict isn’t resolvable. Or, closer to the point, the following assertion: “This statement is unprovable.” You cannot prove the statement is true, because doing so would contradict it. If you prove the statement is false, then that means its converse is true—it is provable—which again is a contradiction.

The key point of contradiction for these two examples is that they are self-referential.
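Turing's halting problem has the same self-referential flavour, and it can be sketched in code (an informal illustration, not from the book). Suppose an oracle `halts(f)` could always decide whether calling `f()` eventually returns:

```python
def paradox(halts):
    def g():
        if halts(g):       # if the oracle predicts "g halts"...
            while True:    # ...g loops forever;
                pass
        # ...and if it predicts "g loops", g returns immediately.
    return g

# Whatever halts(g) answers about g, g does the opposite, so no general
# halting oracle can exist. As with Godel's sentence, the engine of the
# contradiction is self-reference.
g = paradox(lambda f: False)   # a (wrong) oracle that always says "loops"
g()                            # g promptly halts, refuting that oracle
```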


The uncertainty principle, though broadly applied and draped in metaphysical context, is a well-defined and elegantly simple statement of physical reality—namely, the combined accuracy of a measurement of an electron’s location and its momentum cannot vary far from a fixed value.

Self-Referential Systems, Reflexivity, and the Human Predicament

“Gödel, Turing, and Heisenberg all demonstrate facets of the limits to knowledge that occur from a self-referential system, a system that is being used to prove itself, or a system where an observer is part of what he is observing.”

“This is the essential distinction between building models of human systems as opposed to those of the natural world. What we observe, we change. What we change alters how we observe.”

This is the conceptual underpinning for a profound concept introduced to economics thirty years ago: reflexivity. It is also one that has been resoundingly ignored by the field.

“reflexivity is a concept related to the cycle of self-referential feedback, such as between our beliefs and our actions. It has a range of connections within philosophy, but the specific notion of reflexivity in Soros’s terms is where observations of the economy lead to ideas that change behavior, which in turn changes the economy itself.”

His application of the concept is based on two principles. The first is the principle of fallibility, which is that any individual’s “perspective is bound to be either biased or inconsistent or both.” The second is the principle of reflexivity … namely that “these imperfect views can influence the situation in which they are related through the actions of the participants.”

“Reflexivity creates deep problems for those who are determined to model our actions in a scientific structure founded on timeless and universal axioms. There is fundamental indeterminacy and uncertainty that cannot be breached.”

Inexperience and Radical Uncertainty

As the Austrian philosopher Karl Popper put it, “Quite apart from the fact that we do not know the future, the future is objectively not fixed. The future is open: objectively open.” We cannot enumerate the states of nature that will arise, much less assign them probabilities.

7: Heuristics: How to Act Like a Human

Having set out his argument for why the world is characterised by radical uncertainty that ultimately frustrates any attempt to reduce it to a neat probabilistic model, Bookstaber turns to what we are to do. His answer is to adopt a heuristic strategy that ignores some information and generates simple rules of thumb. These simple rules do not always produce the optimal, maximum-expected-value outcome, but they are good enough to navigate a wide range of unforeseeable future scenarios.

Heuristics and Optimisation

“A heuristic is a strategy that ignores some information and generates simple rules of thumb. We rely on heuristics to make decisions that do not wait for the computationally irreducible world to unfold and that are robust in the face of radical uncertainty.

“Heuristics are coarse and context-dependent rules rather than general, deductive solutions. Gigerenzer notes, “A heuristic is ecologically rational to the degree that it is adapted to the structure of the environment”. Logic, math, and probability, in contrast to heuristics, are all independent of context.

“Heuristics live within the radical uncertainty rather than assuming it away. In applying coarse and robust rules, they do not try to capture all the nuances of the possible states and their probabilities. They use simple approaches that are robust to changes in states or to new, unanticipated states.”

“This approach turns out to be better because it recognizes an important aspect of our environment that cannot be captured even in a model of constrained optimization under uncertainty. In an environment where things can happen that we cannot anticipate, to which we cannot assign a probability, the best solution is one that is coarse. And being coarse and robust leads to another anomaly for those who are looking through the optimization lens. In a robust and coarse environment, we may ignore some information, even if it is costless to employ.”

“The advantage of taking the route of simple heuristics doesn’t happen because the world is simple. On the contrary, the simplicity of the heuristic is a result of the world being so complex and its risk unfathomable. The point is that in a world where the future is not drawn from the same population as the past, the optimal portfolios are not really optimal. They are rooted in the past where the market is not. So in a sense, these models will almost always be wrong. And if that is true in the normal course of the market, certainly it will be true in a crisis.”
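The portfolio case can be made concrete with a hypothetical sketch (mine, not the book's): a "plug-in" optimiser that weights assets by inverse sample variance estimated from past data, versus the coarse equal-weight (1/N) rule often cited in the Gigerenzer literature, which ignores that history entirely.

```python
import random

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def inverse_variance_weights(histories):
    """Plug-in 'optimiser': weight each asset by its inverse sample variance."""
    inv = [1.0 / sample_variance(h) for h in histories]
    total = sum(inv)
    return [w / total for w in inv]

rng = random.Random(0)
past = [[rng.gauss(0, 0.01) for _ in range(250)],   # looked safe in the past
        [rng.gauss(0, 0.05) for _ in range(250)]]   # looked risky in the past

print(inverse_variance_weights(past))   # piles almost everything into asset 0
print([0.5, 0.5])                       # the coarse 1/N rule: ignore the past
```

If the "safe" asset's volatility regime shifts in a crisis, the optimised weights concentrate risk exactly where the past said it wasn't; the 1/N rule has nothing to recalibrate, which is the sense in which coarseness is robust.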


“The appeal to heuristics does not simply derive from the costs of collecting and processing the data needed for optimization. … 

Rather, heuristics are the robust solution to a world filled with radical uncertainty.

… in the real world the future does not look like the past, especially if the future is a crisis and the past is not. And when the future is markedly different from the past, approaches that are calibrated on past information will miss the mark. The same will be true if we dig down to the foundation of these optimization methods: axioms by nature work in a timeless and universal world. Logic, math, and probability are all context independent, but humans are not.”

Section III: Paradigm Past and Future

8: Economics in Crisis

Interaction and the Representative Agent

“we need to note another assumption at the heart of the economic approach … the representative agent. This enabling assumption makes life possible for the economic profession. Economics does not go out and deal with how people behave as individuals. It assumes that it can aggregate all of us into one representative agent. It is a simplifying assumption ... But this one stops the usefulness of economics before it even gets going. It takes more than one person to create a stampede, and it takes more than one person to unleash the world into the dynamics of a crisis. …. Gathering all the agents in a system into a representative agent—into one giant snowball—stops any meaningful analysis of interactions dead in its tracks.

But the use of a representative agent has been a part of post-Jevons neoclassical economics since the beginning … because it is hard to harness the machinery of mathematical economics without it. If you are going to assume the homogeneity that is implicit in a rational expectations model, where everyone is acting in a lockstep, consistent manner using the same (correct, God-given) model, then you may as well collapse the world into one representative agent.”

Computational Irreducibility: There’s No Formula for the Future

We face a changing world that, in turn, changes the context with which we view the world, and that changes us, again all the more so during periods of crisis. The critical implication is that we cannot plug numbers into a model and solve for the future. We cannot know where we will end up until we take the journey.

Emergence: There’s No Model Anyone Can Follow

We are all agents that interact within the world in nontrivial ways. Agents can change the environment, and other agents will change their actions as a result. Complex dynamics result from those interactions, which can lead to emergent phenomena, where we face the prospect of true surprises.

Due to the complexity of interactions, the behavior of the system cannot be constructed or construed as the aggregate of the models of the individual.

And it is not an issue of having limited information and thus having more error in an estimate, but still having that estimate be right on average. It is a matter of not having a clue of what the end result will be.

Non-Ergodicity: The Relevance of History, or Each Time It’s Different

Paul Samuelson … wrote that if economists hope to remove economics from the realm of history and move it into the “realm of science” we must impose the “ergodic hypothesis.” Ergodicity is at the core of economics because it is the only way we can apply probabilities in a sensible way.

When uncertainty is introduced to economic models, it is done with the formal machinery of probability theory. Probabilities are defined over a known set of possible events. But in the real world, it is not possible simply to assume that probability distributions are the right way to characterize, understand, or explain acts and decisions made under uncertainty.

when economic models (with the rational expectation hypothesis being a good case in point) assume that parameter values on average are the same for the future and the past, we are missing the essence of how our experiences and our continual accumulation of knowledge can change our view of the world, not just the probabilities but the structure and the sense of what is possible.

Keynes wrote, “Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world. It is compelled to be this, because, unlike the typical natural science, the material to which it is applied is, in too many respects, not homogeneous through time.” This is all the more so during crises.

Temporal instability is the bane of some of the most fundamental financial models. Take the capital asset pricing model—for which Sharpe shared the Nobel Prize. Fama and MacBeth (1973) and others report favorable estimates of the capital asset pricing model over a sample that runs until 1965, but it falls apart when the sample is updated to include the 1970s and 1980s. (The solution: add more variables.)

Heuristics: No Optimization, Just Simple Rules

How do we deal with this complexity and uncertainty? As unsatisfying as it may be, models have to be adjusted as often as the agents and the related environment and institutions adjust in unforeseen ways. One feature of the agent-based modeling approach is that making such adjustments is integral to the fabric of the model. This stands in contrast to the deductive approach, where changes at a minimum need to rework the intricately balanced mathematical structure, and at worst require resetting the underlying axioms.

Even economic models with asymmetric information or with the body English of behavioral assumptions do not accept the limits to knowledge that prevent us from laying out the model until it is run.

… our probability theory and statistics can be thrown out the window, because things are not ergodic and are not repeatable.

Neat, Plausible, and Wrong

Bookstaber argues that to understand crises, we must work within the limits of what it is to be human but, more importantly, we must refute:

  • the use of mathematics because the essential problems are computationally irreducible;
  • the notion that we all can be represented by a proxy, which already should be refuted based on the failures of general equilibrium theory;
  • the notion of optimization and the maximization of utility, because of radical uncertainty and the need to use heuristics;
  • the notion of stable preferences, because people live in a non-ergodic world where they change based on their experiences;
  • models using predetermined probabilities, because past behavior cannot then provide a window into future behavior.

H. L. Mencken wrote that there is “always an easy solution to every human problem—neat, plausible, and wrong.” And neoclassical economics has been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a bridge between axiomatic, deductive models and the real world.

Bookstaber also rejects the argument that we behave “as if” we were working towards optimisation strategies.

It deserves repeating that the use of heuristics and the deliberate limits on the use of information as employed in the Gigerenzer worldview are not part of an attempt at optimization, real or “as if.” It is not a matter of starting with optimization and, in some way, determining how to achieve something close to the mathematically optimal solution. It is a different route to decision making, one that, unfortunately for economists and mathematicians, is most likely the way people actually operate.

To understand and manage crises, economists must give up the myth that the economy is a simple mechanical equilibrium system.


Bookstaber argues that economics’ failings are central to its approach to crises.

“Economics operates within the deductive approach, patching up holes as they emerge, whether it be contradictions in utility theory or inconsistencies in the theory of rational choice, while still holding on to its core structure.”…

“This degenerative research program known as economics is crying out for a new paradigm to replace the prevailing one.” ….

“The answer is not more mathematics, more constrained optimization. It is understanding the essence of human behavior, the interactions and experiences, the phenomena that shape the world. And it is respecting these, understanding that they cannot be surmounted, that they must be addressed rather than assumed away.”


The philosopher of science Thomas Kuhn used the phrase “normal science” to describe the work that occurs within the framework provided by a paradigm.

“Normal science involves chipping away at the edges of the known world, adding pieces to the expanding puzzle and smoothing around the edges if a piece does not quite fit.”

“There comes a time, however, when normal science no longer works; problems emerge that cannot be solved within the current paradigm. This leads to a period of “crisis science.”

“A scientific revolution requires both a period of crisis science and a candidate for a new paradigm. The 2008 crisis may be such a stimulus. Agent-based modeling may be the new paradigm.”

9: Agent-Based Models

The new paradigm in Bookstaber’s view is an agent-based modelling approach comprising:

  • Agents
  • Environment
  • Heuristics
  • Interactions
  • Dynamics

A crisis cannot be explained by making adjustments to models that are based on assumptions about the economy seeking equilibrium or by “fat tails” in a loss distribution…

A crisis is not something that is in the design of the model but emerges in a way that is not built into the model. The model has to break; it has to move into a realm that does not follow a predetermined path. Nor is a crisis simply a draw from a fat tail, a bad draw from the urn. It is a draw from a new urn, a new distribution—and one that cannot be pinned down if it is not ergodic, and cannot be predetermined if we have computational irreducibility.

If you want to understand crises, you have to develop a system that can create one. And to create one, you want to note and address these points:
• The dynamics might be computationally irreducible, so follow the paths rather than try to find a mathematical shortcut.
• Context matters, so include the context. And context varies from one person to the next (remember, we are human), so allow for heterogeneity.
• Interaction matters, so allow interaction.
• We interact by means of heuristics, so have the agents’ actions allow for heuristics.
• The interactions can change the environment and lead to emergent phenomena, so be sure the model is constructed in such a way that emergence is not smothered and can be observed.
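The five points above can be made concrete with a toy simulation. The sketch below is my own illustration, not a model from the book, and every name and parameter in it is invented: agents are heterogeneous (each has its own loss threshold), they act by heuristic rather than optimization, their sales feed back into the price, and the only way to know where the cascade ends is to follow the path step by step, a miniature of computational irreducibility.

```python
import random

class Agent:
    """A trader with its own context: a personal loss threshold and holdings."""
    def __init__(self, threshold, holdings):
        self.threshold = threshold   # heterogeneity: agents react at different loss levels
        self.holdings = holdings

    def act(self, price, prev_price):
        # A heuristic, not an optimization: sell a fixed slice if the most
        # recent drop exceeds this agent's personal threshold.
        drop = (prev_price - price) / prev_price
        if drop > self.threshold and self.holdings > 0:
            sale = min(self.holdings, 10)
            self.holdings -= sale
            return sale
        return 0

def simulate(steps=50, n_agents=100, shock=0.05, seed=1):
    random.seed(seed)
    agents = [Agent(random.uniform(0.01, 0.10), 100) for _ in range(n_agents)]
    price = prev = 100.0
    path = [price]
    for t in range(steps):
        if t == 0:
            price *= 1 - shock                               # one exogenous shock starts things off
        sold = sum(a.act(price, prev) for a in agents)       # interaction via the market
        prev, price = price, price * (1 - 0.0001 * sold)     # sales alter the environment
        path.append(price)
    return path

path = simulate()
```

The cascade is nowhere coded explicitly; it emerges from the interaction of the thresholds with the price impact, which is the point of the exercise.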

10: Agents in the Complexity Spectrum

Agent-based modelling is designed to tackle the problem of complexity. In physics, engineering, and computer science, the measurement of complexity falls into one of three camps:

  • the amount of information required to describe the system,
  • the connectedness of its components, and
  • the effects of nonlinearity within the system.

Information and Complexity

A more complex system will be harder to describe or to reconstruct. Something is informationally irreducible if there is no theory or model more compact than the thing itself.

Connectedness and Complexity

The concept of connectedness and complexity measures how one action can affect other elements of a system—that is, it measures the complexity of interactions.

Nonlinearity and Complexity

Nonlinear systems are complex because a change in one component can propagate through the system to lead to surprising and apparently disproportionate effects elsewhere…

The dominant … form of nonlinearity for human systems is not strictly found in the social, organisational, or legal norms we follow, or in how people behave in a given environment; it is in the complexity of the dynamic, or the feedback cycle between these two.

Another important point for finance that makes complexity different from its physical counterparts is that, in finance, complexity is often created for its own sake rather than as a side effect of engineering or societal progress.


… the critical source of complexity we must address when we are operating in the human sphere, the one that comes from the nature of human interaction and experience: reflexivity.

History is rife with examples of bank runs precipitated by depositors’ or investors’ beliefs, whether unfounded or not, that an institution might be in trouble. In 2008, the boss of Lehman Brothers, Richard Fuld, blamed his firm’s implosion on those circumstances. In 1907, fear that the Knickerbocker Trust was foundering caused the entire financial system to become unglued until J. P. Morgan gathered enough capital to hold it together.

Agent-based models operate in a reflexive world … The environment includes the effect of each agent’s action on the other agents. The relationship between reflexivity and agent-based modeling is readily seen by comparing the characteristics of an agent-based model from chapter 9 (agents, environment, heuristics, interactions, and dynamics) with this list by Eric Beinhocker of the essential elements for a model to incorporate reflexivity:
• There is an environment in which agents are each pursuing some objective.
• The agents interact with the environment, and they can alter the environment by their actions through what Soros calls a manipulative function.
• The agents have what Soros calls a cognitive function through which they observe the environment and reassess their position relative to their objective.
• Each agent has a model that relates and coordinates the cognitive and manipulative functions; they have an understanding of the way they alter the environment that relates to their objectives.

Beinhocker notes that although these elements are necessary for reflexivity, they are not sufficient. We need two additional elements, which add further intricacy to the relationship of agent-based models and reflexivity:
• The agent operates in a complex environment. The complexity in the model comes from two directions: the interactions between the various agents, and the nonlinear feedback of the system.
• The heuristic that drives the actions of the agent can change in response to interactions between the agent and its environment.
Thus, there is a feedback between the perception of the environment and the agent’s internal model.
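Beinhocker’s elements map naturally onto code. The toy below is my own sketch with invented numbers, not anything from the book: each agent has a cognitive function that updates an internal model from the observed environment, a manipulative function that acts on that model and thereby moves the price, and a heuristic (the agent’s sensitivity) that itself adapts to experience, the extra ingredient that takes the model from merely adaptive to reflexive.

```python
class ReflexiveAgent:
    def __init__(self, sensitivity):
        self.belief = 0.0               # internal model: estimated price trend
        self.sensitivity = sensitivity  # heuristic parameter, itself adaptive

    def cognitive(self, price, last_price):
        # Cognitive function (world -> mind): observe the environment and
        # update the internal model of the trend.
        trend = price - last_price
        self.belief = 0.8 * self.belief + 0.2 * trend
        # The heuristic itself changes with experience: after observing a
        # down move, the agent trades less aggressively.
        if trend < 0:
            self.sensitivity *= 0.95

    def manipulative(self):
        # Manipulative function (mind -> world): an order that will move the price.
        return self.sensitivity * self.belief

def run(steps=20):
    agents = [ReflexiveAgent(s) for s in (0.5, 1.0, 1.5)]   # heterogeneous agents
    price, last = 99.0, 100.0        # start just after a small negative shock
    for _ in range(steps):
        for a in agents:
            a.cognitive(price, last)
        orders = sum(a.manipulative() for a in agents)
        last = price
        price += 0.1 * orders        # the agents' actions alter the shared environment
    return price

final = run()
```

The feedback loop is closed in the last line of the loop: the price the agents observe next period is the one their own orders just produced.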


Complexity in the information battlefield, the wilful creation of complexity – complexity that is peculiarly human in origin – and the resulting endogenous uncertainty all spell trouble for the foolhardy armed only with the standard model that dominates today. Yet they keep applying it, even to the point of approaching economics as if it were a branch of physics.

Bookstaber offers examples drawn from the world of finance to illustrate his point.

… markets are not physical systems guided by timeless and universal laws. Markets are systems based on gaining an informational advantage, on gaming, on action and strategic reaction, and this is a space that can never be expressed with a fully specified model or with well-delineated possibilities.

Risk management models offer more examples. Once a trader understands how the model works, they will look for ways to trade that are not measured: if a model measures interest rate risk, bet on the shape of the yield curve in an interest-rate-risk-neutral way. In a nuclear plant, you can install an improved valve and the valve will not try to figure out how to make you think it is closed when it is really open.

Complexity and Radical Uncertainty

We can move from simple mechanical systems to complex systems of nonlinear feedback, to complex adaptive systems, where the agents alter their behavior based on changes in the environment, to complex reflexive systems, where the feedback goes in both directions—that is, where the environment changes the behavior of the agents, and where the behavior of the agents also changes the environment—and finally to strategic complexity generated by adversarial thinking such as that manifested by Boyd’s OODA loop for tactics in warfare. So we have a spectrum of complexity along the lines of that in figure 10.1. Reflexivity lies near the end of the spectrum because it deals with the interactions taking place in observing the environment, taking action, and regrouping based on the resulting changes in the environment and the experiences that ensue. Even further along the spectrum is strategic complexity. This is the complexity that is created deliberately as in the case of war.

Computational irreducibility, emergent phenomena, and non-ergodicity will all become more manifest as we move along the spectrum. They are, so to speak, side effects of complexity. We might likewise think of radical uncertainty as a side effect of complexity, so that the further we move along the spectrum the more likely radical uncertainty will become manifest in the dynamics. But given its ethereal nature, we can think of radical uncertainty in other ways as well. We can also think of radical uncertainty as a type of complexity. From the standpoint of computational irreducibility, we might look at radical uncertainty as occupying a position that is beyond the edge of the complexity spectrum, at least beyond the “visible” complexity spectrum that we can observe and analyze. Radical uncertainty is the dark matter of complexity.

Complexity and the End of Theory

This is the nature of economic theory: axioms that are extended out using deductive methods to present a general mathematical approach that attempts to mimic the methods of physics, methods for a mechanical world. And this is where economic theory runs aground: when the world of complexity intrudes and we can’t reasonably use the simplifying assumptions the theory requires; we can’t reasonably view the world as mechanistic, timeless, and universal; and we can’t aggregate everything into a representative agent.

Complexity does intrude. As we march down the path of increasing complexity, one by one the pillars supporting neoclassical economic theory give way. … The deductive, axiomatic methods that underlie neoclassical economics decline in relevance. As we embrace complexity we come to the end of theory.

Section IV: Agent-Based Models for Financial Crises

11: The Structure of the Financial System: Agents and the Environment

My objective in the next chapters … is to show how I think about the financial system from a practical standpoint using an agent-based approach.

I will present the makings of an agent-based model that incorporates the critical components of concentration, leverage, and liquidity. But the first ingredient in an agent-based model is the environment in which the agents interact, and which they affect by their actions.

The Environment

when we get to the modeling of financial crises, it all has to do with plumbing. If you want to follow a crisis, you need to follow the money, and the money flows through the plumbing.

We need to get ourselves a schematic of this plumbing and start from there. For an agent-based approach, or for that matter most any simulation approach, the key is setting it up correctly, which means understanding the structure: where the agents are operating, their environment, their heuristics, and the resulting dynamics, all specific to the financial system.

In this section, Bookstaber starts with a simple one-layer diagram and then moves to a multilayer exploded view of the financial system, which he uses to discuss vulnerabilities that are not apparent in simpler descriptions of the financial system.


Bookstaber uses the example of a blackout in the Italian power system to illustrate some of his ideas

the links between layers lead to a different functional effect, and lead to a transformation from one flow to another. Reformulated in the context of the financial system, risks in one activity become risks to other activities as they propagate and amplify within and, more important, between the different layers. Funding becomes the fuel for acquiring assets, and the assets become the source of collateral, which in turn is the basis for generating the funding. It is not surprising that the addition of these functional relationships and transformations in the types of flows that arise for interlayer connections create vulnerabilities that do not exist in a plain vanilla, single-layer network.

Breaking up the banks is not enough if they are still systemically interconnected

Going from a small set of big banks to a larger set of small banks will not do the trick if those smaller banks still span the layers. There will be many smaller shafts to spread the fire from one layer to the next. Rather, the banks need to be broken up to create firewalls between the layers. We cannot have the full-service banks that provide collateral, funding, and market making all within one structure, no matter what their size.

The Agents and Their Heuristics

Bookstaber’s agent-based approach starts, naturally enough, with agents: bank/dealers, hedge funds, cash providers, securities lenders, and institutional investors.

His agent-based model also highlights two aspects of the market that are critical for crises. One is that banks are multifaceted and can come at the market in a number of ways. The other is the flow of funding from the cash providers through the banks to the users.

Directly or indirectly, nearly all the functions in the financial system draw on funding, and on collateral as well, because in the case of secured funding, the pathways are two-way streets: when there is funding in one direction, there is a flow of collateral in the other. This is why the interest rate and credit markets are so critical to crises. We have seen twenty-point down days for equities without much in the way of long-term effects. The same is not true for the markets that are integral to funding.

Each agent has its own set of heuristics, and understanding these heuristics is an important part of understanding the system.

The actions of the agents based on their heuristics can be broken out into Soros’s cognitive function, where the agent takes in the environment (from the world to the mind), and the manipulative function, where the agent executes on its decisions and alters the environment on that basis (from the mind to the world).


Having laid out the environment through the system map and the agents of the system, the agent-based model explores how the dynamics unfold.

In market crises there are two types of dynamics: asset-based fire sales and funding runs, or funding-based fire sales. And each feeds into the other. Asset-based fire sales focus on the interaction between institutional investors, particularly leveraged investment firms such as hedge funds; their funding sources, notably the bank/dealer’s prime broker; and the asset markets where the forced sales occur….

Funding-based fire sales focus on the interaction of the bank/dealer with its cash providers.

These dynamics are driven by the anvil and hammer of leverage and liquidity. Sudden redemptions can also do the job, but usually it is the effect of leverage that forces selling. And it is illiquidity that causes prices to drop, creating the cascades that turn a simple, one-time shock into something far worse. If the markets are highly liquid, selling can be accommodated without a notable price decline, and there will be no cascades or contagion. If there is little leverage in the market, there will not be a need for substantial deleveraging even in the face of a market event.

Because forced deleveraging leads to asset sales, and asset sales lead to price declines (which then leads to further deleveraging), both leverage and illiquidity are essential components of a market crisis.
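The anvil-and-hammer mechanics can be put into numbers. The toy below is my own illustration under invented parameters, not a model from the book: a leveraged fund takes a one-time price shock, restoring target leverage forces sales, illiquidity turns those sales into further price declines, and the spiral goes another turn. In a deep market the shock stays a one-time event; in a thin one it cascades.

```python
def fire_sale(shock, target_lev, market_depth, equity0=10.0, steps=50):
    """Toy deleveraging spiral with a single leveraged fund."""
    shares = equity0 * target_lev       # price starts at 1.0, so shares == asset value
    debt = shares - equity0
    price = 1.0 - shock                 # the initial, one-time shock
    for _ in range(steps):
        equity = shares * price - debt
        if equity <= 0:                 # the fund is wiped out
            break
        if shares * price / equity <= target_lev + 1e-9:
            break                       # back at target leverage: no forced selling
        # Sell just enough to restore target leverage. Selling at the current
        # price leaves equity unchanged, so target assets = target_lev * equity...
        sell = shares - target_lev * equity / price
        shares -= sell
        debt -= sell * price
        # ...but in an illiquid market the sale itself knocks the price down,
        # which raises leverage again and restarts the loop.
        price *= 1 - sell / market_depth
    return price, shares * price - debt

deep_price, deep_equity = fire_sale(0.05, 5, market_depth=1_000_000)
thin_price, thin_equity = fire_sale(0.05, 5, market_depth=200)
```

With `market_depth` large, one round of selling restores the target and the price barely moves beyond the original shock; with `market_depth` small, the same 5% shock produces round after round of forced sales and a much deeper price decline.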

12: Liquidity and Crashes

The principal reason that prices vary, especially in the short term, is the demand for liquidity that results from our apparent fickleness. If you want to buy or sell a stock—or if you have to—you are a liquidity demander. And what you are demanding when you demand liquidity is to do this quickly, and with low transaction costs. It is in the froth of liquidity where most trading profits are made. Not only does the demand for liquidity move prices, but the breakdown of liquidity is one of the primary drivers of crashes, as well. I assign the dynamics of liquidity to three types of agents: liquidity demanders, liquidity suppliers, and market makers.

Bookstaber uses two examples of dislocation in major markets to illustrate his thesis. First, he describes the market environment.

The Environment

The financial environment includes a means of egress for the financial markets: liquidity. Liquidity is a door that can yawn wide open. Or it can close shut. And, consistent with the dictum that the markets operate to extract the maximum pain, when investors really want to get out, that is what happens. And when investors need to get out but cannot do so readily, they need to get out all the more. And the door shuts tighter.


Bookstaber here focuses on the role of the “portfolio insurance” hedging strategy that was popular at the time of the 1987 crash, and indeed one that he helped to popularise while he worked at Morgan Stanley. He argues that the 20% drop in the market was the result of a portfolio strategy that sucked the liquidity from the market. Like many hedging strategies, it relied on liquidity that may not be available when it is really required. One of the root causes was that the futures market was operating (ironically?) on a much shorter time frame than the equity investors were.

Liquidity dried up because of the difference in time frames between the demanders and the suppliers. By the time equity investors could have reacted and done some bargain hunting, the specialists had moved prices so precipitously that these potential liquidity suppliers were scared away. The key culprit was the different trading time frames between the demanders and the suppliers.


The Flash Crash was another crisis of liquidity, one in which liquidity demand came in faster than the supply—the same sort of time disintermediation as occurred in 1987. The difference was in the available computer power. With the wonders of high-frequency trading, this crash occurred in minutes rather than over the course of hours.

One of the causes Bookstaber identifies for the market disruptions seems somewhat banal in hindsight

One pretty simple take—one that didn’t get highlighted in the final report—is that the problems all started with the SEC’s rule for decimalization of the equity markets in 2001, and reminded me of the game show The Price Is Right.


Bookstaber uses game play from a TV show called “The Price Is Right” to illustrate the economics of bid-offer spreads in markets.

… in the equity markets. There is a notion of a reasonable spread, related to price volatility and liquidity. The volatility measures the risk something bad will happen before the trader can unload his inventory, and the liquidity determines the time it will take to unload. If a trader is willing to put up a good-size order at this spread, but the trading increment is less than that, he can end up much like the first contestant in The Price Is Right. He might be bidding with a ten-cent spread, only to see someone step in front of him at nine cents.

Wall Street gets a bit more complicated than The Price Is Right, because if things go badly for a trader sitting at $.09, he can lean on the person bidding at the $.10 spread, turning around to put his position to him. Again, this will occur all the way down the line, but only if the trader at the tighter spread has a smaller position than the guy he is leaning on. So the bid size will be smaller and smaller as you run down the increments from $.10 to $.01. The increments will be smaller, but the need for speed will be greater.

The point is that with decimalization we end up with a smaller order book. The person who normally would be willing to trade in size at a reasonable spread pulls away, and the traders down the line need to get faster and faster trading platforms.
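The Price Is Right logic can be reduced to arithmetic. In the hypothetical rule of thumb below (my own sketch, with made-up sizes and spreads, not from the book), a quote at spread s earns s of compensation against the “reasonable” dime spread, so the trader is only willing to show proportionally less size as the spread tightens; competition then pushes the inside quote all the way down to a spread of one tick.

```python
def inside_size(tick, fair_spread=0.10, full_size=10_000):
    """Size shown at the top of the book, under a toy rule: size offered at
    spread s is proportional to s / fair_spread, and penny-jumping pushes the
    inside quote to a spread of exactly one tick."""
    best_spread = tick
    return full_size * best_spread / fair_spread

dime_book = inside_size(0.10)    # pre-decimalization: dime ticks, full size at the touch
penny_book = inside_size(0.01)   # post-decimalization: penny ticks, a tenth of the size
```

Under these toy assumptions, decimalization cuts the displayed size at the inside quote by a factor of ten, which is the “smaller order book” of the paragraph above.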

So how do we get from The Price Is Right to a smaller order book to the stock of Procter & Gamble sinking from $60 to $40, or Accenture nearly evaporating as it fell from more than $40 to one penny a share—in one minute?

The Flash Crash was also the result of preprogrammed selling, this time due to old-fashioned market stop loss orders. In 1987, the specialists were not well capitalized and fled in the face of the onslaught of sell orders. In 2010, no market makers were even expected to stand in front of the selling, and because of decimalization, there was a thin order book. One way to scare up orders was to drop prices precipitously. This then fed back into the computers, which calculated a new, increased hedge—hit further market order stops, so to speak—which then spit out further sell orders to the futures, and the cycle went one more turn. No one could step back and say, “Wait a minute, let’s think about this for a bit.” That is, it was a tightly coupled process.

The Agents and Their Heuristics

So, whether it is 1987 or 2010, we get a similar sequence of events. There is an initial shock and preprogrammed selling gets triggered in the face of that decline.

Leverage and margin calls are well-known causes of market crashes, but Bookstaber explores alternative paths, in particular the role of liquidity, which differs across markets and across types of goods. Dropping the price of a good may not automatically lead to increased sales; sometimes it just takes time.

… that is the measure of liquidity, how long it generally takes before a prospective buyer walks in the door after you have discounted the price to make things move. This is a pretty simple point, but it is at the core of why we end up with even the most sophisticated of investors and traders creating crises.

At the core of these two cases are three key types of agents in relation to liquidity:

Liquidity demanders that need to buy or sell. They have a need for immediacy; they care more about time than price, and are willing to move the market price to meet their needs. When the market is in crisis mode, they are forced to sell in order to raise cash, tamp down risk, or meet margin calls.

Liquidity suppliers that try to profit from the demanders. They seek to meet the liquidity demand—for a price. There are some short-term liquidity suppliers, such as hedge funds and other speculators, though sometimes they end up falling into the liquidity demand group. If the liquidity demand is very high, the deep pockets to take the other side belong to investors who have a longer time frame—the asset managers, pension funds, and sovereign wealth funds.

Between the two is the market maker. The market maker is the transaction intermediary, the broker, moving the price based on the liquidity demander’s needs in order to attract the appropriate amount of liquidity supply. Market makers trade with a very short horizon. They don’t want to take on risk; they want to buy in one instant and off-load their position in the next. They make their money off the bid-offer spread and from stockpiling positions in the face of demand by their clients (also called front running, which is allowed in many markets).

Modeling liquidity during market crises is difficult because of the complex, nonlinear dynamics of market participants interacting—

Part of the complexity arises from the feedback that occurs during periods of market shock and illiquidity. Low liquidity coupled with the forced sales of leveraged investors can cause further drops in price, leading to the cascades and contagion of fire sales, and further sapping liquidity from the market. Thus liquidity becomes an endogenous element of the broader market behavior, and cannot be analyzed without attention to issues of leverage, funding, and the resulting changes in investor behavior.

The complexity also arises from the heterogeneous decision cycles among those in the market, specifically the difference in time frames between the liquidity demanders, who require immediacy, and the liquidity suppliers, who aren’t in such a big hurry.

A sudden price dislocation can lead to forced selling due to margin calls, redemptions, programmatic selling algorithms such as portfolio insurance, stop losses, or risk parity strategies, and other pressures. Such liquidity demanders exhibit greater immediacy. That is, the time frame of the liquidity demanders shrinks, and there is more of a focus on the speed of liquidation than on price. On the other hand, those in a position to supply liquidity aren’t under the gun, and continue to be price sensitive; more critically, they don’t share the same short-term focus. Many liquidity suppliers are not even monitoring the markets with an eye to trade frequently.

Compounding the time horizon issues of liquidity demanders and suppliers are the impacts that inventory constraints can have.

Summing up, to understand a market crisis we need to understand the ways in which the demanders and suppliers of liquidity operate:
– what are their decision cycles;
– how much are they affected by market dislocations; and
– how critical is the market under stress to their portfolio adjustments?
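The time-frame mismatch summed up above can be simulated directly. In the hypothetical sketch below (my own construction, with invented parameters), demanders must unload a fixed inventory immediately, the market maker absorbs only a little per period while marking prices down to attract supply, and the price-sensitive suppliers look at the market only every `supplier_lag` periods; prices gap down between their visits and snap partway back when they finally arrive.

```python
def dislocation(sell_need=500, mm_capacity=20, supplier_lag=10, steps=30):
    """Toy price path when liquidity demand arrives faster than supply."""
    price, unsold, prices = 100.0, sell_need, []
    for t in range(steps):
        absorbed = min(mm_capacity, unsold)   # market maker takes a slice each period...
        unsold -= absorbed
        price -= 0.02 * absorbed              # ...and drops the price to scare up supply
        if t > 0 and t % supplier_lag == 0 and price < 100.0:
            # A long-horizon liquidity supplier finally checks the market and
            # bargain-hunts, pulling the price partway back toward fair value.
            price += 0.5 * (100.0 - price)
        prices.append(price)
    return prices

prices = dislocation()
```

Shrinking `supplier_lag` (suppliers watching the market more closely) shallows the dip, while raising it deepens the dislocation: the same mechanism the text describes for both 1987 and the Flash Crash, differing only in clock speed.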

The Denouement: The Crisis of 2008 and the Threat from Illiquidity

Bookstaber discusses the ways in which the responses to the GFC have reduced liquidity in the market. It is interesting to read the issues he raises below in the context of the Fed’s response to the COVID-19 crisis, in particular the way in which the Fed chose to provide liquidity to the bond market in ways that were previously the province of private-sector market makers.

Leverage was pinpointed as the prime mover in 2008, and it remains the focus of academic research and regulatory oversight. But I don’t think it will be leverage the next time around. Regulation has moved the risk to liquidity, largely by what it has done to the banks, the key agents involved in market making for the critical rates and credit markets. First, the reforms that came out of the Basel Committee on Banking Supervision restrict the amount of leverage banks can carry. This reduces their capacity to act as market makers. Second, the Volcker Rule, which is part of the Dodd-Frank Act, reduces the incentives for the banks to provide market making.

Bookstaber argues that reduced market liquidity is the collateral damage associated with the efforts to make the banking system stronger

I was instrumental in the development of the Volcker Rule, pushing for what is called the market-making exception that allows the banks still to make markets, and I think it or something like it is critical to maintaining financial stability. But like most regulation, especially complex regulations, it comes with unexpected collateral damage. And in this case the damage is to reduce market liquidity.

With this shift from leverage to liquidity, risk has moved into the shadows. Leverage is observable …. But liquidity—that is, the liquidity that really matters, the liquidity that is available when everyone seems to be rushing to sell—is difficult to assess. Most research on asset liquidity focuses on day-to-day market functioning during noncrisis periods, employing measures based on data such as bid-offer spreads and daily volumes drawn from these typical market periods. Such data provide limited insight into the large liquidations during periods of sharp price declines and related fire sale dynamics.

13: The 2008 Crisis with an Agent-Based View

This chapter reiterates Bookstaber’s view that agent-based modelling is the approach to use.

Cascades, Contagion, and Marks

In the 2008 crisis, the drop in prices and further forced selling did not come from market activity. Prices were marked down by counterparties to levels that bled out key firms and weakened the market, all without any actual selling. The assets were not very liquid, and as things gathered steam, they became less liquid still. There was no market to facilitate what economists call price discovery. Which meant you were at the mercy of the firm lending to you. It was judge and jury; it could pretty much make things up, assert a very low price for the collateral, instigating a margin call, and there was little you could do about it beyond complain and pay up.

This mark-to-market point is a critical one for understanding the development of the crisis.

Section V: The End of Theory

14: Is It a Number or a Story? Model as Narrative

Bookstaber uses the analogy of writing a book to identify two alternative ways of building a model.

One is to map out the world in a mechanistic way and have a model that (you hope) will predict the course of the story from start to finish. The other is to build a model that is flexible and unanchored, that adjusts with each unexpected twist in the road.

He argues that economic models are not especially useful because they do not offer useful insights into how crises unfold. However, it is not just about building a different type of model; he is also proposing a radical change in the way the model is used. Based on his experience, he argues it is better to try to develop narratives that explain what is going on: a story line of sorts, with the various characters working through that story line. As the crisis evolves, we seek to apply a narrative to the events we observe and to develop a good, supportable plot line going forward.

The effective model is a tool for developing these qualitative narratives, not a machine that spits out numbers. In the narrative mode, the agent-based model conforms with the investment approach to decision making and deliberations during crisis events.

The point isn’t to crank out and act on a number. It is to set up a model to see what light can be shed on a real-world problem, and to see if it can fit into a larger, intuitive narrative about what is going on.

If You Can Model It, You’re Wrong

I am not proposing a specific model to deal with crises. I am putting forward a process, a paradigm that, at any point in time, will be manifest as a model for that period. But as the crisis unfolds and components of radical uncertainty appear, the model will shift with the emerging reality.

The Model Builder as Pragmatist

Bookstaber describes his approach as pragmatic, in contrast to theoretical or axiomatic approaches, and refers to Thomas Kuhn’s (1962) concept of “normal science”, in which the aim is to work within the existing paradigm while refining and extending it. This process has played out over many centuries.

The paradigms we have in science now are not closer than earlier paradigms to an ideal or perfect paradigm; science is not heading toward a final paradigm that is superior to all others… science is pragmatic.

Pragmatic thinking employs any number of tools rather than putting all our chips on the “combined assumptions of maximising behaviour, market equilibrium, and stable preferences …”


Deduction starts with general propositions or axioms, and applies logic and mathematics to derive principles from them. …

Induction looks at specific cases, and generalises from them. The end result … can be the formulation of a deductive argument, but more generally induction is like learning from experience. Whereas the deductive approach is built on mathematics and formal logic, the inductive approach is more of a hodgepodge, a broadly interdisciplinary approach that might make use of statistics and mathematics, might get its bearing through historical study and literature, or might use heuristics that are only loosely tied to observation and past experience.

The critical distinction between the deductive and inductive approaches is that deduction can solve a problem, in the sense that you know the result based on the mathematical approach or applying the results of a formula. Deduction leads to general solutions, to mathematical shortcuts for solving a problem. With induction you do not (and perhaps cannot) take such a shortcut; you can only know the result by seeing it through to the end.

Bookstaber argues that life is inductive, and the agent-based modelling approach he has come to favour is also inductive.

With an inductive approach there is continued revision of the model, not just revision of the estimates for the parameters but of the nature of the heuristics and the network of interactions, while for a deductive approach revisions suggest that either the axioms or the implications manifest through their mathematical development are wrong.


Bookstaber represents the economic approach as inherently deductive, built on axioms that provide a truth applying to markets and economic beings at any time and in any place. The inductive approach he proposes is, in his words, “as much an exercise in pasting together a story line as it is developing a theoretical structure”.


Bookstaber considers what he believes is a fundamental disconnect between the world of economics and the real world

If the human world is fraught with inconsistency and indeterminacy, why should we use models from the natural realm that are designed to be rigorous and all-encompassing? An economist might argue that we all have to make simplifying assumptions in order to create rigorous, mathematical models. But then we must ask why these models should be constructed in this manner, and whether they apply or are simply an intellectual exercise that reaches no further than a cartoonish, two-dimensional world.

15: Conclusion

Bookstaber concludes with a call to arms to make use of agent-based modelling. He concedes that the models are not a panacea, but they are, in his view, an improvement over the status quo of models built on axioms and equilibria.
