The short version
Three main propositions run through this book.
First, the world of economics, business and finance is ‘non-stationary’–it is not governed by unchanging scientific laws.
Second, individuals cannot and do not optimise; nor are they irrational victims of ‘biases’ which describe the ways they deviate from ‘rational’ behaviour. Kay and King distinguish axiomatic rationality, as used by economists, from evolutionary rationality, as practised by people. They argue that many so-called ‘biases’ are better interpreted as logical responses to the complex world of radical uncertainty.
Third, they note that humans are social animals and that communication plays an important role in decision-making. This ties in with the importance that “narratives” play in their account of how people actually navigate the uncertainties of the real world.
The extracts below are organised by chapter. They are notes I took while reading, as a tool to reflect on what the book is saying. In places I have used bold text to highlight passages that capture key points. The extracts and the highlights reflect what I found interesting but they may well have missed areas that other readers will find important or useful. I recommend you read the book but the extracts will give a sense of the authors’ message.
The long version
John Kay and Mervyn King cover a lot of ground but, simply put, they present their book as being …
“… about how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.”
One of the things that makes the book interesting is that they confess that they were once true believers in the model of the rational economic agent who makes decisions by optimising value.
As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.
The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.
… they continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.
Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.
They note evidence that economic agents don’t appear to pursue optimisation strategies in practice
Business people, policy-makers and families could not even imagine having the information needed to determine the actions that would maximise shareholder value, social welfare or household utility. Or to know whether they had succeeded in doing so after the event. Honest and capable executives and politicians, of which there are many, try instead to make incremental decisions which they think will improve their business, or make the world a better place.
They acknowledge that few if any economists believe their models are an accurate description of how the real world works. The issue is whether this is a useful way to model and analyse economic outcomes; i.e. do economic agents (consumers, employees, businesses etc) collectively behave as if they were economically rational and seeking to maximise expected utility.
Most economists would readily acknowledge that no one actually engages in the kinds of calculation which are described in economic models. But since the work of Paul Samuelson, economists have relied on the claim that if people observed certain axioms which constituted ‘rationality’ they would – unconsciously – be optimising,
Kay and King argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty
But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.
Kay and King argue that value maximisation and optimisation strategies are not in fact rational because real people make choices that reflect the limits of what can be known or quantified as “risk”
Part I: Introduction: The Nature of Uncertainty
1 The Unknowable Future
One of the recurring themes in this book is the way in which probabilities are increasingly used in ways that disguise uncertainty as opposed to actually providing useful information about the choices that individuals, businesses, organisations and governments make in navigating the real world.
In the run-up to the global financial crisis, as in the preparations for the Abbottabad raid, policy advisers not only chose to express uncertainties as probabilities but were required to do so. Regulators prescribed the risk models used by financial institutions; Congress insisted on the quantification of judgements based on intelligence reports. In both finance and politics, this expression of uncertainty was at best unhelpful and at worst actively misleading.
They argue that it is better to step back and ask “what is going on here?”
The question ‘What is going on here?’ sounds banal, but it is not. In our careers we have seen repeatedly how people immersed in technicalities, engaged in day-to-day preoccupations, have failed to stand back and ask, ‘What is going on here?’
The question of how to distinguish risk and uncertainty is an issue with a long history and contributions from distinguished mathematicians and economists. Kay and King argue that the original answers were better than the ones that have come to dominate economics and finance since the mid-20th century.
Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not. They had already adopted mathematical techniques which gave the term ‘risk’ a different meaning from that of everyday usage. In this book we will describe the considerable confusion and economic damage which has arisen as a result of the failure to recognise that the terms ‘risk’, ‘uncertainty’ and ‘rationality’ have acquired technical meanings in economics which do not correspond to the everyday use of these words. And over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.
Radical uncertainty as they conceive it is related to, but broader than, Taleb’s Black Swans.
we are not only talking about the ‘black swans’ identified by Nassim Nicholas Taleb–surprising events which no one could have anticipated until they happen, although these ‘black swans’ are examples of radical uncertainty. We are emphasising the vast range of possibilities that lie in between the world of unlikely events which can nevertheless be described with the aid of probability distributions, and the world of the unimaginable.
Keynes and Knight, they argue, were on the right side of a proper understanding of uncertainty but lost the debate to other theorists who wanted to roll back the boundary between risk and uncertainty
But Keynes and Knight lost the battle to put radical uncertainty at the heart of economic analysis … instead of recognising radical uncertainty, and adopting policies and strategies that will be robust to many alternative futures, banks and businesses are run with reliance on models which claim knowledge of the future that we do not have and never could have.
2 Puzzles and Mysteries
The idea of radical uncertainty is similar to the distinction between known unknowns and unknown unknowns but Kay and King prefer to distinguish “puzzles” and “mysteries”.
Other writers have made similar distinctions to the ‘known’ and ‘unknown’ unknowns described by Donald Rumsfeld. Greg Treverton, chairman of President Obama’s National Intelligence Council and for many years a senior figure in the US intelligence community, stressed the difference between ‘puzzles and mysteries’. A puzzle has well-defined rules and a single solution, and we know when we have reached that solution.
Mysteries offer no such clarity of definition, and no objectively correct solution: they are imbued with vagueness and indeterminacy. We approach mysteries by asking ‘What is going on here?’, and recognise that even afterwards our understanding is likely to be only partial. They provide none of the comfort and pleasure of reaching the ‘right’ answer.
3 Radical Uncertainty is Everywhere
This chapter explores the forces that explain why the real world is characterised by radical uncertainty and why economic agents do not seek to optimise.
The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers …
Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.
There are clearly areas in which there are known facts, or at least outcomes that are the product of a stationary probability distribution (tossing a fair coin or other games of chance). At the other end of the spectrum there are “unknown unknowns” or Taleb’s “Black Swans”.
True “black swans” are states of the world to which we cannot attach probabilities because we cannot conceive of these states.
To describe catastrophic pandemic or environmental disasters, or nuclear annihilation, or our subjection to robots, in terms of probabilities is to mislead ourselves and others. We can talk only in terms of stories. And when our world ends, it will likely be the result not of some ‘long tail’ event arising from a low-probability outcome from a known frequency distribution … but as a result of some contingency we have failed even to imagine.
Through a glass, darkly
Given the boundaries of what we can know about the future, how do economic agents react?
Real households, real businesses and real governments do not optimise; they cope. They make decisions incrementally. They do not attain the highest point on the landscape, they seek only a higher place than the one they occupy now. They try to find outcomes that are better and avoid outcomes that are worse.
The limits of probabilistic thinking are obvious, which raises the question of why the distinction between risk and uncertainty articulated by Keynes and Knight has been so widely ignored.
Why has this seemingly obvious critique been so widely ignored? The hegemony of optimisation as the goal of decision-making is made possible by ignoring radical uncertainty. Building on the success of probabilistic reasoning in illuminating games of chance, the approach of decision theory bifurcates uncertainty into the unknown and unknowable, and the unknown but capable of being characterised by a known probability distribution. The practitioners of this approach wash their hands of the former …
It is easy to understand why economists and statisticians, in search of clear and comprehensive solutions, have sought wide extension of the scope of probabilistic reasoning. The underlying mathematics has a certain simplicity and beauty, and in practice can be applied by those who have acquired the requisite modest technical skill.
Probabilistic reasoning may appear beautiful and appealing, but sadly its applicability to real-world problems is limited.
Part II: The Lure of Probabilities
4 Thinking with Probabilities
This chapter explores the history of the rise of probabilistic thinking. The first formal analysis of probability is found in correspondence (1653-54) between Blaise Pascal and Pierre de Fermat on a gambling problem posed by a French noble. The equally interesting question (well to me anyway) is why it took so long for probability to be formalised when we had pretty good mathematicians and gamblers at least as far back as Classical Athens.
Historians of mathematics have speculated on why the discoveries of Pascal and Fermat came so late in the history of human thought. Some of the finest and most original mathematicians ever known lived in classical Athens. And Athenians gambled. Why did they not succeed in relating their mathematical skill to their common pastime? After all, as mathematics goes, the theory of probability is not very difficult.
Plato sought and found truth in logic; for him there was a sharp distinction between truth, which was axiomatic, and probability, which was merely the opinion of man. In premodern thought there was no such thing as randomness, since the course of events reflected the will of the gods, which was determinate if not fully known. The means of resolving uncertainty was not to be found in mathematics, but in a better appreciation of the will of the gods.
The formalisation of the concept of probability in the 17th century was also occurring at a time when the value of experiment and the discovery of facts was displacing reliance on arguments from authority.
When the Royal Society, Britain’s premier scientific body, was founded in 1660, it took as its motto ‘nullius in verba’–today translated informally as ‘take nobody’s word for it’–a forceful assertion of the primacy of experiment and discovery over argument from authority.
Around this time we also see the emergence of the concept of frequency distributions
Abraham de Moivre, another French mathematician, developed the mathematics of games of chance pioneered by Pascal and Fermat. … De Moivre … posed the question ‘What would be the frequency distribution of the outcomes of many games of chance?’ For example, suppose you tossed a fair coin a thousand times. On average, you would expect 500 heads. But you would rarely see exactly 500 heads. What was the probability of 499, or 510? The numerical answer was, de Moivre showed, described by a bell-shaped curve now known as the normal distribution.
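De Moivre’s result is easy to reproduce. The sketch below (my own illustration, not from the book) computes the exact binomial probability of k heads in 1,000 tosses of a fair coin and compares it with the bell-curve approximation de Moivre derived:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(k, n=1000, p=0.5):
    # Exact probability of k heads in n tosses of a fair coin
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(k, n=1000, p=0.5):
    # De Moivre's bell-curve (normal) approximation to the same quantity
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

for k in (499, 500, 510):
    print(k, round(binom_pmf(k), 5), round(normal_approx(k), 5))
```

For exactly 500 heads the probability is only about 2.5%, and the normal approximation agrees with the exact binomial answer very closely, which is precisely why the bell curve became such a convenient shorthand for frequency distributions.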
Another notable episode in the theory of probability was contributed by the Reverend Thomas Bayes, whose theorem enables us to calculate conditional probabilities: i.e. what is the probability that A will happen given that B has happened? A core principle in probability theory is that the probabilities of all possible events must sum to 1. But in a radically uncertain world, we are unable to assign meaningful probabilities to all possible events.
… what if, in a radically uncertain world, we are unable to describe all possible events, far less judge their relative probabilities? In subsequent chapters, we will show how significant this problem is for the wide application of probabilistic thinking.
My personal view is that the critique of Bayesian statistics is a little harsh. I get their argument that Bayesian statistics does not really address the problem of radical uncertainty but it still seems like a pretty useful model when applied within its zone of validity. If your only familiarity with the approach came from this book you could be forgiven for thinking that it is just rubbish. Kay and King concede that probability is useful within its zone of validity and I suspect would be willing to concede some utility to Bayesian decision making if challenged. If you are interested then I can recommend “The Theory That Would Not Die” as a good introduction to this approach.
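For readers unfamiliar with the theorem, here is a minimal sketch (my own, with hypothetical numbers) of the conditional-probability calculation Bayes’ theorem enables. Note that it works because all the states of the world are enumerated up front, which is exactly the assumption that breaks down under radical uncertainty:

```python
def bayes(prior, likelihood, false_positive_rate):
    # P(A|B) = P(B|A) * P(A) / P(B), where B is "test positive".
    # P(B) is computed over both states of the world: A true, A false.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a condition with 1% prevalence, a test that
# detects it 99% of the time but also gives 5% false positives.
posterior = bayes(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # about 0.167, despite the "99% accurate" test
```

The counter-intuitive result (a positive test still leaves the condition more likely absent than present) is the kind of puzzle where Bayesian reasoning genuinely earns its keep.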
5 A Forgotten Dispute
The debate over the boundary between risk and uncertainty is not new; it has been the subject of active debate from the earliest days of probabilistic thinking.
From the earliest days of probabilistic thinking, attempts were made to apply such reasoning outside the domain of the observable frequencies of games of chance and human mortality, to use probabilistic language and mathematics in the description of unique events such as the Yucatán asteroid or the bin Laden raid. And from the earliest days of probabilistic thinking, such extension was resisted. Opponents of extension for long had the upper hand. In his 1843 System of Logic, the British philosopher John Stuart Mill criticised the French mathematician Pierre-Simon Laplace for applying probability theory ‘to things of which we are completely ignorant’. Another French mathematician, Joseph Bertrand, went further. He lambasted his countrymen for making absurd assumptions in the application of probabilities to problems outside the domain of games of chance.
However Kay and King record that the pressure to extend its domain became increasingly hard to resist and the “solution” was found in statements of subjective or personal probability.
In this book we will use the term subjective probabilities throughout. The adjectives ‘subjective’ or ‘personal’ acknowledge that the assessment is not objective but a matter of individual judgement, and that different people may attach different probabilities to the same past, present or future event, both before and after it has occurred.
The debate reignited in the interwar period of the early 20th century. John Maynard Keynes and Frank Knight argued for the continued importance of the distinction between risk and uncertainty and denied that probabilities could be applied outside the realm of known or knowable frequency distributions such as games of roulette or observations of mortality or weather. Frank Ramsey took a different view
Although notions of personal probability had been implicit for many years, Ramsey was the first to describe ‘subjective probability’ in a more formal way. Ramsey further proposed that the mathematics which had been used for the analysis of probabilities based on frequencies could be applied to these subjective probabilities. Similar analysis was developed independently by Bruno de Finetti …
Keynes and Knight lost the debate
Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions.
Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. The University of Chicago was the focus for much of this work
Many great economists contributed to the creation of the Chicago School, but the figure best known to a wider public was Milton Friedman, Professor of Economics from 1946 to 1977 and one of the most influential economists of the twentieth century. Friedman’s Price Theory–a Provisional Text may be regarded as the primer of the doctrines of the Chicago School. In it he wrote:
“in his seminal work, Frank Knight drew a sharp distinction between risk, as referring to events subject to a known or knowable probability distribution, and uncertainty, as referring to events for which it was not possible to specify numerical probabilities. I’ve not referred to this distinction because I do not believe it is valid . . . We may treat people as if they assigned numerical probabilities to every conceivable event.“
Kay and King note that the argument for subjective probability is based in part on the idea that gambling is an expression of subjective probability
The idea–which has been present, often implicitly and sometimes explicitly, since subjective probabilities were first employed–is that an observer can deduce subjective probabilities by presenting people with the opportunity to bet on various outcomes. The term ‘pignistic probability’ was coined by Philippe Smets to describe the process of claiming to deduce subjective probabilities from observed gambling behaviour.
… but Kay and King do not buy the argument
The notion that observing how people gamble gives insight into rational behaviour under uncertainty is strange. After all, bookmakers and casinos consistently make money at the expense of their customers.
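The point about bookmakers is easy to verify arithmetically. The sketch below (my own illustration, with made-up odds) shows why the probabilities implied by a bookmaker’s quoted odds sum to more than one, guaranteeing the bookmaker a margin over its customers taken as a group:

```python
def implied_probabilities(decimal_odds):
    # Each decimal odds quote of o implies a probability of 1/o.
    # A bookmaker's quotes across all outcomes sum to more than 1;
    # the excess is known as the "overround", the bookmaker's margin.
    return [1 / o for o in decimal_odds]

# Hypothetical three-way market (home win / draw / away win):
odds = [2.0, 3.4, 4.0]
probs = implied_probabilities(odds)
overround = sum(probs) - 1
print([round(p, 3) for p in probs], round(overround, 3))
```

Whatever the true probabilities are, a customer betting at these odds faces a built-in disadvantage, which is why observed gambling behaviour is a doubtful window onto rational belief.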
… when intelligent investors do buy or sell, they require what Benjamin Graham christened a ‘margin of safety’ between price and their assessment of value.
Once it is acknowledged that most people do not bet on most things, and that it is simply not true that everyone would be willing to take one side or another of any proposed lottery, the argument deployed by Ramsey against Keynes simply fails. In a world of radical uncertainty, most people do not choose among lotteries, far less enter them, and for good reasons.
6 Ambiguity and Vagueness
Kay and King note that part of the appeal of probabilistic thinking is that fits neatly with the way in which economics has embraced mathematical reasoning in the pursuit of rigour and clarity.
The belief that mathematical reasoning is more rigorous and precise than verbal reasoning, which is thought to be susceptible to vagueness and ambiguity, is pervasive in economics. In a celebrated attack on Nobel Prize winner and New York Times columnist Paul Krugman, the Chicago economist John Cochrane wrote, ‘Math in economics serves to keep the logic straight, to make sure that the “then” really does follow the “if,” which it so frequently does not if you just write prose.’ But there is a difficulty here which appears to be much more serious in economics than it is in natural sciences: that of relating variables which are written down and manipulated in mathematical models to things that can be identified and measured in the real world. This is an aspect–perhaps the principal aspect–of a problem which Paul Romer, 2018 Nobel laureate, has described as ‘mathiness’.
They argue that part of the problem with using math is that, while numbers are essential to economic analysis, the economic data and economic models are never descriptive of ‘the world as it really is’. Economic interpretation is always the product of a social context or theory. That does not invalidate this form of analysis completely but it does highlight the importance of understanding the limitations of the models and the data they work with.
They also maintain that it is important to recognise that people tend to express probability in real life as a rank ordering rather than a precise number
… descriptions of likelihood, confidence, surprise and certainty are often used in everyday language, interchangeably with probabilities. But they are not probabilities. They give us rankings, representing an ordering rather than a numerical scale.
Some aspects of the real world are inherently vague or ambiguous. Different people may reasonably arrive at different interpretations of an event.
Kay and King argue that whether due to vagueness or ambiguity …
… it is impossible to converse intelligently about subjective probabilities in the absence of a shared understanding of the state of the world under discussion and the language with which it is described
Behind these efforts to escape radical uncertainty is the belief that there is a scientific truth–a description of ‘the world as it really is’–waiting to be discovered as new information gradually becomes available. Data can help us update an initial, or prior, probability distribution into a new ‘posterior’ probability distribution. But that prior distribution is by its nature subjective; so, therefore, must be the posterior distribution.
Probabilities are used today in economic, scientific and general conversation. Yet radical uncertainty is not well captured by probabilistic reasoning. As Keynes put it: ‘It is difficult to find an intelligible account of the meaning of “probability”, or of how we are ever to determine the probability of any particular proposition; and yet treatises on the subject profess to arrive at complicated results of the greatest precision and the most profound practical importance.’
7 Probability and Optimisation
Probabilistic decision making also relies on the concept of expected value and, in particular, the idea that it is rational to seek to maximise (or optimise) expected value.
From the beginnings of probability theory, mathematicians realised that a further logical step was necessary to translate the theory into advice as to when to gamble and when to keep your money in your pocket. The concept of expected value was part of the Pascal–Fermat solution to the problem of points.
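The Pascal–Fermat solution to the problem of points is itself a neat illustration of expected value. The sketch below (my own, assuming a fair game) divides the stakes of an interrupted game in proportion to each player’s probability of winning had play continued:

```python
from math import comb

def share_of_stakes(a, b):
    # Pascal-Fermat: player 1 needs a more wins, player 2 needs b more.
    # At most a + b - 1 further fair rounds settle the game; player 1's
    # fair share is the fraction of those sequences in which player 1
    # collects at least a wins.
    n = a + b - 1
    winning_sequences = sum(comb(n, k) for k in range(a, n + 1))
    return winning_sequences / 2**n

# Classic example: play stops with player 1 needing 1 more win and
# player 2 needing 2. Player 1's expected share is 3/4 of the pot.
print(share_of_stakes(1, 2))  # 0.75
```

Dividing the pot this way makes each player’s share equal to the expected value of their position, which is the logical step the chapter describes.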
Nineteenth-century economics was developed in the context of the utilitarianism of the English philosophers Jeremy Bentham and John Stuart Mill. Individuals sought to maximise their utility and moral actions served to maximise the sum of such utilities–‘the greatest happiness of the greatest number’.
The collaboration with Savage defined Friedman’s views on the appropriate tools for analysing risk and uncertainty, views which proved influential among his colleagues and students. While Savage was initially modest about the scope of his approach, Friedman had few doubts on this–or almost any other–matter.
Kay and King take issue with this approach to risk management, arguing that people tend to interpret risk in terms of unfavourable events, not beneficial ones. Risk is asymmetric, not a symmetric range around an expected value. They argue that the concept of a “reference narrative” offers a better way of dealing with risk
We believe the best way to understand attitudes to risk is through the concept of a reference narrative, a story which is an expression of our realistic expectations. For J. P. Morgan, the overarching reference narrative is one in which the bank continues profitable growth. A large corporation will have many strategies for achieving that overarching objective in particular areas of its business and there will be a reference narrative relating to each business unit. Some of these business unit reference narratives may be very risky, but the corporation may tolerate such risks provided they do not endanger the reference narrative of the organisation as a whole.
They acknowledge the attraction of thinking of risk as volatility …
One reason why this view of risk as volatility has been popular among economists is that it fits well with the notion that the difference between expected utility and expected wealth is a quantitative measure of risk aversion. Risk as volatility can be compounded with risk aversion to yield a monetary value of the cost of risk. This calculation enables risk to be priced as a commodity, and bought and sold between people who have different preferences for risk …
…. but argue that the “reference narrative” is a more reliable guide to dealing with radical uncertainty.
Once we move away from the small worlds of repeated events, known frequency distributions, and pay-offs which are small relative to existing wealth, there is simply no basis for the claim that rational decision-makers should maximise subjective expected utility. Radical uncertainty is fatal to the attempt to draw an analogy between consumer choice and decision-making under uncertainty. We cannot define all possible future outcomes. Our knowledge of present and future states of the world is imperfect and even if we attached probabilities to them we would be foolish to act on these probabilities when it is likely that others have better information and understanding.
Part III: Making Sense of Uncertainty
8 Rationality in a Large World
What does it mean to be rational? One approach that Kay and King argue came to dominate economics and finance is based on the principles of “axiomatic rationality”. This starts with a priori assumptions of what it means to be rational and then assumes that people seek to maximise the “subjective expected utility” attached to decisions they make. Kay and King don’t use the term but this approach is also labelled “homo economicus”.
The approach to decision-making under uncertainty developed by von Neumann and Morgenstern and elaborated by Friedman and Savage in the 1940s sets out a definition of ‘rationality’ based not on observation or introspection, but on a set of a priori axioms. This way of thinking we will describe as ‘axiomatic rationality’. It has the logical consequence that there is something which might be described as ‘subjective expected utility’ which individuals who are ‘rational’ are maximising. Obedience to these axioms, it was claimed, defined ‘rational’ behaviour. This is not a particularly obvious way to define ‘rationality’ and it is certainly not the only possible approach. It is, however, one which has come to dominate economics.
Kay and King argue that this is not the way people actually make decisions but, equally importantly, that this is not the way people should behave in a radically uncertain world. Notwithstanding the widespread adoption of the axiomatic approach to rationality, Chapter 8 identifies a range of dissenting voices.
In the early 1950s, Maurice Allais presented evidence that the choices people make amongst lotteries violated the assumption that they were seeking to maximise expected utility. A decade later, Daniel Ellsberg observed what he called “ambiguity aversion” – people might prefer certainty to maximising subjective expected utility. And in 1978 Herbert Simon received the Nobel Prize in Economics for his research into “bounded rationality”.
Behavioural Finance offered another perspective on the choices that people make.
While Allais, Ellsberg and Simon regarded their observations as a rebuttal of the view of decision-making under uncertainty put forward by Friedman and Savage, the approach pioneered by Kahneman and Tversky adopted a markedly different stance. The subject of their critique is the decision-maker, not the model of decision-making. If the world does not conform to the model, the failure is not a failure of the model but a failure of the world, or to be precise, of the people the model is intended to describe.
At face value, Kahneman and Tversky seem to support the argument that people do not actually maximise expected utility, but the problem that Kay and King see with Behavioural Finance is defining exactly what is rational
‘Biases’ can be identified only when contrasted with a counterfactual of unbiased or ‘rational’ behaviour. Since the word ‘rationality’ is powerful, it should be used with great care. But what does it mean to act rationally? Ordinary usage suggests two characteristics of rational judgement or action. First, the judgement or action would be based on beliefs about the world which were reasonable. Not necessarily correct beliefs–as we have seen, in a world of radical uncertainty we may not know even after the event what the true state of the world was …
A second requirement of rationality is an element of internal logic or consistency. The judgement or action is appropriate given the beliefs about the world which give rise to it. This proposition requires care in interpretation. It may be difficult to distinguish errors in reasoning from mistakes in belief.
Kay and King concede that it is possible to disagree with their interpretation of ‘rationality’ …
… but that very possibility of disagreement makes an important point. The axioms of choice under uncertainty do not enjoy any monopoly on the term ‘rationality’.
Kay and King next consider three different styles of reasoning based on the work of Charles Sanders Peirce, who in the late nineteenth century distinguished:
- Deductive reasoning; which reaches logical conclusions from stated premises;
- Inductive reasoning; which seeks to generalise from observations, and may be supported or refuted by subsequent experience; and
- Abductive reasoning; which seeks to provide the best explanation of a unique event.
Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable. Although the term ‘abductive reasoning’ may be unfamiliar, we constantly reason in this way, searching for the best explanation of what we see …. But the methods of decision analysis we have described in earlier chapters are derived almost entirely from the deductive reasoning which is relevant only in small worlds.
In lieu of the axiomatic form of rationality favoured by economic and finance theory, Kay and King favour a simple rule proposed by the Israeli economist Itzhak Gilboa: if a decision-maker would not change their decision when the alleged irrationality is explained to them, then the choice is rational
Rational people sometimes make mistakes. But we would expect that rational people would normally agree that their judgements were mistaken when errors in either their beliefs or their logic were pointed out to them. We have considerable sympathy with the concept of irrationality put forward by the Israeli economist Itzhak Gilboa: ‘A mode of behavior is irrational for a decision-maker, if, when the latter is exposed to the analysis of her choices, she would have liked to change her decision, or to make different choices in similar future circumstances’
Kay and King note that it can be hard to make rational choices even when dealing with “small world” problems that are reasonably well specified by the laws of probability, but the logic derived from reasonably maintained premises can only ever take us so far.
Under radical uncertainty, the premises from which we reason will never represent a complete description of the world. There will be different actions which might properly be described as ‘rational’ given any particular set of beliefs about the world. As soon as any element of subjectivity is attached either to the probabilities or to the valuation of the outcomes, problems cease to have any objectively correct solution.
Next Kay and King return to the question of whether the biases catalogued by Behavioural Finance are in fact irrational or “errors” in thinking
The claim to identify biases in human behaviour presupposes knowledge of what unbiased behaviour looks like. The behavioural economist claims to know the right answer, which his inept subjects fail to identify. But only in small worlds are right and wrong answers clearly identified. Most of the observed ‘biases’ in behavioural economics are not the result of errors in beliefs or logic, although some are. Most are the product of a reality in which decisions must be made in the absence of a precise and complete description of the world in which people live, in contrast to the small worlds in which the students whose choices are studied in experimental economics are asked to participate.
Such behaviour is not irrational if rationality is defined by reasonable beliefs and internal consistency. Kahneman and Tversky describe their findings as documenting systematic errors in the thinking of most people which, they argue, contradict the assumptions of social science in the 1970s that people are generally ‘rational’. But they do not present a definition of ‘rationality’ relevant to large worlds, in which problems have no well-defined solutions. Although Kahneman and Tversky acknowledge that normal human behaviour is often guided by intuition, impressions and emotions, they regularly return to versions of the observation that ‘there are distinctive patterns in the errors people make. Systematic errors are known as biases, and they recur predictably in particular circumstances.’ The fact that such deviations are said to be widespread and predictable should make us sceptical of the claim that they are errors.
Kay and King are not arguing that Behavioural Finance has no value …
Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings.
Kahneman offers an explanation of why earlier and inadequate theories of choice persisted for so long–a ‘theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws’. We might say the same about behavioural economics. We believe that it is time to move beyond judgemental taxonomies of ‘biases’ derived from a benchmark which is a normative model of human behaviour deduced from implausible a priori principles. And ask instead how humans do behave in large worlds of which they can only ever have imperfect knowledge.
Kay and King consider the ways in which Behavioural Finance is used to justify “nudging” people’s choices
Failure to appreciate the fact that people are struggling to know what it is to be rational in a world of radical uncertainty leads to the conclusion that their ‘mistakes’ should be corrected by policy interventions, and recommendations of this kind have been suggested by Richard Thaler.
They express some sympathy for some of Thaler’s suggested interventions to ‘nudge’ people towards more appropriate behaviour but this is qualified by the Gilboa test
…. would people agree that the actions into which they are ‘nudged’ are indeed in their best interests, and that their un-nudged behaviour was irrational, once the ‘right’ decision is explained to them?
Overall, their approach would appear to favour less nudging
But a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know. As we explained in chapter 1, it is extremely difficult to assess how much any particular individual should invest in a pension plan. And the prescriptive tone of much of what is written in behavioural economics makes apparent the reality of the danger of well-meaning illiberalism.
Kay and King also examine Herbert Simon’s concept of “bounded rationality” based on the observation that complexity makes it impossible to evaluate the consequences of all possible outcomes and select the most advantageous.
Simon recognised that radical uncertainty prevented people from behaving in the optimising manner defined by a priori axioms. And so he argued that ‘more than minor tampering with existing optimization theory is called for’. He was anticipating, but not preventing, the subsequent development of a large literature based on such minor tampering.
Simon was interdisciplinary in his interests and research, and was also a pioneer of artificial intelligence. He suggested that one way in which people might approach decisions in a radically uncertain world was to use a rule of thumb to search for a ‘good enough’ outcome. Such behaviour was described as ‘satisficing’, and in practice can deliver superior outcomes to actions selected by optimising behaviour. The reason is that to pretend to optimise in a world of radical uncertainty, it is necessary to make simplifying assumptions about the real world. If these assumptions are wrong–as in a world of radical uncertainty they are almost certain to be–optimisation yields the wrong results …
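One way to see why pretending to optimise can yield the wrong results is the winner's curse. The simulation below is a toy construction of ours, not an example from the book: every option is genuinely worth the same, so the option with the highest estimated value is simply the one whose estimate is most over-optimistic. The "optimiser" is systematically disappointed, while a satisficer content with any option meeting a modest target is not chasing an illusory maximum.

```python
import random

random.seed(42)
TRUE_VALUE = 1.0              # every option is really worth the same
N_OPTIONS, N_TRIALS = 20, 2000

optimiser_expectations = []
for _ in range(N_TRIALS):
    # The "optimiser" ranks noisy estimates and picks the apparent best,
    # which is simply the estimate carrying the largest optimistic error.
    estimates = [TRUE_VALUE + random.gauss(0, 0.3) for _ in range(N_OPTIONS)]
    optimiser_expectations.append(max(estimates))

# On average the chosen option promises far more than any option delivers;
# a satisficer who stops at the first estimate near 1.0 makes no such promise.
shortfall = sum(optimiser_expectations) / N_TRIALS - TRUE_VALUE
assert shortfall > 0.4
```

With twenty options and this noise level the optimiser's forecast overshoots the true value by roughly half, a concrete instance of simplifying assumptions (here, trusting the estimates) producing confidently wrong answers.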
Kay and King argue that the concept of “bounded rationality” developed by Simon has been interpreted to be something quite different from what Simon intended.
Economists have adapted the phrase ‘bounded rationality’ to mean something very different from Simon’s description as the consequence of radical uncertainty. They have instead used it to describe the cost of processing information, which then acts as an additional constraint in an optimisation problem. Bounded rationality, in this sense, adds to the optimisation calculation the costs and benefits of obtaining the information which we choose not to have. Of course, this is not what Simon meant. Nor, indeed, does it make much sense as a description of any process with practical application. The implications of bounded rationality are not represented by adding computational costs to an optimisation problem. Bounded rationality as proposed by Simon reflects the challenges of making decisions governed by reason and logic under radical uncertainty in which no computable solution is available.
Kay and King argue that further examples of people choosing simple workable as opposed to complex optimised solutions can be found in the work of Gary Klein and Gerd Gigerenzer.
Klein describes the reality of decision-making in complex situations, which require the search for a workable solution rather than a process of optimisation. Both of us have experience of the academic committee which cannot reach a decision because it is always possible that there might be a better option than the one which is on the table: ‘the best is the enemy of the good’. Real people do not optimise, calculate subjective probabilities and maximise expected utilities; not because they are lazy, or do not have the time, but because they know that they cannot conceivably have the information required to engage in such calculation. Nevertheless good decision-makers, like Klein’s firefighters and paramedics, or Warren Buffett or Steve Jobs, are rightly respected for their judgement.
Simultaneous with the work of Kahneman and Tversky, but to less public attention or acclaim, the German psychologist Gerd Gigerenzer has led a group of researchers at the Max Planck Institute in Berlin. Gigerenzer and his colleagues are focused less on ‘biases’ than on the ways in which real people make actual decisions on the basis of necessarily limited information. The group has emphasised the value of simple heuristics–or rules of thumb–in enabling us to resolve situations characterised by radical uncertainty. Gigerenzer and his colleagues have promoted a toolbox comprising ‘fast and frugal’ heuristics.
9 Evolution and Decision-making
In this chapter Kay and King argue that the biases that behavioural economics characterises as signs of human failure to reason in a rational manner should in fact be seen as evidence of successful risk management strategies that are beneficial in a world of radical uncertainty. In essence they argue that evolution is smarter than economists.
There is an alternative story to that told by behavioural economics. It is that many of the characteristics of human reasoning which behavioural economics describes as biases are in fact adaptive–beneficial to success–in the large real worlds in which people live, even if they are sometimes misleading in the small worlds created for the purposes of economic modelling and experimental psychology. It is an account which substitutes evolutionary rationality for axiomatic rationality.
They argue that we cope with the future by organising our lives around reference narratives. These reference narratives are not necessarily worked out in specific detail but they provide a basis for planning and a framework for day-to-day choices. The argument they are making here seems to me to have parallels with the argument Yuval Noah Harari makes in Sapiens that Homo sapiens’ success as a species can be attributed in large part to the way in which we cooperate and communicate.
The advantage of being better able to cope with radical uncertainty was one of the primary drivers of the emergence of unreciprocated altruism in these societies; their cultures evolved to expect nice behaviour and punish nasty behaviour. The larger and more diverse the pool of people who might provide help, the less vulnerable are individuals and households to unanticipated events.
This emphasis on communication reinforces the notion that what may be ‘biases’ in individual problem-solving behaviour in well-defined puzzles are actually advantageous in the group resolution of the ill-defined problems posed by uncertainty.
Loss aversion, which appears irrational in a value-maximisation context, might in fact be considered a useful bias in an evolutionary context
A predisposition to avoid large losses is a useful attribute. These actions are not the result of any calculation. Not just because we don’t have the time, or the information, to make any calculation of subjective expected utility, although it is relevant that we don’t. But also because, as Taleb has emphasised, evolution favours those who survive, and they are not necessarily those who maximise expected value.
For an individual, choosing the strategy most likely to succeed maximises expected winnings. But a group made up of such optimising individuals is eventually wiped out by infrequent calamities.
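The point that survival and expected value can pull in opposite directions is easy to verify with a standard multiplicative gamble (our illustration, not the book's): a bet can be favourable on average each round and still ruin anyone who keeps taking it, because repeated play compounds the geometric, not the arithmetic, mean.

```python
# One 50/50 bet, repeated: wealth multiplied by 1.6 or by 0.5 each round.
up, down = 1.6, 0.5

arithmetic_mean = (up + down) / 2        # 1.05: positive expected value
geometric_mean = (up * down) ** 0.5      # ~0.894: what repeated play compounds
assert arithmetic_mean > 1.0
assert geometric_mean < 1.0

# A typical path (equal numbers of ups and downs) after 100 rounds keeps
# roughly a hundred-thousandth of the original stake.
typical_wealth = up ** 50 * down ** 50   # = 0.8 ** 50
assert typical_wealth < 0.001
```

An ensemble of expected-value maximisers playing this game is, in the authors' phrase, eventually wiped out, even though each round looked "rational" in isolation.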
Kay and King argue that a bias to confidence and optimism is also useful
To adherents of axiomatic rationality and behavioural economists alike, optimism is a ‘bias’, leading to errors in calculations of subjective expected utility. We are not so sure. Admiral James Stockdale, the American naval officer who was imprisoned and tortured in Vietnam, attested effectively to the evolutionary value of confidence and optimism.
Churchill’s ringing defiance in the face of adversity was the product of his relentless confidence and self-belief, not the outcome of considered reasoning. Sometimes, but only sometimes, this self-assurance was justified. It led to his hair-raising exploits in the Boer War, his enthusiasm for disastrous policies such as the Gallipoli expedition, and his persistent, unsuccessful resort to the gaming tables. And excessive self-confidence can be a dangerous trait in a political leader. But in the right circumstances–those of 1940–Churchill’s optimism and confidence were vital.
The value of being confident is however context dependent
Evidently optimism is productive, but more so when contained and channelled. Overconfidence is generally a disaster at the gaming table but vital to the leader inspiring team-mates, business colleagues or troops.
Certainly, Branson’s behaviour was reckless, Churchill’s bravado often foolish, and Jobs’s career included repeated failures. But neither they nor the world would have been better off if they had learnt to control their ‘biases’ and suppress their ‘irrationality’. And that observation exposes the limitations of the concept of axiomatic rationality. Churchill, Jobs and Branson lived in a large world, not the small world in which rational behaviour can be reduced to a mathematical calculation in the context of a well-defined problem and complete knowledge of the environment. Their achievements remind us of the insight which Frank Knight described a century ago, too long forgotten; the connection between radical uncertainty and entrepreneurship. As Keynes observed, the spirit of enterprise dies when mathematical expectation takes over. Risk-taking behaviour which might appear inconsistent with axiomatic rationality is the central dynamic of a capitalist society–a key part of ‘the secret of our success’.
Summing up …
For too long, the type of intelligence that is necessary to cope with a world of radical uncertainty has been underestimated and undernourished. The concepts of biases and noise, which underpin much of behavioural economics, are, when taken to the lengths of a general theory, inconsistent with radical uncertainty, with evolution, and with the collective nature of much human decision-making.
10 The Narrative Paradigm
This chapter explores the ways in which the role narratives and reciprocity play in human decision making is undervalued or ignored by models that emphasise axiomatic rationality and value maximisation.
To observe that any decision starts from the question ‘What is going on here?’ seems obvious and even trivial. But as Rumelt, with decades of relevant experience, immediately recognised, it is not. That mode of analysis is certainly very different from the approach of the Bayesian dial, in which lone decision-makers begin from a set of prior beliefs and watch the dial move back and forth as new information becomes available to them.
Kay and King argue that having prior opinions about everything is one of the principal characteristics distinguishing the bad decision-maker. They associate prior opinions with the prior hypothesis in Bayesian thinking and this seems to be one of the key reasons they discourage using it. This seems overly critical of the Bayesian approach to me. A true Bayesian would be open to evidence that disproved the prior hypothesis. That said, Bayes might be best suited to small world problems, and so it might be fair to argue that large world problems characterised by radical uncertainty are outside its zone of validity.
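For what the "Bayesian dial" mechanics amount to in a small-world setting, here is a minimal sketch with made-up numbers: a genuinely Bayesian decision-maker holding even a strong prior is moved steadily, not entrenched, as contrary evidence arrives.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.9                 # a strong prior in favour of the hypothesis
for _ in range(5):           # five observations, each 3x likelier if H is false
    belief = bayes_update(belief, 0.2, 0.6)

assert belief < 0.1          # the prior has been revised, not defended
```

The trouble the authors point to is not this arithmetic, which is impeccable, but that in a large world the likelihoods and even the hypothesis space cannot be meaningfully specified.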
Kay and King are also concerned with the ways in which economists often completely miss the value of reciprocity in human relationships
The classic anthropological study of reciprocity in human relationships is Marcel Mauss’s 1925 essay The Gift. Mauss asked, ‘What power resides in the object that causes its recipient to pay it back?’ ‘The objects are never completely separated from the men who exchange them.’ Mauss’s thesis would be summarised in the popular adage ‘there’s no such thing as a free lunch’, which fifty years later would be the title of a book by Milton Friedman
Everyone understands that gift-giving is part of the process of establishing and cementing social relationships, and that these relationships serve economic as well as social purposes. (Except, perhaps, a few economists: Joel Waldfogel’s short article in the American Economic Review on ‘The deadweight loss from Christmas’ showed that the monetary value recipients attached to gifts was significantly less than their cost.)
As Mauss explained, gifts generally entail some notion of reciprocity. But the more remote the ties of community or kinship, the greater the need for some sort of equivalence in exchange. Until one reaches the purely commercial exchange, in which the transaction becomes an anonymous one between parties who cannot have any other relationship with each other because they do not know who they are. Such exchanges require some sort of accounting system, in which credits and debits are recorded.
11 Uncertainty, Probability and the Law
This chapter examines the ways in which lawyers use, and sometimes misuse, probabilistic thinking, and also the value of “narrative reasoning”
In modern times, it has been argued that lawyers have much to learn from probabilistic mathematics and Bayesian reasoning. There is some validity to this view, but more to its opposite–that devotees of probabilistic reasoning have much to learn from the accumulated wisdom of the practice of law.
Probabilistic reasoning and Bayes’ theorem are helpful, even indispensable, when a narrow issue in a legal case can be expressed as a small-world problem–as in dismissing Simpson’s claim that some other perpetrator left at the scene of the crime DNA which happened to be an exact match for his own. But the use of probabilities in court is frequently confused and confusing …
whatever frequencies might tell us about probabilities, such statistics cannot, without a narrative context, form a basis for a conviction. We need a story. Narratives are the means by which humans–as judges, jurors or people conducting the ordinary business of life–order our thoughts and make sense of the evidence given to us.
A ‘good’ explanation meets the twin criteria of credibility and coherence. It is consistent with (most of) the available evidence, and the general knowledge available to judges and jurors.
In that process of rational deliberation, statistical reasoning can assist, but never replace, narrative reasoning. We develop narratives and use them to convince others of our point of view. A world of radical uncertainty is one governed not by statistical distributions but by unique events and individuals. Justice requires a process of legal reasoning which respects that uniqueness.
12 Good and Bad Narratives
No one ever made a decision because of a number. They need a story.—DANIEL KAHNEMAN (quoted in Michael Lewis, The Undoing Project (2017), p. 250)
There is truth in literature as well as in mathematics and science. As Michael Lewis concluded when trying to explain the implications of the experimental findings of Kahneman and Tversky, ‘the stories we make up, rooted in our memories, effectively replace probability judgements’.
Given the importance they assign to reference narratives, Kay and King devote a chapter to exploring how to distinguish good and bad narratives. They argue that credibility is a core feature distinguishing good and bad narratives and this is closely related to coherence: a story is coherent if its components are internally consistent.
They note the enduring attraction of a grand narrative that explains everything
Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.
… but argue that we should always be sceptical of the capacity of any one narrative to explain everything. They quote with approval the distinction between the hedgehog and the fox
… the distinction, first made by the Greek poet Archilochus, developed by Tolstoy and subsequently popularised by Isaiah Berlin, between the ‘hedgehog’ and the ‘fox’. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.
This is not saying that one approach is necessarily better than the other; the best approach to a problem will always be context dependent, and most problems will benefit from being analysed from different perspectives.
The world benefits from both hedgehogs and foxes. Winston Churchill and Steve Jobs were hedgehogs, but if you are looking for accurate forecasts you will do better to employ foxes.
Kay and King discuss the use of scenarios in business planning as a way of understanding exposures to radical uncertainty and building resilience
Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..
Scenario planning is a way of ordering thoughts about the future, not of predicting it.
The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.
Kay and King argue that narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future
We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.
Narratives are also how we communicate the decisions that we have made
In a world in which to list all possible outcomes and their probabilities would be impossibly complex, narratives are an essential part of how we reason. But they are not just a way in which we provide ourselves with the ‘best explanation’. They play a crucial role in how we communicate with each other, and how we reach collective decisions. Narratives change and evolve over time, and need to be constantly challenged.
The chapter concludes by noting that there are a variety of processes that we use to create and present narratives. Some narratives are based around numbers, others based around models. This serves as an introduction to the two subsequent chapters
- Telling stories through numbers
- Telling stories through models
13 Telling Stories Through Numbers
Numbers offer a way of succinctly describing some relationships in the world we are navigating. This chapter explores the ways in which statistics and surveys have both helped to provide insight but also to mislead.
The normal distribution figures prominently in this account
The remarkable discovery was made that a relatively few mathematical formulae were capable of describing many different observed phenomena. If you knew a little about the distribution, and the appropriate formula for that class of distribution, you could use that information to calculate the whole of the distribution. De Moivre’s bell-shaped ‘normal’ distribution, introduced in chapter 4, gained that name because it was encountered so frequently. The first applications were in physical sciences, such as astronomy, but in the nineteenth century the Belgian Adolphe Quetelet showed that many social phenomena also followed a normal distribution. It is no exaggeration to say that this discovery opened up the social sciences to the application of the quantitative methods which had been the basis of natural sciences.
And the Central Limit Theorem
The Central Limit Theorem of probability states that if a variable is the sum of a large number of factors which are themselves random and independent of each other, the resulting distribution of that variable will be normal.
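The theorem is easy to watch in action. In this quick sketch (ours, with arbitrary sample sizes), each Uniform(0,1) draw has mean 1/2 and variance 1/12, so a sum of twelve draws has mean 6 and variance 1, and the bell-curve signature (about 68% of outcomes within one standard deviation) emerges even though no individual term is remotely normal.

```python
import random

random.seed(0)
n_terms, n_samples = 12, 20000

# Each summand is Uniform(0,1): mean 1/2, variance 1/12. The sum of twelve
# therefore has mean 6 and variance 1, and by the Central Limit Theorem its
# distribution is approximately normal.
sums = [sum(random.random() for _ in range(n_terms)) for _ in range(n_samples)]

mean = sum(sums) / n_samples
var = sum((s - mean) ** 2 for s in sums) / n_samples
assert abs(mean - 6.0) < 0.05
assert abs(var - 1.0) < 0.05

# The bell-curve signature: roughly 68% of outcomes lie within one
# standard deviation of the mean.
within_one_sd = sum(1 for s in sums if abs(s - mean) < var ** 0.5) / n_samples
assert abs(within_one_sd - 0.68) < 0.02
```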
And Power Laws
… power law … has markedly different characteristics from the distributions of classical statistics. In the normal distribution, the mean (average), median (middle) and mode (most frequently observed) outcomes are the same. These measures of central tendency will differ somewhat if the distribution is lognormal, but the basic pattern in which most observations cluster around the centre is common to all. But the properties of power law distributions are significantly different. In particular, extreme outcomes are much more frequent, and the average value of some power law distributions cannot be calculated.
Power laws do better than normal and lognormal distributions in picking up the extremes of market fluctuations, which is important for controlling risk and understanding long-run patterns of returns.
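The difference in tail weight is stark even in a two-line calculation. The exponent and thresholds below are our illustrative choices: under a power law with exponent 2, an outcome ten times the minimum is a routine 1-in-100 event, while a normally distributed variable lands ten standard deviations out with a probability so small it would effectively never be observed.

```python
import math

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto power-law distribution."""
    return (x_min / x) ** alpha

def normal_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Ten times the minimum is a 1-in-100 event under this power law...
assert abs(pareto_tail(10) - 0.01) < 1e-12
# ...while ten standard deviations is effectively "never" under the normal.
assert normal_tail(10) < 1e-20
```

And for exponents alpha <= 1 the Pareto mean integral diverges, which is the sense in which, as the extract says, the average value of some power law distributions cannot be calculated.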
Despite the apparent wide scope of power laws, and the aesthetic appeal of the underlying mathematics, this type of analysis has received far less attention from economists and statisticians than the traditional distributions of everyday statistics, such as the normal. Too much intellectual capital has been invested in assumptions which, although adequate most of the time, fail in situations which give rise to financial crises and other extreme, and hence important, outcomes.
14 Telling Stories Through Models
All models are wrong, but some are useful.—GEORGE BOX (Box (1979), p. 202)
Kay and King run through a short summary of some key models of economic relationships that are useful for making sense of the world. They start with the Prisoner’s Dilemma (one of my personal favourites for making sense of many of the issues in banking) as one of the most insightful and fruitful of these economic models.
The objective of this kind of modelling is to turn a mystery into a puzzle–to find a problem which is much simpler, which has a defined solution and yet bears sufficient resemblance to the substantive problem to yield insight and illuminate the best course of action. Following Savage, we describe these as ‘small world’ models. From its very beginnings, useful economic theory has generally been of this kind.
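For readers who have not met it, the Prisoner's Dilemma is small enough to state in a few lines. The sentences below (in years, so lower is better) are the standard textbook payoffs, not figures from the book: whatever the other prisoner does, confessing is individually better, yet mutual defection leaves both worse off than mutual silence.

```python
C, D = "cooperate", "defect"
years = {                    # (my move, their move) -> my prison sentence
    (C, C): 1,  (C, D): 10,
    (D, C): 0,  (D, D): 5,
}

def best_response(their_move):
    """The move minimising my sentence, given what the other prisoner does."""
    return min((C, D), key=lambda my_move: years[(my_move, their_move)])

# Whatever the other prisoner does, defecting is individually better...
assert best_response(C) == D and best_response(D) == D
# ...yet mutual defection leaves both worse off than mutual cooperation.
assert years[(D, D)] > years[(C, C)]
```

The model's power, as the authors say of small-world models generally, lies in how many messy real situations (arms races, price wars, bank runs) bear sufficient resemblance to this tiny puzzle to be illuminated by it.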
Ricardo’s theory of comparative advantage is another example of a model that helps us make sense of economic principles that are not always intuitively obvious
Ricardo’s model showed that trade can bring significant benefits whenever there are differences in capabilities either between individuals or between countries. A country’s absolute advantage in producing different goods and services was less important than its comparative advantage: in which sector or sectors was the country relatively more productive? The model does not enable us to forecast the volume of trade, but does help us understand why, in the absence of artificial impediment, trade has flourished between countries at very different stages of economic development. And how untutored intuition can mislead.
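Ricardo's original cloth-and-wine arithmetic makes the point directly: Portugal is absolutely more productive in both goods, yet relative (opportunity) costs still point each country to a different specialisation.

```python
# Labour required per unit of output (Ricardo's 1817 figures).
labour = {
    ("England",  "cloth"): 100, ("England",  "wine"): 120,
    ("Portugal", "cloth"): 90,  ("Portugal", "wine"): 80,
}

# Portugal holds an absolute advantage in both goods...
assert labour[("Portugal", "cloth")] < labour[("England", "cloth")]
assert labour[("Portugal", "wine")] < labour[("England", "wine")]

# ...but what matters is the opportunity cost of wine in units of cloth:
wine_cost = {c: labour[(c, "wine")] / labour[(c, "cloth")]
             for c in ("England", "Portugal")}
# 8/9 of a unit of cloth for Portugal versus 6/5 for England, so Portugal
# should specialise in wine, England in cloth, and both gain from trade.
assert wine_cost["Portugal"] < wine_cost["England"]
```

This is the sense in which "untutored intuition can mislead": the naive reading of the absolute numbers says Portugal should make everything itself.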
Akerlof’s theory of asymmetric information and adverse selection is another useful model
Even those without any formal training in economics understand that prices are set by the interplay between supply and demand. If goods remain unsold, then a fall in the price should stimulate demand and clear the market. But in some markets things don’t seem to work that way. One reason is that sellers may know far more about the quality of the goods they are selling than do the buyers, and buyers recognise that fact. In 1970, George Akerlof showed that in the presence of this ‘asymmetric information’ it may be difficult to find any price at which trade occurs.
Adverse selection may arise in many markets. Health insurance depends on pooling risks, but the healthy will seek to drop out and the less healthy will be anxious to obtain coverage. In practice, health insurance works well only when there is some compulsion to join. Like many good ideas, the problem of adverse selection may seem obvious when explained but the idea has proved immensely helpful in understanding a range of markets, and explaining why some of these markets do not function well.
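The unravelling logic behind Akerlof's result can be sketched in a few lines (with our illustrative numbers, not Akerlof's): uninformed buyers offer only the average value of the cars they expect to see, high-quality sellers withdraw at that price, and the offer chases average quality downwards until only lemons trade.

```python
values = {"good": 2000, "lemon": 1000}        # worth to an informed buyer
reservation = {"good": 1600, "lemon": 500}    # sellers' minimum acceptable prices

on_sale = {"good", "lemon"}
while True:
    # Buyers cannot tell quality apart, so they offer the average value
    # of the cars they expect to find on the market.
    offer = sum(values[q] for q in on_sale) / len(on_sale)
    still_selling = {q for q in on_sale if reservation[q] <= offer}
    if still_selling == on_sale:
        break                 # no one else withdraws: the market settles
    on_sale = still_selling

assert on_sale == {"lemon"}   # good cars are driven out: adverse selection
```

Here the pooled offer of 1500 is below the good-car owners' reservation price, so they leave, and trade survives only at the bottom of the quality range, which is why compulsion or signalling is needed to make markets like health insurance work.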
The Efficient Market hypothesis also has a role to play in helping us navigate the choices we make in the real world
The efficient market hypothesis is one of the most controversial models in economics–so controversial that in 2013 Eugene Fama, who developed the model, shared the Nobel Prize with Robert Shiller, who has worked to refute it. The essential insight is that publicly available information is incorporated in securities prices.
Both supporters and critics of the efficient market hypothesis appear to make the mistake of believing that such a model describes ‘the world as it really is’. The efficient market hypothesis is the archetype of a model which is illuminating without being ‘true’.
Kay and King note that it is no longer fashionable to tell a story with illustrative calculations in the manner of Smith and Ricardo. More formal mathematical expression is considered to be the mark of a good model and, for some economists at least, more complicated math gets extra marks.
In the early 1950s, an American, Kenneth Arrow, and a Frenchman, Gerard Debreu, used fixed point theorems (drawn from the latest advances in topology) to prove, under certain assumptions, the existence and efficiency of an equilibrium of a competitive market economy. But although their mathematics is complicated, the conclusions are not; the authors provided a clear statement of the conditions under which a decentralised economy could successfully match supplies and demands, and offered a further expression of the conditions under which that equilibrium might be in a certain sense efficient. For many people, Arrow and Debreu provided the formal mathematical underpinning to Smith’s narrative of the ‘invisible hand’.
One of the issues with mathematical models is that the assumed relationships in the models do not always align with the way that people actually make decisions. Friedman and Savage recognised this issue but argued it did not matter so long as people acted “as if” the modelling assumptions applied.
The ‘as if’ argument of the billiard table was further developed by Friedman, who asserted that it was inappropriate to regard the realism of assumptions as germane to the validity of a theory. Instead, he claimed, ‘the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions.’
The philosopher of science Daniel Hausman has described Friedman’s article as ‘the only article on methodology that a large number, perhaps a majority, of economists have ever read’; and for the authors of this book that was true for many years. Hausman quickly demolished Friedman’s argument, as many others have done, by pointing out that the premises of a theory are every bit as much predictions of that theory as deductions from those premises. Friedman’s article appeared in a brief period of intellectual history in which a version of Popperian falsificationism–the idea that a hypothesis acquires scientific status only if there is a possibility that it might be refuted–was in fashion.
The decisive rejection of this falsificationist view is encapsulated in what philosophers know today as the Duhem–Quine hypothesis: such refutation is rarely definitive, because any test requires a range of auxiliary assumptions, additional assumptions about the world, and it is always possible to argue that these assumptions have not been fulfilled.
15 Rationality and Communication
This chapter describes the way in which our ability as a species to communicate is fundamental to many of the things we have achieved. It serves, I think, to highlight why the “reference narrative”, or telling stories, plays such a large part in Kay and King’s proposed way of dealing with radical uncertainty.
Kay and King propose an alternative form of rationality based on communication between members of our species both at a point in time and passed down across generations. We judge decisions not by whether they optimise expected value but rather by the coherence and credibility of the narrative used to communicate and justify them. A decision is rational if the justification is coherent and credible. The accumulated body of knowledge developed in this way also contributes to a kind of evolutionary rationality, in which the most useful ideas and rules of thumb survive.
Coherence and credibility – the standards by which we judge all narratives – are the criteria by which we judge the quality of communication of the reasons for decisions. And in modern western culture we apply the test of rationality – consistency with logic and reason – to such communication. …. Our actions are the outcome of evolutionary rationality; communicative rationality is the means by which we explain these actions to others….
Humans are a eusocial species, achieving things which are far beyond the capacity of any individual. The economic importance of this eusociality cannot be over-emphasised.
This last quote (below) highlights the way in which our ability to communicate and cooperate is fundamental to our capacity to cope with radical uncertainty. Another observation (I don’t think Kay and King make this point directly) is that our ability to cooperate and communicate is also a factor in explaining the complexity and multiple entanglements of the radically uncertain world we have created.
Human intelligence is collective intelligence, and that is the source of the extraordinary human economic achievement. We can navigate radical uncertainty because we chart it together.
16 Challenging Narratives
‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’ ALFRED P. SLOAN
Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.
… the American psychologist Irving Janis popularised the term ‘groupthink’ for the process by which a group arrives at a bad decision because of the unwillingness or inability of its members to challenge the prevailing narrative.
Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.
the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.
Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.
The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.
robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.
Kay and King are not fans of risk maps
The nearest commonly used approximation to ‘Bayesian updating’ we have seen is the risk maps which it is almost obligatory to present to corporate boards. Typically these maps will consist of long lists of so-called ‘risk factors’, often accompanied by gradings of relative importance and illuminated with red or green traffic light signals. These documents follow a standard design, and are at best tenuously related to any statement of business strategy. The reports are typically received in near silence and the fact of their receipt minuted. The purpose is less to ensure that risks do not materialise than to provide evidence that the risks have been considered, or at least described–evidence which can be mobilised if these risks do materialise.
The better approach is to try to identify the small number of risks which might seriously derail the reference narrative, and to consider the contingencies which might be deployed to deal with them.
Part IV: Economics and Uncertainty
17 The World of Finance
This chapter explores the tension between the way risk is defined in the world of finance (i.e. an impersonal objective property of assets) and the way risk is experienced by the individuals who have to make risky choices. It returns to the theme that choices that may appear irrational from a finance perspective of risk are quite logical and rational from the perspective of the individuals making the choices.
Risk-averse individuals are those who are reluctant to move outside the comfort zone of their established reference narrative. They seek certainties in a world of radical uncertainty by trying to limit themselves to a small, stationary world ….
While the collapse of the East German state brought opportunities for many, others were disconcerted by the loss of security offered by the constraints of its repressive regime. Risk lovers such as Musk or Orwell, by contrast, are constantly in search of new reference narratives–and through that search change, for better and worse, the reference narratives of everyone else.
Risk is determined by the circumstances, and reference narrative, of individuals or businesses. Finance professionals … think differently. In their world, risk is an impersonal, objective property of assets. A major contribution to such thinking originated in the work of Harry Markowitz at the University of Chicago in the 1950s.
Kay and King note that the efficient portfolio model, the capital asset pricing model and the efficient market hypothesis employed in the world of finance illustrate how simple models can be used to illuminate complex problems. The critical insight of the portfolio approach is that risk is a property of a portfolio as a whole, and cannot be judged by simple addition of the risks associated with each element of that portfolio. However ….
We are glad we know about these small-world models and we think we are better investors for knowing about them. But we do not make the mistake of taking them too seriously, and we certainly do not believe that they describe ‘the world as it really is’. Both Markowitz and Savage were well aware that their theories applied only to such small worlds, but their warnings have been largely ignored.
Kay and King discuss how this style of thinking about risk (risk = asset price volatility) shaped the development of financial regulation. Nowhere, they argue, has the tension between different meanings of risk been more stark, and more damaging, than in financial regulation.
Both commercial bankers and regulators believed that while retail funding might suddenly dry up, wholesale funding would always be available at a price. This reasonable assumption turned out to be wrong. Northern Rock was felled by an off-model event.
The abject failure of models in the global financial crisis has not dented their popularity among regulators. European directives–known as Solvency II–have extended the use of similar models to the insurance sector, and a pension fund regime is likely to follow. But insurance companies rarely fail as a result of low-probability events described by risk models, but in consequence of off-model issues such as fraud or–as at Northern Rock–the realisation of narratives which had not been imagined by management or regulators.
They see similar potential for adverse outcomes in the application of this concept of risk to the technical valuation of assets and liabilities in defined benefit pension schemes
The regulatory regime seeks to reduce risk–in a world of radical uncertainty risk can never be eliminated–by prescribing a reference narrative so demanding and financially unattractive that no one will sensibly aspire to it. A combination of well-meaning but misguided regulation and the misuse of models has materially reduced the prospects of a secure retirement for a majority of the British population.
The general point they are making is that the recent experience of financial regulation illustrates the importance of avoiding the pretence of knowledge.
We do not know when the next crisis will come, nor what it will look like. We need simple, robust principles to guide us, not tens of thousands of pages of detailed rules which elevate the duty of compliance over the spirit of proper stewardship of other people’s money.
Kay and King next discuss the work that Robert Shiller has done on understanding the role of narratives in driving large and disruptive changes in economic behaviour. Shiller’s approach does not reconcile with theirs.
Shiller’s focus on narratives is rather one-sided. He uses the concept to explain behaviour which others have called ‘fads and fashions’. In other words, he sees narratives as a departure from ‘rational’ optimising behaviour and therefore as irrational and emotional, despite their importance in explaining behaviour. In his words, ‘among normal people, narratives are often somewhat dishonest and manipulative’ and ‘economic narratives thus tend to involve . . . actions that one might take for no better reason than hearing narratives of other people doing these things’.
But the importance of narratives stems not from a weakness in human behaviour but from the nature of decision-making in a world of radical uncertainty. It is true that in certain financial markets narratives are occasionally ‘dishonest and manipulative’, but normal people make honest use of narratives to understand their environment and guide decisions under radical uncertainty. A narrative is needed to answer the question ‘What is going on here?’
They discuss the rise of cryptocurrencies as another example of the way in which reference narratives operate. This is not a narrative that stands up to scrutiny in their view (nor mine), but it illustrates the way in which a narrative can act as a call to action.
as we write, the financial press is full of perhaps the thinnest story since tulips to give rise to a bubble–the imagined future takeover of the world monetary system by crypto-currencies. Like other popular fictions, the Bitcoin phenomenon combines several perennial narratives–in this case, a libertarian vision of a world free of state intervention, the power of a magic technology, and the mystery of ‘money creation’.
Securitisation is another example illustrating the tension between how risk is framed in the world of finance and how it plays out in the real world
In the 1980s, bond markets, once staid backwaters of the financial system, became the focus of an exciting new narrative based on securitisation. The idea was that lending institutions–banks, mortgage providers and finance companies–could package their loans into tradeable securities and sell them, mainly to other financial institutions. By doing so, they could supposedly offer more attractive combinations of risk and return than had previously been available in bond markets.
There were, and are, two possible rationales for such trade. One narrative saw securitisation as a mechanism enabling lenders to spread and diversify lending risks, thus lowering the costs of finance and–perhaps–allowing some of the benefits of these lower financing costs to be passed on to home buyers and small businesses. An alternative narrative was that securitisation provided the mechanism by which risks could be passed from those who understood or should have understood them–the original underwriters–to people who understood them less well.
Kay and King wrap up the chapter with a reflection on the limits of finance theory. They acknowledge the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama, and its promise of practical utility, but also note the ways in which finance theory has failed to live up to that promise.
We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.
Kay and King argue that what Buffett, Soros and Simons have in common is exceptional intelligence and a degree of humility that acknowledges the limits of their knowledge, but perhaps most strikingly
all ignore – are even contemptuous of – the corpus of finance theory based on portfolio theory, the capital asset pricing model and the efficient market hypothesis. Indeed that corpus of knowledge implies that they could not have succeeded as they have. These financial models emphasise points of which all investors should be aware – the benefits of diversification, the extent to which different assets offer genuine opportunities for diversification, and the degree to which information is incorporated in securities prices. But the lesson of experience is that there is no single approach to financial markets which makes money or explains ‘what is going on here’, no single narrative of ‘the financial world as it really is’. There is a multiplicity of valid approaches, and the appropriate tools, model-based or narrative, are specific to context and to the skills and judgement of the investor. We can indeed benefit from the insights of both Thales of Miletus and Harry Markowitz, and learn from both of the contradictory narratives of the world of finance propagated by Gene Fama and Bob Shiller. But we must also recognise the limits to the insights we derive from their small-world models.
There are those in the finance sector who create programs which purport to define strategies that would maximise risk-adjusted returns. But these programs do nothing of the kind. Radical uncertainty precludes optimising behaviour. In the world as it is, we cope rather than optimise. The numbers which were used in these calculations are invented. Or they are derived from historic data series and assume a non-existent stationarity in the world. Struggling to cope with a large world which they could only imperfectly understand, the proponents of these calculations invented a small world which gave them the satisfaction of clear-cut answers. And financial regulators claiming to monitor risk in the financial system did the same. It is understandable that people who are given a job which they cannot do find instead a more limited task which they can do.
18 Radical Uncertainty, Insurance and Investment
This chapter explores the implications of radical uncertainty for the management of insurance and investment risk.
Certainty is unattainable and the price of near certainty unaffordable.
The focus on extreme percentiles raises the more fundamental issue of the Viniar problem–extreme outcomes rarely come from 25 standard deviation observations, but from off-model events. The historic extremes of the UK and US stock markets were the falls of 1972–4 and 1929–33 respectively. These were not adverse drawings from some underlying stationary probability distribution. They occurred because in each period the narrative that the capitalist system was coming to an end gained currency, and not without justification.
The advance of big data means that this element of randomness will steadily diminish. Insurers can already obtain information through a device that monitors your personal driving behaviour, and the premium can mirror more and more exactly the losses which will result from that behaviour. As insurance becomes precisely tailored to the individual, and the element of randomness is reduced, it ceases to be insurance. As more data for medical diagnostics become available we will progressively know more and more about the health prospects of any individual. And as Alexa reports back to her employers, more and more data about everything become available. When risks become certainties they cease to be insurable.
But apocalyptic events of the kind which were feared at these times in Britain and the United States, but did not occur, did in fact occur in China, Germany, Russia and some other countries in the course of the twentieth century. There are no certainties in the world of business and finance. And such certainty as is available comes from achieving robustness and resilience through diversification rather than from committing to one ‘safe’ asset type. Risk is a personal experience, not a characteristic of an asset.
Kay and King are not advocating that we ignore the insights that corporate finance offers
In the previous chapter, we described the three pillars of modern finance theory – efficient portfolio theory, the capital asset pricing model and the efficient market hypothesis. As we suggested there, the rational investor in a world of radical uncertainty must know these models, but should not take them either too literally or too seriously. The central insight of the Markowitz portfolio frontier model is that risk is the product of a portfolio as a whole and is not the sum of the risks associated with the individual investments within it. The importance of this observation for practical investment strategies can hardly be overstated. And this insight remains valid whether the meaning attached to the concept of risk is, as we prefer, based on a reference narrative, corresponding to the dictionary’s definition as ‘failure to meet realistic expectations’, or the financial economist’s definition of risk as ‘variance of a probability distribution’.
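The Markowitz insight can be seen in the two-asset case, where portfolio variance is w₁²σ₁² + w₂²σ₂² + 2w₁w₂ρσ₁σ₂. A minimal sketch, using invented volatilities rather than anything from the book:

```python
def portfolio_sd(s1, s2, rho, w1=0.5):
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1.0 - w1
    variance = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    return variance ** 0.5

s = 0.20  # assume each asset has 20% volatility
for rho in (1.0, 0.5, 0.0, -1.0):
    print(f"correlation {rho:+.1f}: portfolio volatility {portfolio_sd(s, s, rho):.4f}")

# Only with perfect correlation is portfolio risk the weighted sum of the
# individual risks; any correlation below 1 delivers diversification.
assert abs(portfolio_sd(s, s, 1.0) - s) < 1e-12
assert portfolio_sd(s, s, 0.0) < s
assert portfolio_sd(s, s, -1.0) < 1e-6
```

The cross term is the whole story: portfolio risk depends on the covariances between holdings, which is why it cannot be read off from the individual investments one by one.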
Within portfolio theory, it is necessary to know the variance of the probability distribution of daily gains or losses on each of the securities in your portfolio, and also the covariances between these securities …
But radical uncertainty means that such calculations of correlations based on historic data sets represent a fool’s errand. In most cases we simply do not know the variance of the relevant probability distribution, or the covariances. Returns on investment are not random drawings from a known and stationary underlying process. Yet much financial analysis and much financial regulation are predicated on the assumption that they are.
Perhaps that is why, although we know many people who construct efficient portfolio frontiers for their clients, we know none who use that approach to manage their own personal finances.
Kay and King again emphasise that risk in their assessment is ultimately personal
Behind the efficient portfolio frontier and capital asset pricing model lies the idea that individuals make similar assessments of the underlying probability distribution. Since the model assumes that everyone in this small world interprets risk in the same way, differing only in their ‘risk appetite’, the proposition that higher risk implies higher reward and vice versa follows inexorably.
But risk means different things to different people. Risk for the government is very different from risk for the defence contractor, very different for someone saving for the deposit on a house than for someone seeking to provide for a secure retirement. Risk for an asset manager, whose reference narrative involves continued employment, is the risk of being fired for underperforming compared to his or her peers. And for a devotee of the capital asset pricing model, risk is the variance of short-term stock price movements. If your concept of risk is very different from that of the market as a whole, you can minimise your risk at other people’s expense. Broad diversification becomes ‘a free lunch’ reducing risk without cost.
19 (Mis)Understanding Macroeconomics
This chapter covers the rise of a branch of economics based on rational expectations and where, in their opinion, that has gone off track. Rational expectations is a response to a previous school of “Keynesian” economics which employed models that relied on simple stationary relationships between economic aggregates. That seemed like a reasonable assumption during the 1950s and 1960s when the world experienced a golden age of stability and growth. The rise of inflation and unemployment during the 1970s challenged the thesis that Keynesian demand management could eliminate economic instability.
The rational expectations revolution
It became clear that economic outcomes were shaped by expectations and that new models were required to capture this.
The drive to achieve this was led by Robert Lucas, whose 1976 ‘Lucas critique’ was the death knell of the econometric macroeconomic models of the previous decade. If policies influenced expectations, policy-makers could not rely on the stationarity of underlying economic processes. The analogy with physical relationships, such as the impact of wind on structures, did not hold.
An obvious means of responding to the prior neglect of expectations would have been to undertake empirical work on the beliefs about the future which consumers and those engaged in business and finance actually held, and the processes by which they established and changed such beliefs. But little such research was undertaken. The new macroeconomic theorists instead followed a different approach:
These theorists–Chicago was and remains a centre of their thinking–followed the dominant paradigm of the universal applicability of subjective probability. Assumptions about expectations were deductions about behaviour based on axiomatic rationality. The resulting theory of ‘rational expectations’ requires that the expectations of all agents–firms, households and governments–must be consistent not only with each other but with the model which purports to describe it.
Kay and King argue that models based on rational expectations did offer insights on some important issues. They are in a sense “useful parables” but …
they do not describe the world as it is and fail to help us understand depressions and financial crises…
And while the critics of the 1970s were right to direct more attention to how individual choices affect aggregate outcomes, ‘the secret of our success’ as humans is that we benefit from both individual and collective intelligence. Humans are social animals and there is more to the behaviour of the group than the aggregate of independent individual decisions. Expectations need to be studied at both individual and aggregate level.
Completeness and the grand auction
Ever since the eighteenth century, when Adam Ferguson described ‘spontaneous order’ and Adam Smith supposedly lauded the ‘invisible hand’, the notion that decentralised markets might allocate resources more efficiently than central planning had been a theme of economic analysis. In the nineteenth century, Leon Walras, a French economist working at the University of Lausanne, attempted to express in a system of equations the idea that the uncoordinated decisions of millions of people might produce aggregate outcomes that were not only coherent but efficient.
But Walrasian analysis only reached fruition when, as described in chapter 14, new and powerful mathematical tools were applied to economics by Kenneth Arrow and Gerard Debreu. For some devotees of laissez faire, this was the analysis they had been waiting for–a rigorous mathematical demonstration of the maxim that ‘you can’t buck the market’. Building on Walras, Arrow and Debreu envisaged a ‘grand auction’, to which consumers brought their demand curves, workers and resource owners their supply curves, and producers their technical capabilities. In this ‘grand auction’ the price mechanism secured an equilibrium which reconciled all these demands and supplies and in which no one could be better off without making someone else worse off–all possible mutually advantageous trades had been realised.
But in a radically uncertain world, markets are necessarily incomplete.
Arrow and Debreu recognised that they were describing an imaginary world akin to that of Through the Looking-Glass. And they interpreted that world as a rhetorical device, like those literary fictions, illustrative of propositions which might–or might not–be true in any real world.
… Arrow described what he and his colleagues had been attempting to do: ‘The immediate “common sense” answer to the question “what will an economy motivated by individual greed and controlled by a very large number of different agents look like?” is probably: There will be chaos . . . Quite a different answer has long been claimed true . . . In attempting to answer the question could it be true we learn a good deal about how it might not be true.’
This equivalence between complete markets and the axiomatic basis for probabilistic reasoning is no academic footnote. Many economists today are prepared to accept that markets are incomplete and yet hang on to the view that a complete set of subjective probabilities exists and people can be assumed to behave as if they were maximising their subjective expected utility. But these views are essentially incompatible. Savage’s rational decision-maker made a ‘grand decision’, contemporaneous with the ‘grand auction’. Savage’s world was also the world of Arrow and Debreu, and, like Arrow and Debreu, Savage was clear that the proposition that the models closely replicated real worlds was, in his own words, ‘utterly ridiculous’.
Making policy in a small world
Many of the followers of Lucas forgot that the purpose of building models is to use imagination so that we can tell plausible stories about the real world.
Economic predictions fail when models are disrupted by permanent shifts and temporary shocks. But since the shifts and the shocks are the product of forces which are unknowable, there is, sadly, nothing more that can usefully be said.
Radical uncertainty and non-stationarity go hand in hand. There is no stable structure of the world about which we could learn from past experience and use to extrapolate future behaviour. We live in a world of incomplete markets in which there are simply no price signals to guide us back to an efficient equilibrium. There are times when expectations have a life of their own. As a result, the models used by central banks perform quite well when nothing very much is happening and fail dramatically when something big occurs.
Engineering versus economics
If a theory ‘tested through successful use’ was available which could determine whether the stock market was overvalued and about to crash–of course there is no such theory–that knowledge would itself change the value of the stock market. That is the essence of the Lucas critique which we described above, and of the efficient market hypothesis. (As a matter of history, the stock market in late 1999 was overvalued and about to crash and a few months later did crash, but the problem of identifying when it would crash was altogether different from and harder than the problem of whether it would crash.)
Over forty years, the authors have watched the bright optimism of a new, rigorous approach to economics–an optimism which they shared–dissolve into the failures of prediction and analysis which were seen in the global financial crisis of 2007–08. And it is the pervasive nature of radical uncertainty which is the source of the problem.
20 The Use and Misuse of Models
This chapter starts with a list of models that predicted imminent disaster (Malthus on population growth, Jevons on the finite supply of coal, Marion King Hubbert on “peak oil”).
What is going on here? The continued attention paid to these forecasters, and the popularity of their writings–which seems to continue even after it is obvious that they were wrong–reflects the common human predilection for apocalyptic narratives.
Kay and King acknowledge the value of these ‘small world’ models in framing arguments but argue they are useless as forecasting tools.
Our experience, both of developing models ourselves and of observing the use of models in political and business contexts, is that models are rarely used as an input to the decision-making process; their purpose is to help justify an already determined course of action to corporate boards or government ministers or external regulators. Those who run financial institutions want to be reassured that their risks are controlled, and those who sponsor large transport projects want to ensure that such projects will go ahead.
Abuse of models
Having discussed a range of examples where they believe models are misused, Kay and King seek to enumerate the common deficiencies.
First, the modelling exercise applies a common template to disparate situations….
Second, the modelling exercises rely on filling in gaps in knowledge by inventing numbers, often in immense quantities….
Third, these exercises necessarily assume, almost always without justification, stationarity of the underlying processes….
Fourth, in the absence of stationarity, these modelling exercises have no means of accounting for uncertainty and there is no basis for the construction of probability distributions, confidence intervals, or the use of tools of statistical inference. The opinions of different people about the values of a parameter, or the same consultant’s different estimates of the value of that parameter, do not constitute either a frequency or a probability distribution. With no defensible basis for describing such assessments of uncertainty, there is no mechanism for valuing the options associated with a project….
Fifth, because of the cost and complexity of the models, their deployment often prevents meaningful public consultation and debate…
All of these bogus models, as fragile as a balsa-wood structure in a wind tunnel, have a common failing. They begin by considering how you would make a decision if you had complete and perfect knowledge of the world, now and in the future. But very few of the relevant data are known. The solution? Make them all up.
Using models appropriately
We draw a number of lessons for the use of models in business and government. First, deploy simple models to identify the key factors that influence an assessment … The useful purpose of modelling is to find ‘small world’ problems which illuminate part of the large world of radical uncertainty.
Second, having identified the parameters which are likely to make a significant difference to an assessment, undertake research to obtain evidence on the value of these parameters….
Third, simple models provide a flexibility which makes it much easier to explore the effects of modifications and alternatives…
Fourth, under radical uncertainty, the options conferred by a policy may be crucial to its evaluation. Faced with a choice as to which of London’s two major airports, Gatwick or Heathrow, should be chosen for expansion, recognition that the topography of Gatwick allows piecemeal adaptation of the development of facilities in the light of uncertain future demand, while that of Heathrow does not, should be an important factor in the choice…
In the end, a model is useful only if the person using it understands that it does not represent ‘the world as it really is’, but is a tool for exploring ways in which a decision might or might not go wrong.
Part V: Living with Uncertainty
21 Practical Knowledge
Kay and King quote with approval Alfred Marshall’s description of economics (Principles of Economics, 1890) as “a study of mankind in the ordinary business of life”. They contrast this with the view espoused by the Chicago School of Economics.
Gary Becker, who assumed intellectual leadership of the Chicago School on Milton Friedman’s retirement, took a different view: ‘The combined assumptions of maximizing behavior, market equilibrium, and stable preferences, used relentlessly and unflinchingly, form the heart of the economic approach.’
The assumptions of optimisation, equilibrium and stationarity are useful as part of small-world approaches to constructing models that throw light on a problem – the method used fruitfully by Smith, Ricardo, Tucker and Akerlof. But such models provide only partial insight into human behaviour in large worlds. Becker, however, had grander ambitions and indeed received a Nobel Prize ‘for having extended the domain of microeconomic analysis to a wide range of human behaviour and interaction’. But interdisciplinary cooperation is not the same as economic imperialism. In writing this book, we have consciously drawn from a wide corpus of knowledge and scholarship, and learnt much from doing so. People who know only economics do not know much about economics.
Models are tools
Models should not be judged by the sophistication of the mathematics–in itself neither good nor bad–but by the insights which that model provides into a particular problem that we are trying to solve.
Modern economics has lost a great deal in seeking to imitate Planck rather than Keynes. Axiomatic rationality is not evolutionary rationality, and critiques of the kind launched by behavioural economists fail to acknowledge the importance of the human ability to interpret problems in context, a skill which computers are still very far from achieving.
The map is not the territory
The relevant criticism of models in macroeconomics and finance is not that they are ‘wrong’ but that they have not proved useful in macroeconomics and have proved misleading in finance.
When we provide such a critique, we often hear another mantra to which many economists subscribe: ‘It takes a model to beat a model’. On the contrary, we believe that it takes facts and observations to beat a model.
Economists and radical uncertainty
A mystery cannot be solved as a puzzle can. Reasoning through mysteries requires us to acknowledge ambiguities and to resolve them sufficiently to clarify our thinking. But even to frame a problem requires skill and judgement. That is one of the most important contributions that economists can make. A mystery must first be framed, well or badly, to aid people in reaching the decisions they have to make in conditions of radical uncertainty. Framing begins by identifying critical factors and assembling relevant data. It involves applying experience of how these factors have interacted in the past, and making an assessment of how they might interact in the future. The process of decision-making requires an understanding of the broader context within which a specific problem must be tackled, and most judgements will need to be communicated to others and will require the assistance of others in their implementation.
Kay and King argue that the role of the economist, like that of other social scientists, is to frame the economic and social issues that political and business leaders face when confronted by radical uncertainty.
The role of the practical economist, like that of the firefighter, the doctor, the dentist and the engineer, is to be a problem-solver. These other competent professionals–foxes, not hedgehogs–do not begin from a set of axioms or an overarching theory. A major part of the reason medicine was of little practical use before the twentieth century is that its practitioners did begin from theories which dominated medical thinking but contributed little to real understanding–most notoriously, the Greek physician Galen’s notion, from the second century AD, that illness was caused by imbalances between the humours. Modern scientific medicine has been built through the piecemeal accretion of knowledge about details, making full use of inductive, deductive and abductive reasoning, a process which is still contributing to an understanding of human anatomy and physiology.
Our knowledge of context and our ability to interpret it has been acquired over thousands of years. These capabilities are encoded in our genes, taught to us by our parents and teachers, enshrined in the social norms of our culture. It is probably for historical reasons, the legacy of those nineteenth-century utilitarian social reformers, that economists are focused more on equilibrium and optimisation than on evolution and adaptation. Indeed, economists have often assumed that these processes are essentially the same–that the maximisers would drive out the non-maximisers. And the underlying mathematics of adaptation and optimisation do have some similarities. But adaptation is not the same as optimisation. Adaptation is above all about survival. Survival involves finding not the best solution, but one that is good enough. And for survival the tails of distributions matter a lot. Especially, we think, in understanding financial crises.
22 Adapting to Radical Uncertainty
We should not give up modelling or the use of mathematics in making decisions; rather we should recognise what models can and cannot do to illuminate a problem, and accept that there is no standardised spreadsheet framework which can answer the question ‘What is going on here?’ Evidence-based policy has become policy-based evidence, eroding the confidence of the public in ‘evidence’. Political debates increasingly reflect an unseemly media battle between unsubstantiated assertions rather than a dispassionate argument of pros and cons, or a genuine attempt to find out ‘what is going on here’.
Humans are social animals
Humans are successful because, to a far greater extent than any other species, they communicate with each other. That makes it odd that economics should have put so much emphasis on optimising behaviour by independent individuals. This approach seems to omit not just an important part of what makes us human, but a central cause of our economic achievements.
In business, in finance, and in public policy, judgements are reached and decisions made in organisations–companies, banks, departments and agencies. All these institutions have multiple goals and the individuals and groups within them will typically have multiple goals of their own. Economists emphasise the role of incentives, and construct models to describe the problem of decentralising the objectives of an organisation to the individuals who will have to carry them out. But these models do not describe ‘the world as it really is’. The presumption that financial incentives are everywhere primary, and that the complex issues involved in delivering public services efficiently and fairly can be dealt with by the sticks of targets and the carrots of bonuses, has created as many problems as it has solved.
The importance of narrative
We live in a world of radical uncertainty in which our understanding of the present is imperfect, our understanding of the future even more limited, and in which no one person or organisation can hold the range of information required to arrive at the ‘best explanation’. Narrative reasoning is the most powerful mechanism available for organising our imperfect knowledge.
The financial institutions which failed in 2008, such as Lehman Brothers, combined pernicious narrative with misconceived incentive structures.
Collective intelligence and communicative rationality
The false assumption that good process leads to good outcome is pervasive in public sector organisations, where good often means lengthy, involves many people with little responsibility for the result, and is imbued with ill-defined concepts of fairness centred around issues of representativeness and statistical discrimination. Process has become the policy, with deleterious effects on outcomes.
23 Embracing Uncertainty
Risk and uncertainty
The distinction between problems such as the unpredictable outcome of games of chance, which could be represented probabilistically, and radical uncertainty, which could not, was understood by both Keynes and Knight. After the Second World War Friedman denied the existence of such a distinction and the two concepts were elided not just in the field of economics, but in a much wider arena in which decision theory and Bayesian reasoning held sway.
We could attach probabilities to every conceivable event. Whatever might happen in the future could be spelt out as a set of mutually exclusive alternatives, and probabilities could be attached to each of those events. The implication was that risk was something that could be priced. Once it had been priced, risk was tamed. In effect, uncertainty was suppressed by markets.
In finance theory, risk is defined as a spread around a known average return. The lower the spread the less the risk. Since people are assumed to be ‘risk-averse’, they must be offered an incentive to hold assets that are risky in this sense. And so there is a trade-off between risk and return. Risk-averse individuals will be content to assume risk provided there is adequate compensation. Risk is no longer feared because it is priced and has been accepted in return for its price….
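The finance-theory definition of risk sketched in this extract can be made concrete with a few lines of Python. All the numbers below are hypothetical, invented purely to illustrate the idea of risk as spread around a mean return and the risk–return trade-off; they are not drawn from the book.

```python
import statistics

# Hypothetical annual returns for two assets (illustrative numbers only)
safe_asset = [0.02, 0.03, 0.02, 0.03, 0.02]
risky_asset = [0.15, -0.10, 0.20, -0.05, 0.12]

def summarise(returns):
    mean = statistics.mean(returns)
    spread = statistics.stdev(returns)  # 'risk' in the finance-theory sense
    return mean, spread

for name, returns in [("safe", safe_asset), ("risky", risky_asset)]:
    mean, spread = summarise(returns)
    print(f"{name}: mean return {mean:.3f}, spread (risk) {spread:.3f}")

# The risky asset offers a higher mean return than the safe one, but with a
# far larger spread. A risk-averse investor holds it only if that extra mean
# return is adequate compensation for the extra spread -- the trade-off the
# extract describes. In this small world risk is fully captured by the
# spread; Kay and King's point is that real-world risk is not.
```

The point of the sketch is only that, within the model, a single summary statistic (the spread) stands in for everything an investor might fear.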
But this small world is not ours or yours. In everyday language, the words ‘risk’ and ‘uncertainty’ are used in different senses and much confusion has been generated both by the multiplicity of interpretations and by the attribution of technical meanings which do not correspond to the ordinary usage of these terms. We have defined risk as the failure of a reference narrative to be realised.
Robustness and resilience
Good strategies for a radically uncertain world avoid the pretence of knowledge–the models and bogus quantification which require users to make up things they do not know and could not know.
There is a strand of thought which associates the extraordinary innovative capacity of the US economy with this reluctance of its governments to provide the social insurance against risk–in the broadest sense–which is taken for granted in Denmark, and most European countries.
Uncertainty and evolution
Entrepreneurship and radical uncertainty
Knight’s insight–that it is radical uncertainty which gives opportunity for entrepreneurship–is fundamental to an understanding of social, technological and economic progress. Through evolutionary processes–biological, institutional, political, market-driven–entrepreneurship drives us forward. Not just in business but in scholarship, practical knowledge, the arts, and many other areas of life.
Appendix: Axioms of Choice under Uncertainty
… radical uncertainty precluded the formation of subjective probabilities for all possible states of the world, a prerequisite for the theory that people make decisions in a world of uncertainty by maximising their expected utility.
In this appendix we provide a brief explanation of why the economics profession was wrong to accept too readily the axioms of choice under uncertainty in order to justify the assumption of expected utility maximisation.
There are evident similarities between the Hicks–Samuelson axioms of consumer choice under certainty and the von Neumann–Morgenstern axioms (as modified by Savage) of behaviour under uncertainty. The description of these approaches as ‘utility maximisation’ and ‘expected utility maximisation’ further emphasises the resemblance. But the analysis of consumer behaviour is distinct from the analysis of decision-making under uncertainty, and the apparent equivalence is the product of the long tradition in economics of using the word ‘utility’ and the modern emphasis on the term ‘rational’ not only to refer to a number of different things but also in ways which may not necessarily reflect ordinary usage.
In his famous treatise, Savage showed that, provided people conformed to certain axioms which he described as constituting ‘rational behaviour’ under uncertainty, there existed numbers that could be interpreted as subjective probabilities, and that ‘rational behaviour’ was equivalent to maximising expected utility calculated using those subjective probabilities.
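The mechanics of the result Savage proved can be illustrated with a minimal sketch of a ‘small world’ decision: a set of mutually exclusive states, subjective probabilities over them, and a utility for each act–state outcome. The states, acts, probabilities and payoffs below are all made up for illustration; the point is only that, once such numbers exist, ‘rational behaviour’ in Savage’s sense reduces to an expected-utility calculation.

```python
# Subjective probabilities over an exhaustive set of mutually exclusive states
# (hypothetical numbers; in Savage's framework these are derived from the
# decision-maker's preferences, not given directly)
subjective_probs = {"boom": 0.3, "steady": 0.5, "slump": 0.2}

# Utility of each act in each state (made-up payoffs)
acts = {
    "invest":    {"boom": 100, "steady": 40, "slump": -50},
    "hold_cash": {"boom": 10,  "steady": 10, "slump": 10},
}

def expected_utility(payoffs, probs):
    """Probability-weighted sum of utilities across states."""
    return sum(probs[state] * payoffs[state] for state in probs)

for act, payoffs in acts.items():
    print(act, expected_utility(payoffs, subjective_probs))

# The 'rational' act, in the axiomatic sense, is the expected-utility maximiser
best = max(acts, key=lambda a: expected_utility(acts[a], subjective_probs))
print("chosen act:", best)
```

Kay and King’s objection is not to this arithmetic but to its preconditions: under radical uncertainty the full list of states cannot be written down, so the subjective probabilities required by the calculation do not exist.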
The extension of the completeness axiom to lotteries with objective probabilities is itself problematic. Faced with the choice between a large gamble on the National Lottery and a smaller one on the spin of the wheel at Atlantic City, our answer is that we do not have the slightest interest in either proposition.
But if there are reasons for reservations in extending the axiom of completeness to lotteries characterised by objectively defined and quantifiable risks, they are multiplied many times when the axiom is applied to situations characterised by subjective probabilities. If there are possibilities of which we cannot conceive then we cannot attach probabilities to them and completeness is simply inconsistent with radical uncertainty.
The axiom of independence is perhaps the most interesting because it gave rise to controversy almost from the outset.