Nice podcast from Odd Lots discussing the Wirecard fraud. Lots of insights, but my favourite is the advice to be wary when you see a financial services company exhibit high growth while maintaining profitability.
There may be exceptions to the rule but that is not how the financial services market normally works.
Shout out to Tim Harford for this introduction to the study of how, in his words, ignorance can be deliberately produced. The technical term “agnotology” is, I suspect, unlikely to catch on but the underlying message is one worth understanding. At a minimum it is a handy addition to your Scrabble dictionary.
The article was originally published in March 2017 but I only came across it recently via this podcast interview Harford did with Cardiff Garcia on “The New Bazaar”. The context in 2017 was the successful campaign for the US presidency that Donald Trump ran during 2016 with a bit of Brexit thrown in but this is a challenge that is not going away anytime soon.
Harford notes that it is tempting to think that the answer to the challenge posed by what has come to be known as a post truth society lies in a better process to establish the facts
The instinctive reaction from those of us who still care about the truth — journalists, academics and many ordinary citizens — has been to double down on the facts.
He affirms the need to have some agreement on how we distinguish facts from opinions and assertions but he cautions that this is unlikely to solve the problem. He cites the tobacco industry response to the early evidence that smoking causes cancer to illustrate why facts alone are not enough.
A good place to start is by delving into why facts alone are not enough – a few extracts from the article hopefully capture the main lessons
Doubt is usually not hard to produce, and facts alone aren’t enough to dispel it. We should have learnt this lesson already; now we’re going to have to learn it all over again…
Tempting as it is to fight lies with facts, there are three problems with that strategy…
The first is that a simple untruth can beat off a complicated set of facts simply by being easier to understand and remember. When doubt prevails, people will often end up believing whatever sticks in the mind…
There’s a second reason why facts don’t seem to have the traction that one might hope. Facts can be boring. The world is full of things to pay attention to, from reality TV to your argumentative children, from a friend’s Instagram to a tax bill. Why bother with anything so tedious as facts?…
In the war of ideas, boredom and distraction are powerful weapons. The endgame of these distractions is that matters of vital importance become too boring to bother reporting…
There’s a final problem with trying to persuade people by giving them facts: the truth can feel threatening, and threatening people tends to backfire. “People respond in the opposite direction,” says Jason Reifler, a political scientist at Exeter University. This “backfire effect” is now the focus of several researchers, including Reifler and his colleague Brendan Nyhan of Dartmouth…
The problem here is that while we like to think of ourselves as rational beings, our rationality didn’t just evolve to solve practical problems, such as building an elephant trap, but to navigate social situations. We need to keep others on our side. Practical reasoning is often less about figuring out what’s true, and more about staying in the right tribe…
We see what we want to see — and we reject the facts that threaten our sense of who we are…
When we reach the conclusion that we want to reach, we’re engaging in “motivated reasoning”…
Even in a debate polluted by motivated reasoning, one might expect that facts will help. Not necessarily: when we hear facts that challenge us, we selectively amplify what suits us, ignore what does not, and reinterpret whatever we can. More facts mean more grist to the motivated reasoning mill. The French dramatist Molière once wrote: “A learned fool is more foolish than an ignorant one.” Modern social science agrees…
When people are seeking the truth, facts help. But when people are selectively reasoning about their political identity, the facts can backfire.
So what are we to do?
Harford cites a study that explores the value of scientific curiosity
What Kahan and his colleagues found, to their surprise, was that while politically motivated reasoning trumps scientific knowledge, “politically motivated reasoning . . . appears to be negated by science curiosity”. Scientifically literate people, remember, were more likely to be polarised in their answers to politically charged scientific questions. But scientifically curious people were not. Curiosity brought people together in a way that mere facts did not. The researchers muse that curious people have an extra reason to seek out the facts: “To experience the pleasure of contemplating surprising insights into how the world works.”
It is of course entirely possible that Tim Harford’s assessment simply appeals to my own bias. I will admit that one of the things I always looked for when hiring, or working with, people was curiosity. Such people are surprisingly rare but (IMHO) worth their weight in gold. An intellectually curious mind makes up for a lot of other areas where a person might not be perfect in terms of skills or experience. The general point (I think) also ties to the often cited problem that people with lots of knowledge can sometimes be prone to not being especially street smart. Nassim Taleb makes this argument in nearly everything he writes.
So Tim Harford might not be offering the entire answer but I think his article is worth reading on two counts
Firstly as a cautionary tale against expecting that all debates and disputes can be resolved by simply establishing the “facts”
Secondly as a reminder of the power of a curious mind and the value of the never-ending search for “what am I missing?”
Matt Levine’s Money Stuff column (Bloomberg Opinion) had a great piece today which, while nominally focussed on the enduring question of “Looking for Tether’s Money”, is worth reading for the neat summary he offers of how finance turns risky assets into safe assets. The column is behind a paywall but you can access it for free by signing up for his daily newsletter.
This particular piece of the magic of finance is of course achieved by dividing up claims on risky assets into tranches with differing levels of seniority. In Matt’s words…
Most of what happens in finance is some form of this move. And the reason for that is basically that some people want to own safe things, because they have money that they don’t want to lose, and other people want to own risky things, because they have money that they want to turn into more money. If you have something that is moderately risky, someone will buy it, but if you slice it into things that are super safe and things that are super risky, more people might buy them. Financial theory suggests that this is impossible but virtually all of financial practice disagrees.
Money Stuff, Matt Levine Bloomberg, 7 October 2021
Matt also offers a neat description of how this works in banking
A bank makes a bunch of loans in exchange for senior claims on businesses, houses, etc. Then it pools those loans together on its balance sheet and issues a bunch of different claims on them. The most senior claims, classically, are “bank deposits”; the most junior claims are “equity” or “capital.” Some people want to own a bank; they think that First Bank of X is good at running its business and will grow its assets and improve its margins and its stock will be worth more in the future, so they buy equity (shares of stock) of the bank. Other people, though, just want to keep their money safe; they put their deposits in the First Bank of X because they are confident that a dollar deposited in an account there will always be worth a dollar.
The fundamental reason for this confidence is that bank deposits are senior claims (deposits) on a pool of senior claims (loans) on a diversified set of good assets (businesses, houses). (In modern banking there are other reasons — deposit insurance, etc. — but this is the fundamental reason.) But notice that this is magic: At one end of the process you have risky businesses, at the other end of the process you have perfectly safe dollars. Again, this is due in part to deposit insurance and regulation and lenders of last resort, but it is due mainly to the magic of composing senior claims on senior claims. You use seniority to turn risky things into safe things
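The “magic” of seniority that Matt describes can be sketched with a toy loss waterfall (my illustration, not from the column; the balance sheet numbers are made up): losses on the loan pool hit the junior equity claim first, and deposits only lose once the equity is fully wiped out.

```python
# Toy illustration (hypothetical numbers): seniority as a loss waterfall.
# A bank funds a 100-unit loan pool with 90 of deposits (senior claims)
# and 10 of equity (junior claims). Losses on the pool hit equity first;
# deposits lose only once equity is exhausted.

def waterfall(pool_loss, deposits=90.0, equity=10.0):
    """Return (deposit_loss, equity_loss) for a given loss on the loan pool."""
    equity_loss = min(pool_loss, equity)          # junior claim absorbs first
    deposit_loss = max(pool_loss - equity, 0.0)   # senior claim loses only after
    return deposit_loss, equity_loss

for loss in [0, 5, 10, 15]:
    d, e = waterfall(loss)
    print(f"pool loss {loss:>2} -> equity loses {e:>4.1f}, deposits lose {d:>4.1f}")
```

Under these assumed numbers, deposits are untouched for any pool loss up to 10, which is the sense in which seniority makes them “information insensitive”: depositors do not need to know much about the loan book until losses threaten to exceed the equity buffer.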
He then applies these principles to the alternative financial world that has been created around crypto assets to explore how the same factors drive both the need/demand for stablecoins and the ways in which crypto finance can meet the demand for safe assets (well “safer” at least).
The one part of his explanation I would take issue with is that he could have delved deeper into the question of whether crypto users require stablecoins to exhibit the same level of risk free exchangeability that we expect of bank deposits in the conventional financial world.
Matt writes…
The people who live in Bitcoin world are people like anyone else. Some of them (quite a lot of them by all accounts) want lots of risk: They are there to gamble; their goal is to increase their money as much as possible. Bitcoin is volatile, but levered Bitcoin is even more volatile, and volatility is what they want.
Others want no risk. They want to put their money into a thing worth a dollar, and be sure that no matter what they’ll get their dollar back. But they don’t want to do that in a bank account or whatever, because they want their dollar to live in crypto world. What they want is a “stablecoin”: A thing that lives on the blockchain, is easily exchangeable for Bitcoin (or other crypto assets) using the tools and exchanges and brokerages and processes of crypto world, but is always worth a dollar
The label “stable” is a relative term so it is not obvious to me that people operating in the crypto financial asset world all necessarily want the absolute certainty of a coin that always trades at par value to the underlying fiat currency. Maybe they do, but maybe some are happy with something that is stable enough to do the job of allowing them to make the exchanges they want to make in risky crypto assets. Certainly they already face other costs, like gas fees, when they trade, so maybe something that trades within an acceptable range of par value is good enough?
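To make the “stable enough” point concrete, here is a back-of-the-envelope comparison (all numbers are hypothetical, chosen purely for illustration): if the fixed costs a user already pays per trade dwarf the cost of the coin sitting slightly away from par, then a tolerable band around par may do the job.

```python
# Hypothetical numbers for illustration only: compare the cost of a
# stablecoin's deviation from par with the fixed trading costs (e.g. gas)
# a crypto user already pays on each trade.

trade_size = 10_000.0     # dollars being exchanged (assumed)
gas_fee = 30.0            # assumed fixed network fee per trade
peg_deviation = 0.002     # coin trading 0.2% away from par (assumed)

cost_from_peg = trade_size * peg_deviation   # dollar cost of the peg slippage
cost_from_gas = gas_fee                      # dollar cost of the fixed fee

print(f"cost of peg slippage: ${cost_from_peg:.2f}")
print(f"cost of gas:          ${cost_from_gas:.2f}")
# If the fixed fee already exceeds the peg slippage, a coin that stays
# within a tolerable band of par may be "stable enough" for trading.
```

None of this settles what promise a stablecoin issuer should be held to; it just illustrates why, for some users, a coin that trades within a band of par could be an acceptable trade-off.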
What it comes down to is first defining exactly what kind of promise the stablecoin backer is making before we start down the path of defining exactly how that promise should be regulated. I do think that the future of stablecoins is likely to be more regulated and that is likely to be a net positive outcome. The term “stablecoin” however encompasses a wide variety of structures and intended uses. The right kind of regulation will be designed with these differences in mind. That said, some of the stablecoin issuers have not done themselves any favours in the loose ways in which they have defined their promise.
Matt’s column is well worth reading if you can access it but the brief outline above flags some of the key ideas and the issues that I took away. The ways in which seniority in the loss hierarchy creates safety (or what Gary Gorton refers to as “information insensitivity”) is I think the key insight. I frequently encounter papers and articles discussing the role of bank deposits as the primary form of money in developed economies. These nearly always mention prudential regulation, supervision and deposit insurance but the role of deposit preference is often overlooked. For anyone looking to dig a bit deeper, I did a post here offering an Australian perspective on how this works.
Nic Carter and Linda Jeng have produced a useful paper titled “DeFi Protocol Risks: the Paradox of DeFi” that explores the risks that DeFi will need to address and navigate if it is to deliver on the promises that they believe it can. There is of course plenty of scepticism about the potential for blockchain and DeFi to change the future of finance (including from me). What makes this paper interesting is that it is written by two people involved in trying to make the systems work as opposed to simply throwing rocks from the sidelines.
Linda Jeng has a regulatory background but is currently the Global Head of Policy at Transparent Financial Systems. Nic is a General Partner at a seed-stage venture capital firm that invests in blockchain related businesses. The paper they have written will contribute a chapter to a book being edited by Bill Coen (former Secretary General of the Basel Committee on Banking Supervision) and Diane Maurice to be titled “Regtech, Suptech and Beyond: Innovation and Technology in Financial Services” (RiskBooks).
Linda and Nic conceptually bucket DeFi risks into five categories:
interconnections with the traditional financial system,
operational risks stemming from underlying blockchains,
smart contract-based vulnerabilities,
other governance and regulatory risks, and
scalability challenges.
… and map out the relationships in this schematic
DeFi protocols: Map of interconnected risks
Source: DeFi Protocol Risks: The Paradox of DeFi by Nic Carter and Linda Jeng (Figure 1, Page 7)
Conclusion: “No Free Lunch”
The paper concludes with a long-standing principle firmly entrenched in the traditional financial world – there is “no free lunch”. Risk can be transformed but it is very hard to eliminate completely. Expressed another way, there is an inherent trade-off in any system between efficiency and resilience.
Many of the things that make DeFi low cost and innovative also create operational risk and other challenges. Smart contracts sound cool, but when you frame them as “automated, hard-to-intervene contracts” it is easy to see how they can also amplify risks. Scalability is identified as an especially hard problem if you are not willing to compromise on the principles that underpinned the original DeFi vision.
The paper is worth a read but if you are time poor then you can also read a short version via this post on Linda Jeng’s blog. Izabella Kaminska (FT Alphaville) also wrote about the paper here.
Anyone with more than a casual interest in business will be familiar with the increased focus on Environmental, Social and Governance (ESG) issues. There are sound arguments being made on both sides of the debate but I will admit upfront that I approach the topic with a somewhat ESG positive bias. Given my bias, it is all the more important to pay attention to what the sceptics are calling out rather than looking for affirmation amongst the true believers.
A post by Aswath Damodaran titled “Sounding good or Doing good? A Skeptical Look at ESG” is one of the better contributions to the ESG debate that I have encountered. I discussed one of his earlier contributions to the debate here and it is clear that he is not a fan of ESG. I am still working through his arguments but I like the analytical framework he employs and the way in which he supports his arguments with evidence.
I intend to do a couple of posts digging down into the ESG debate using Damodaran’s post and a few other sources but want to start by laying out his arguments with some very limited comments.
Damodaran starts by framing ESG as part of a tradition of business ideas that have tended to prove to be more noise than substance, describing the ESG “sales pitch” as follows
“Companies that improve their social goodness standing will not only become more profitable and valuable over time, we are told, but they will also advance society’s best interests, thus resolving one of the fundamental conflicts of private enterprise, while also enriching investors”
There is no doubt that ESG, like many other business ideas, is prone to being over-hyped. There is room to question whether this is a fair description of the ESG movement as a whole. My gut feel is that the “sales pitch” version does not represent those ESG advocates who genuinely believe ESG can address problems in the way the market currently operates, but it will be more productive to focus on the specific weaknesses that Damodaran discusses.
Damodaran starts with the problem of measurement
“Any attempts to measure environment and social goodness face two challenges.
– The first is that much of social impact is qualitative, and developing a numerical value for that impact is difficult to do.
– The second is even trickier, which is that there is little consensus on what social impacts to measure, and the weights to assign to them.”
Assuming the measurement issues can be resolved, the second problem is identifying exactly how incorporating ESG factors into the business model or strategy contributes to improving the value of a company. Damodaran uses the following generic model of value drivers to explore this question
Figure 1: The Drivers of Value
Using this framework, Damodaran identifies two ways in which a company can derive benefits from incorporating ESG principles into its business strategy
Goodness is rewarded – i.e. companies behave in a socially responsible way because it creates positive outcomes for their business
Badness is punished – i.e. companies behave in a socially responsible way because bad behaviour is punished
Damodaran also identifies a third scenario in which “The bad guys win”
“In this scenario, bad companies mouth platitudes about social responsibility and environmental consciousness without taking any real action, but customers buy their products and services, either because they are cheaper or because of convenience, employees continue to work for them because they can earn more at these companies or have no options, and investors buy their shares because they deliver higher profits. As a result, bad companies may score low on corporate responsibility scales, but they will score high on profitability and stock price performance.”
Damodaran argues that the evidence supports the following conclusions:
1) A weak link to profitability
“There are meta studies (summaries of all other studies) that summarize hundreds of ESG research papers, and find a small positive link between ESG and profitability, but one that is very sensitive to how profits are measured and over what period, leading one of these studies to conclude that “citizens looking for solutions from any quarter to cure society’s pressing ills ought not appeal to financial returns alone to mobilize corporate involvement”. Breaking down ESG into its component parts, some studies find that environment (E) offered the strongest positive link to performance and social (S) the weakest, with governance (G) falling in the middle.”
2) A stronger link to funding costs
“Studies of “sin” stocks, i.e., companies involved in businesses such as producing alcohol, tobacco, and gaming, find that these stocks are less commonly held by institutions, and that they face higher costs for funding (from equity and debt). The evidence for this is strongest in sectors like tobacco (starting in the 1990s) and fossil fuels (especially in the last decade), but these findings come with a troubling catch. While these companies face higher costs, and have lower value, investors in these companies will generate higher returns from holding these stocks.”
3) Some evidence that ESG focussed companies do reduce their risk of failure or exposure to disaster risk
“An alternate reason why companies would want to be “good” is that “bad” companies are exposed to disaster risks, where a combination of missteps by the company, luck, and a failure to build in enough protective controls (because they cost too much) can cause a disaster, either in human or financial terms. That disaster can not only cause substantial losses for the company, but the collateral reputation damage created can have long term consequences. One study created a value-weighted portfolio of controversial firms that had a history of violating ESG rules, and reported negative excess returns of 3.5% on this portfolio, even after controlling for risk, industry, and company characteristics. The conclusion in this study was that these lower excess returns are evidence that being socially irresponsible is costly for firms, and that markets do not fully incorporate the consequences of bad corporate behavior. The push back from skeptics is that not all firms that behave badly get embroiled in controversy, and it is possible that looking at just firms that are controversial creates a selection bias that explains the negative returns.”
Damodaran sums up his argument
“There is a weak link between ESG and operating performance (growth and profitability), and while some firms benefit from being good, many do not. Telling firms that being socially responsible will deliver higher growth, profits and value is false advertising. The evidence is stronger that bad firms get punished, either with higher funding costs or with a greater incidence of disasters and shocks. ESG advocates are on much stronger ground telling companies not to be bad, than telling companies to be good. In short, expensive gestures by publicly traded companies to make themselves look “good” are futile, both in terms of improving performance and delivering returns.”
There is a lot more to say on this topic. The evidence that certain types of companies do get punished for failing to be socially responsible is especially interesting. I see a fair degree of cynicism applied to the ESG stance adopted by the Australian banks but I suspect they are a good example of the type of company that will in fact benefit from making real investments in socially responsible business strategies.
That I should be asking the question is even more curious given the years I spent working with economic capital, but there was something in the ECB position that I was not comfortable with. There is nothing particularly wrong with the ways in which the ECB envisages an economic perspective adding value to a bank’s ICAAP. The problem (for me), I came to realise, is more the lack of emphasis on recognising the fundamental limitations of economic models. In short, my concern is that the detailed focus on risk potentially comes at the expense of an equally useful consideration of the ways in which a bank is subject to radical uncertainty.
The rest of this post offers an overview of what the ECB survey observed and some thoughts on the value of explicitly incorporating radical uncertainty into an ICAAP.
The ECB report sample set
The ECB report, based on a survey of 37 significant institutions it supervises, assesses the extent to which these organisations were complying (as at April 2019) with ECB expectations for how the ICAAP should be constructed and executed. The selected sample focuses on the larger (and presumably more sophisticated) banks, including all globally systemically important banks supervised by the ECB. I am straying outside my area of expertise (Australian bank capital management) in this post but there is always something to learn from considering another perspective.
The ECB assessment on ICAAP practices
The ECB notes that progress has been made in some areas of the ICAAP. In particular: all banks in the survey have risk identification processes in place; they produce summary documents (“Capital Adequacy Statements” in ECB parlance) that enable bank management (not just the technical specialists) to engage with and take responsibility for the capital strength of their bank; and the sample banks incorporate stress testing into their capital planning process.
The ECB believes however that there is still a lot of room for improvement. The general area of concern is that the banks it supervises are still not paying sufficient attention to the question of business continuity. The ECB cites three key areas as being particularly in need of improvement if the ICAAPs are to play their assigned role in effectively contributing to a bank’s continuity:
Data quality
The application of the “Economic Perspective” in the ICAAP
Stress testing
The value of building the ICAAP on sound data and testing the outcomes of the process under a variety of severe stress scenarios is I think uncontentious.
The value the economic perspective contributes is less black and white. Like many things in life, the challenge is to get the balance right. My perspective is that economic models are quite useful but they are far from a complete answer and dangerous when they create an illusion of knowledge, certainty and control.
“Under this perspective, the institution’s assessment is expected to cover the full universe of risks that may have a material impact on its capital position from an economic perspective. In order to capture the undisguised economic situation, this perspective is not based on accounting or regulatory provisions. Rather, it should take into account economic value considerations for all economically relevant aspects, including assets, liabilities and risks. …. The institution is expected to manage economic risks and assess them as part of its stress-testing framework and its monitoring and management of capital adequacy”
ECB Guide to the internal capital adequacy assessment process (ICAAP) – Principles, November 2018 (Paragraph 49 / pages 18-19)
So far so good – the key points seem (to me) to be quite fair as statements of principle.
The ECB sees value in looking beyond the accounting and regulatory measures that drive the reported capital ratios (the “normative perspective” in ECB terminology) and wants banks to consider “the full universe of risks that may have a material impact on its capital position”. The ECB Report also emphasises the importance of thinking about capital from a “business continuity” perspective and cites the “… unjustified inclusions of certain capital components (e.g. minority interests, Additional Tier 1 … or Tier 2 … instruments) … which can inflate the internal capital figures” as evidence of banks failing to meet this expectation. Again a fair point in my view.
These are all worthy objectives but I wonder
firstly about the capacity of economic capital models to reliably deliver the kinds of insights the ECB expects and
secondly whether there are more cost effective ways to achieve similar outcomes.
The value of a different perspective
As a statement of principle, bringing a different perspective to bear clearly has value. The examples that the ECB cites for ways in which the economic perspective can inform and enhance the normative perspective are all perfectly valid and potentially useful. My concern is that the ECB seems to be pursuing an ideal state in which an ICAAP can, with sufficient commitment and resources, achieve a degree of knowledge that enables a bank to control its future.
Business continuity is ultimately founded on a recognition that there are limits to what we can know about the future and I side with the risk philosophy that no amount of analysis will fundamentally change this.
The ECB’s economic perspective does not necessarily capture radical uncertainty
I have touched on the general topic of uncertainty and what it means for the ICAAP a couple of times in this blog. The ECB report mentions “uncertainty” twice; once in the context of assessing climate change risk
Given the uncertainty surrounding the timing of climate change and its negative consequences, as well as the potentially far-reaching impact in breadth and magnitude along several transmission channels via which climate-related risks may impact banks’ capital adequacy, it is rather concerning that almost one-third of the banks has not even considered these risks in their risk identification processes at all.
Page 39
… and then in the context of making allowances for data quality
However, … in an internal deep dive on risk quantification in 2019, half of the risk quantifications showed material deficiencies. This finding is exacerbated by the data quality issues generally observed and moreover by the fact that one-half of the banks does not systematically ensure that the uncertainty surrounding the accuracy of risk quantifications (model risk) is appropriately addressed by an increased level of conservatism.
Page 54
There is no question that banks should be expected to demonstrate they are thinking about climate change and making allowances for model risk, along with a host of other plausible sources of adverse outcomes. It is a surprise that any relatively large and sophisticated bank might be found wanting in the ways these risks are being assessed, and the ECB is right to call that out.
However, it is equally surprising (for me at least) that the ECB did not seem to see value in systematically exploring the extent to which the ICAAPs of the banks it supervises deal with the potential for radical uncertainty.
Business continuity is far more likely if banks can also demonstrate that they recognise the limits of what they can know about the future and actively plan to deal with being surprised by the unexpected. In short one of the key ICAAP practices I would be looking for is evidence that banks have explicitly made allowances for the potential for their capital plan to have to navigate and absorb “unknown unknowns”.
I have covered some of the ideas in the book in previous posts (here and here) but have now had the chance to read the book in full and can recommend it. I have included more detailed notes on the book here but this post offers a short introduction to some of the key ideas.
Kay and King cover a lot of ground but, simply put, their book is about
“… how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.”
One of the things that makes the book interesting is that they were once true believers in decision making models based on rational economic agents seeking to maximise or optimise expected value.
As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses, and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy-makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.
The limits on their ability to optimise were represented by constraints: the relationship between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ description of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to those problems.
Kay and King are not saying that these models are useless. They continue to see some value in the utility maximisation model but have come to believe that it is not the complete answer that many economists, finance academics and politicians came to believe.
Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action.
They argue that the approach works up to a point but fails to deal with decisions that are in the domain of radical uncertainty
But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.
There are many factors that explain the current state of affairs but a key inflexion point in Kay and King’s account can be found in what they label “A Forgotten Dispute” (Chapter 5) between Frank Knight and John Maynard Keynes on one side and Frank Ramsey and Bruno de Finetti on the other, over the distinction between risk and uncertainty. Knight and Keynes argued that probability is an objective concept confined to problems with a defined and knowable frequency distribution. Ramsey argued that “subjective probability” is equally valid and applied the mathematics developed for frequency-based probabilities to these subjective probabilities.
“Economists (used to) distinguish risk, by which they meant unknowns which could be described with probabilities, from uncertainty, which could not… over the last century economists have attempted to elide that historic distinction between risk and uncertainty, and to apply probabilities to every instance of our imperfect knowledge of the future.”
Keynes and Knight lost the debate
Ramsey and de Finetti won, and Keynes and Knight lost, that historic battle of ideas over the nature of uncertainty. The result was that the concept of radical uncertainty virtually disappeared from the mainstream of economics for more than half a century. The use of subjective probabilities, and the associated mathematics, seemed to turn the mysteries of radical uncertainty into puzzles with calculable solutions.
Ramsey and de Finetti laid the foundations for economists to expand the application of probability based thinking and decision making. Milton Friedman picked up the baton and ran with it.
There is a lot more to the book than interesting anecdotes from the history of economic ideas. The subject matter is rich and it crosses over topics covered previously on this blog, including:
There are also overlaps with a book by Richard Bookstaber titled “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. I am yet to review this book but have some detailed notes here.
One quibble with the book is that I think their critique of the Bayesian method is a bit harsh. I understand their desire to push back on the idea that Bayes solves the problem of using probability to understand uncertainty, but at times it reads as though Bayes has no value at all. Read “The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne for an alternative perspective.
Bayes may not help with mysteries but its application in puzzles should not be undervalued. I don’t entirely agree with their perspective on behavioural finance either.
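To illustrate the kind of well-posed ‘puzzle’ where Bayes’ rule earns its keep, here is a minimal sketch (my example, not the book’s) of the classic diagnostic-test calculation; the prior, sensitivity and false-positive rate are hypothetical numbers.

```python
# A well-defined 'puzzle': update the probability of a disease after a
# positive test result using Bayes' rule. All rates are hypothetical.

def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.9,
                            false_positive_rate=0.05)
print(round(posterior, 3))  # roughly 0.154: still low despite the positive test
```

Here every input is a knowable frequency, so the answer is calculable and genuinely useful; the Kay and King critique bites when those inputs have to be invented for situations where no such frequencies exist.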
I want to come back to the topics of risk and uncertainty in a future post but it will take time to process all of the overlapping pieces. In the interim, I hope you found the overview above useful.
‘In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.’
Barack Obama, commenting on the decision whether to attack a target that evidence suggested could be Osama Bin Laden
This quote is drawn from an article that John Kay published on his website under the title “The point of probabilities”. The point he is making is similar to one touched on in a Bank Underground post that I discussed recently on this blog.