Red flags in financial services

Nice podcast from Odd Lots discussing the Wirecard fraud. Lots of insights, but my favourite is the warning to be wary when you see a financial services company exhibit high growth while maintaining profitability.

There may be exceptions to the rule but that is not how the financial services market normally works.

podcasts.apple.com/au/podcast/odd-lots/id1056200096

Tony — From the Outside

What Michael Lewis loves about experts

This link takes you to the last of a seven-episode podcast series Michael Lewis has done on the role of experts.

podcasts.apple.com/au/podcast/against-the-rules-with-michael-lewis/id1455379351

The whole series is highly recommended but I especially like this quote in which he attempts to sum up the series

“Life eventually humbles us all. What I love about experts, the best of them anyway, is that they get to their humility early, they have to, it’s part of who they are, it’s necessary for what they are doing. They set out to get to the bottom of something that has no bottom, and so they are reminded, constantly, of what they don’t know. They move through the world focussed not on what they know but on what they might find out.”

In that spirit, let me know what I am missing.

Tony – From the Outside

Constructive dissent

I am currently reading “Thinking in Bets” by Annie Duke. It is early days but I suspect that this is a book that has some useful things to say about creating the kinds of corporate culture that truly reflect the values espoused in corporate mission statements. It is a truth that actions speak louder than words and she cites a practice employed by the American Foreign Service Association, which has not one but four awards for employees who have exhibited behaviours that demonstrate initiative, integrity, intellectual courage and constructive dissent.

The quote below comes from the AFSA website setting out the criteria employed for these awards

Criteria for the Dissent Awards

The awards are for Foreign Service employees who have “exhibited extraordinary accomplishment involving initiative, integrity, intellectual courage and constructive dissent”. The awards publicly recognize individuals who have demonstrated the intellectual courage to challenge the system from within, to question the status quo and take a stand, no matter the sensitivity of the issue or the consequences of their actions. The issue does not have to be related to foreign policy. It can involve a management issue, consular policy, or, in the case of the recently established F. Allen “Tex” Harris Award, the willingness of a Foreign Service Specialist to take an unpopular stand, to go out on a limb, or to stick his/her neck out in a way that involves some risk.

https://www.afsa.org/constructive-dissent-awards

When safety proves dangerous …

… is the title of a post on the Farnam Street blog that provides a useful reminder of the problem of “risk compensation”; i.e. the way in which measures designed to make us safer can be a perverse prompt for us to take more risk because we feel safer. I want to explore how these ideas apply to bank capital requirements but will first outline the basic ideas covered by Farnam Street.

we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation.

The classic example is car safety measures such as improved tyres, ABS braking systems, seat belts and crumple zones designed to protect the driver and passengers. These have helped reduce fatality rates for the people inside the car but have not necessarily reduced accident rates, given that drivers tend to drive faster and more aggressively because they can. Pedestrians are also at greater risk.
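
The thermostat analogy can be turned into a toy model (my sketch, not Farnam Street's; all numbers are made up for illustration): if drivers hold perceived risk at a target level, an exogenous safety improvement ends up being consumed as extra speed.

```python
# Toy model of risk homeostasis: drivers adjust speed until perceived
# risk returns to their preferred "temperature". All numbers are
# illustrative assumptions, not estimates from the post.

def equilibrium_speed(target_risk, hazard_per_kmh, safety_factor):
    """Speed at which perceived risk (hazard * safety_factor * speed)
    equals the driver's target risk level."""
    return target_risk / (hazard_per_kmh * safety_factor)

target_risk = 10.0    # desired risk "temperature" (arbitrary units)
hazard_per_kmh = 0.1  # baseline perceived risk per km/h of speed

before = equilibrium_speed(target_risk, hazard_per_kmh, safety_factor=1.0)
# ABS, seat belts etc. cut perceived risk per km/h by 20%
after = equilibrium_speed(target_risk, hazard_per_kmh, safety_factor=0.8)

print(f"speed before safety measure: {before:.0f} km/h")   # 100 km/h
print(f"speed after 20% risk reduction: {after:.0f} km/h")  # 125 km/h
# Perceived risk is back at target in both cases: the safety gain is
# consumed as extra speed, and pedestrians face the higher speed too.
```

The point of the sketch is that the driver's equilibrium, not the engineer's intention, determines the realised level of risk.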

Farnam Street suggests the following lessons for dealing with the problem of risk compensation:

  1. Safety measures are likely to be more effective if they are less visible
  2. Measures designed to promote prudent behaviour are likely to be more effective than measures which make risky behaviour safer
  3. Recognise that sometimes it is better to do nothing if the actions we take just lead to offsetting risk behaviour somewhere else
  4. If we do make changes then recognise that we may have to put in place other rules to ensure the offsetting risk compensating behaviour is controlled
  5. Finally (and a variation on #3), recognise that making people feel less safe can actually lead to safer behaviour.

If you are interested in this topic then I can also recommend Greg Ip’s book “Foolproof” which offers a good overview of the problem of risk compensation.

Applying these principles to bank capital requirements

The one area where I would take issue with the Farnam Street post is where it argues that bailouts and other protective mechanisms contributed to the scale of the 2008 financial crisis because they led banks to take greater risks. There is no question that the scale of the crisis was amplified by the risks that banks took, but it is less obvious to me that the bailouts created this problem.

The bailouts were a response to the problem that banks were too big to fail but I can’t see how they created this problem, especially given that the build-up of risk preceded the bailouts. Bailouts were a response to the fact that the conventional bankruptcy and restructure process employed to deal with the failure of non-financial firms simply did not work for financial firms.

It is often asserted that bankers took risks because they expected that they would be bailed out; i.e. that banks deliberately and consciously took risk on the basis that they would be bailed out. I can’t speak for banks as a whole but I have never witnessed that belief in the four decades that I worked in the Australian banking system. Never attribute to malice what can be equally explained by mistaken beliefs. I did see bankers placing excessive faith in the economic capital models that told them they could safely operate with reduced levels of capital. That illusion of knowledge and control is however a different problem altogether, largely to do with not properly understanding the distinction between risk and uncertainty (see here and here).

If I am right, that would suggest that making banks hold more capital might initially make them safer but might also lead to banks looking for ways to take more risk. This is a key reason why I think the answer to safer banks is not just making them hold higher and higher levels of common equity. More common equity is definitely a big part of the answer but one of the real innovations of Basel 3 was the development of new forms of loss absorbing capital that allow banks to be recapitalised by bail-in rather than bail-out.

If you want to go down the common-equity-is-the-only-solution path then it will be important to ensure that Farnam Street lesson #4 above is respected; i.e. bank supervisors will need to ensure that banks do not simply end up taking risks in places that regulation or supervision does not cover. This is not a set-and-forget strategy based on the idea that increased “skin in the game” will automatically lead to better risk management.

Based on my experience, the risk of common equity ownership being diluted by the conversion of this “bail-in” capital is a far more effective constraint on risk taking than simply requiring banks to hold very large amounts of common equity. I think the Australian banking system has this balance about right. The Common Equity Tier 1 requirement is calibrated to a level intended to make banks “Unquestionably Strong”. Stress testing suggests that this level of capital is likely to be more than sufficient for well managed banks operating with sensible risk appetites but banks (the larger ones in particular) are also required to maintain a supplementary pool of capital that can be converted to common equity should it be required. The risk that this might be converted into a new pool of dilutive equity is a powerful incentive to not push the boundaries of risk appetite.
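
The dilution mechanics can be illustrated with a back-of-the-envelope calculation (all figures, including the conversion price, are hypothetical and not drawn from any actual bank or APRA requirement):

```python
# Hypothetical bail-in conversion: illustrative numbers only.
existing_shares = 1_000_000_000  # shares on issue before conversion
bail_in_capital = 5_000_000_000  # $5bn of convertible capital instruments
conversion_price = 2.50          # $ per share, typically depressed in a stress event

new_shares = bail_in_capital / conversion_price
dilution = new_shares / (existing_shares + new_shares)

print(f"new shares issued: {new_shares:,.0f}")
print(f"existing holders diluted to {1 - dilution:.0%} of the bank")
# Existing shareholders go from 100% to roughly one third of the bank,
# which is the incentive not to let the bank get anywhere near the
# conversion trigger in the first place.
```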

Tony – From the Outside

Navigating a radically uncertain world

The distinction between risk and uncertainty is a long running area of interest for me so I have enjoyed reading John Kay and Mervyn King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. My initial post on the book offered an overview of the content and a subsequent post explored Kay and King’s analysis of why the world is prone to radical uncertainty.

This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.

No surprise that the answer from their perspective is not the pursuit of maximum expected value based on a priori assumptions of what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that there are some problems that can be solved this way. Games of chance where you get repeated opportunities to play the odds are one example, but Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.

It is not just that these models do not offer any useful insight into these bigger world choices. They argue, convincingly I think, that these types of precise quantitative models can also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced measures of bank capital requirements introduced under Basel II tended to encourage banks to take (and bank supervisors to approve) more leverage.

Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see for example these posts on Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane, which discuss the contributions these authors have made to the debate.

However, there were two things I found especially interesting in their analysis.

  • One was the argument that the “biases” catalogued by behavioural finance were not necessarily irrational when applied to a radically uncertain world.
  • The other was the emphasis they place on the idea of employing abductive reasoning and reference narratives to help navigate this radically uncertain future.

Behavioural Finance

Kay and King argue that some of the behaviours that behavioural finance deems to be irrational or biased might be better interpreted as sensible rules of thumbs that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify “nudging” people to what behavioural finance deems to be rational.

Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…

…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.

I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times like behavioural finance as a whole was a wrong turn but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.

Abductive Reasoning and Reference Narratives

The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.

If we go back to first principles it could be argued that the test of rationality is that the decisions we make are based on reasonable beliefs about the world and internal consistency. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.

The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.

Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.

Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

Reference Narratives

If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.

Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.

Good and Bad Reference Narratives

Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.

Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.

Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:

…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.

Using Reference Narratives

Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience, but they caution against trying too hard to assign probabilities to scenarios. This, I think, is a point well made and something that I have covered in other posts (see here and here).

Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..

Scenario planning is a way of ordering thoughts about the future, not of predicting it.

The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.
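
The distinction between probability-weighted forecasting and scenario planning as a narrative exercise can be made concrete with a small sketch (the scenarios, plans and payoffs are invented for illustration; this is not Kay and King's example):

```python
# Scenario analysis without probabilities: evaluate each plan against
# each named scenario and look at the full range of outcomes, rather
# than collapsing them into a probability-weighted expected value.
# All scenarios and payoffs are hypothetical.

scenarios = {
    "benign": {"expand": 120, "hold": 80},
    "recession": {"expand": -60, "hold": 40},
    "credit crisis": {"expand": -200, "hold": -20},
}

for plan in ("expand", "hold"):
    outcomes = [payoffs[plan] for payoffs in scenarios.values()]
    print(f"{plan}: outcomes {outcomes}, worst case {min(outcomes)}")

# A probability-weighted ranking would require weights nobody can
# defend; the worst-case column alone shows that "expand" is fragile
# to the crisis narrative while "hold" is robust across all three.
```

The output is an ordering of thoughts about the future, not a prediction: the scenarios earn their keep by exposing fragility, not by being assigned odds.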

Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future

We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.

Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.

‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’

Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation) quoted in the introduction to Ch 16: Challenging Narratives

These extracts from their book nicely capture the essence of their argument

Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.

the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.

Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.


The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.

Summing Up

Overall a thought-provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.

I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.

Tony – From The Outside

Why we fail to prepare for disasters

Tim Harford (The Undercover Economist) offers a short and readable account here of some of the reasons why, faced with clear risks, we still fail to act. We can see the problem, typically one of many, but don’t do enough to manage or mitigate the risk. New Orleans’ experience with severe weather events features prominently, as does (not surprisingly) COVID-19.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

Why we fail to prepare for disasters; Tim Harford (The Undercover Economist)

Another big part of the problem is that the cost of being fully prepared can be more than we are willing to pay, especially when there is continuous pressure to find cost economies in the here and now.

Serious scenarios are useful, but … no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

So the key points I took from his post:

  • Sometimes it can be something genuinely new and unexpected (i.e. Black Swan events) but risks we are well aware of can be equally damaging
  • Part of the problem is that we are social animals and take our cues from what the rest of the herd is doing (“normalcy bias” or “negative panic”)
  • Even where we understand the statistics and know that someone will be impacted, we tend to assume it will be someone else or someone else’s family (“optimism bias”)
  • We are especially bad at understanding risks that have an exponential driver (“exponential myopia”)
  • We are also quite good at finding reasons to justify ignoring risks we want to ignore or otherwise find inconvenient (“wishful thinking”)
  • Last, but far from least, efficiency is the enemy of resilience.
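
The “exponential myopia” point is easy to demonstrate with some simple arithmetic (the numbers are purely illustrative): a caseload of 100 doubling every three days passes 400,000 within five weeks, even though the first fortnight looks unremarkable.

```python
# 100 cases doubling every 3 days: linear intuition vs exponential reality.
cases = 100
doubling_days = 3
for day in (0, 12, 24, 36):
    print(f"day {day:2d}: {cases * 2 ** (day // doubling_days):,} cases")
# Day 12 still looks like a containable problem (1,600 cases);
# day 36 is a different world (409,600 cases).
```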

We need to remember that most of the factors listed above can also be useful in many other contexts (arguably most of the time). A tendency not to panic can be pretty useful and optimism has helped dreamers and ordinary people achieve many great things that have benefited the herd. Efficiency as a rule seems like a good thing to strive for.

Harford does not offer any easy answers but his post touches on issues that I have also been considering in Kay and King’s book titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. I have done a couple of posts on that book already (here and here) and am working on a final one that focuses on Chapters 8-16 which set out their ideas for how we navigate a world prone to radical uncertainty.

Tony – From the Outside

The why of Radical Uncertainty

A recent post offered an overview of a book by John Kay and Mervyn King titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. It is a rich topic and this post covers the underlying drivers that tend to result in radically uncertain outcomes.

Kay and King nominate “reflexivity” as a key driver of radical uncertainty

The sociologist Robert K. Merton identified reflexivity as a distinctive property of social systems–the system itself is influenced by our beliefs about it. The idea of reflexivity was developed by the Austrian émigré philosopher Karl Popper and became central to the thinking of Popper’s student, the highly successful hedge fund manager George Soros. And it would form part of the approach to macroeconomics of the Chicago economist Robert Lucas and his followers … although their perspective on the problem and its solution would be very different.

Reflexivity undermines stationarity. This was the essence of ‘Goodhart’s Law’–any business or government policy which assumed stationarity of social and economic relationships was likely to fail because its implementation would alter the behaviour of those affected and therefore destroy that stationarity.

Kay and King, Chapter 3: “Radical Uncertainty is Everywhere”

Radical uncertainty also features in Richard Bookstaber’s book “The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction”. Bookstaber identifies four broad phenomena he argues are endemic to financial crises

Emergent phenomena.
“When systemwide dynamics arise unexpectedly out of the activities of individuals in a way that is not simply an aggregation of that behavior, the result is known as emergence”.

Non-ergodicity.
“An ergodic process … is one that does not vary with time or experience.
Our world is not ergodic—yet economists treat it as though it is.”

Radical uncertainty.
“Emergent phenomena and non-ergodic processes combine to create outcomes that do not fit inside defined probability distributions.”

Computational irreducibility.
“There is no formula that allows us to fast-forward to find out what the result will be. The world cannot be solved; it has to be lived.”

Bookstaber, Chapter 2: Being Human
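
Non-ergodicity can be demonstrated with a standard multiplicative-growth simulation (a textbook-style illustration under my own assumptions, not code from Bookstaber): the average across many parallel gambles looks attractive while the typical individual path shrinks over time.

```python
import random

# A 50/50 gamble that multiplies wealth by 1.5 (win) or 0.6 (lose).
# Ensemble expectation per round: 0.5*1.5 + 0.5*0.6 = 1.05, so the bet
# "looks" attractive on average. Time-average growth per round is
# sqrt(1.5 * 0.6) ~= 0.95, so any individual path tends to shrink.

random.seed(42)

def final_wealth(rounds):
    w = 1.0
    for _ in range(rounds):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

paths = [final_wealth(100) for _ in range(10_000)]
ensemble_avg = sum(paths) / len(paths)
losers = sum(1 for w in paths if w < 1.0) / len(paths)

print(f"sample ensemble average: {ensemble_avg:.1f}")
print(f"share of paths below starting wealth: {losers:.0%}")
# The sample average is propped up by a handful of extremely lucky
# paths; most paths lose money, so the experience of a single agent
# living through time cannot be read off the ensemble average.
```

This is the sense in which outcomes "do not fit inside defined probability distributions" for the agent: the distribution across the ensemble and the distribution along a single path are different objects.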

If you want to delve into the detail of why the world can be radically uncertain then Bookstaber arguably offers the more detailed account; albeit one couched in technical language like emergent phenomena, ergodicity and computational irreducibility. In Chapter 10 he lays out the ways in which an agent-based modelling approach to the problem of radical uncertainty would need to specify the complexity of the system in a structured way that takes account of the amount of information required to describe the system and the connectedness of its components. Bookstaber also offers examples of emergent phenomena in seemingly simple systems (e.g. John Conway’s “Game of Life”) which give rise to surprisingly complex outcomes.

I am not sure if either book makes this point explicitly but I think there is also an underlying theme in which the models that provide the illusion of control over an uncertain future create an incentive to “manage” risk in ways that increase the odds of bad outcomes by leaving insufficient resilience in the system. That seems to be the clear implication of Kay and King’s discussion of the limits of finance theory (Chapter 17: The World of Finance). They acknowledge the value of the intellectual rigour built on the contributions of Harry Markowitz, William Sharpe and Eugene Fama but highlight the ways in which it has failed to live up to its promise.

We note two very different demonstrations of that failure. One is that the models used by regulators and financial institutions, directly derived from academic research in finance, not only failed to prevent the 2007–08 crisis but actively contributed to it. Another is to look at the achievements of the most successful investors of the era – Warren Buffett, George Soros and Jim Simons. Each has built fortunes of tens of billions of dollars. They are representative of three very different styles of investing.

Kay and King, Chapter 17 The World of Finance

I plan to do one more post exploring the ways in which we navigate a world of radical uncertainty.

Tony (From the Outside)

Probabilities disguising uncertainty – Part II

This behavior makes one blind to all sorts of things. 

The first blind spot … is that it treats uncertain events – items of unknowable incidence and severity – as if they were risks that could be estimated probabilistically. 

Epsilon Theory ; “Lack of Imagination” 14 March 2020

One of my recent posts drew attention to an article by John Kay promoting a book he has co-written with Mervyn King on the topic of “radical uncertainty”. Epsilon Theory offers another useful perspective on the ways in which extending probabilistic thinking beyond its zone of validity can cause us to miss the big picture.

The Epsilon Theory post focusses on the COVID-19 fallout currently playing out but is also worth reading for the broader challenges it offers anyone trying to use models and probabilities to manage real world outcomes …

Tony

The “skin in the game” argument for more common equity

One of the traditional arguments for higher common equity requirements is that it increases the shareholders’ “skin in the game” and thereby creates an incentive to be more diligent and conservative in managing risk.

This principle is true up to a point but I believe more common equity mostly generates this desirable risk management incentive when the extra skin in the game (aka capital) is addressing a problem of too little capital. It is much less obvious that more capital promotes more conservative risk appetite for a bank that already has a strong capital position.

In the “too little” capital scenarios, shareholders confronted with a material risk of failure, but limited downside (because they have only a small amount of capital invested), have an incentive to take large risks with uncertain payoffs. That is clearly undesirable but it is not a fair description of the risk reward payoff confronting bank shareholders who have already committed substantial increased common equity in response to the new benchmarks of what it takes to be deemed a strong bank.

The European Systemic Risk Board published some interesting research on this question in a paper titled “Has regulatory capital made banks safer? Skin in the game vs moral hazard”. I have copied the abstract below, which summarises the key conclusions.

Abstract: The paper evaluates the impact of macroprudential capital regulation on bank capital, risk taking behaviour, and solvency. The identification relies on the policy change in bank-level capital requirements across systemically important banks in Europe. A one percentage point hike in capital requirements leads to an average CET1 capital increase of 13 percent and no evidence of reduction in assets. The increase in capital comes at a cost. The paper documents robust evidence on the existence of substitution effects toward riskier assets. The risk taking behavior is predominantly driven by large and less profitable banks: large wholesale funded banks show less risk taking, and large banks relying on internal ratings based approach successfully disguise their risk taking. In terms of overall impact on solvency, the higher risk taking crowds-out the positive effect of increased capital.

I have only skimmed the paper thus far and have reservations regarding how they measure increased risk. As I understand it, the increased riskiness the analysis measures is based on increases in average risk weights. It was not clear how the analysis distinguished changes in portfolio riskiness from changes in the risk weight measure. That said, the overall conclusions seem intuitively right.

Tony