Small banks …

This post by Cetier on the RBNZ Financial Stability Report poses an interesting question about the future of small banks. He notes that the big banks seem to be doing fine but that small NZ banks are struggling to cover their cost of capital. This disparity between big and small banks also seems to be a feature of the Australian banking system. It also looks like big banks in the USA are getting bigger at the expense of the small banks.

There is a perennial question of whether small banks need some support (possibly in the form of less onerous regulation) so that they can offer a source of competition to the larger banks. This is a policy choice that the USA has very deliberately made, but it has been argued that it is one of the factors that contributed to the recent spate of bank failures.

This is part of a larger conversation about the tension between competition and financial stability. Marc Rubinstein did a good post on this question which I covered here.

I don’t have any answers but the question is one that I think will get more focus as the US considers its response to the most recent case studies in why banks fail. I don’t have enough expertise on the US banking system to offer an informed opinion but the Bank Policy Institute does offer an alternative perspective, arguing that the failures were more a question of bad management and lax supervision than of regulation per se. I can say that the risks these US banks were running did seem to clearly violate the principles of Banking 101.

Let me know what I am missing …

Tony – From the Outside

Bank risk culture

APRA today released the results of a survey of how Australian ADIs (i.e. “banks” for anyone reading outside Australia) believe they are doing on the question of risk culture.

What I found most interesting was the gap between what the executives believe and what the people on the front line believe, as reflected in these two charts:

Figure 5: Psychological safety – Management vs Individual Contributors
Figure 8: Decision-making effectiveness – Executives vs Individual Contributors

Creating an environment in which people truly feel safe to speak up is hard and it does appear that there is a lot more work to do. The first book that comes to mind when I read about risk culture is Creativity, Inc which explores (amongst other things) how Pixar institutionalised “Candour” into the business. Michael Mauboussin’s “The Success Equation” and Greg Ip’s “Foolproof” are also useful guides for avoiding hubris and general overconfidence in your risk management process.

I have to say that the challenge of speaking up is one that resonates with me from my time in the front line. Personally I quite liked Pixar’s “brains trust” solution.

Tony – From the Outside

Predicting phase transitions

I am not sure the modelling methodology described in this article is quite as good as the title suggests…

“Chaos Researchers Can Now Predict Perilous Points of No Return”

… but it would be very interesting if it lives up to the claims made in the article. It is a quick read and the subject matter seems worth keeping an eye on.

Here are two short extracts to give you a flavour of the claims made:

A custom-built machine learning algorithm can predict when a complex system is about to switch to a wildly different mode of behavior.

In a series of recent papers, researchers have shown that machine learning algorithms can predict tipping-point transitions in archetypal examples of such “nonstationary” systems, as well as features of their behavior after they’ve tipped. The surprisingly powerful new techniques could one day find applications in climate science, ecology, epidemiology and many other fields.
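The article doesn’t go into the algorithms, but one classic early-warning signal that this line of research builds on is “critical slowing down”: as a system drifts towards a tipping point it recovers from shocks more slowly, so the lag-1 autocorrelation of its fluctuations creeps towards 1. Here is a minimal sketch of that indicator (a textbook diagnostic, not the specific machine learning method described in the article) applied to a simulated series whose persistence rises over time:

```python
import random

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

random.seed(42)
T = 4000
series = [0.0]
for t in range(1, T):
    # Mean reversion weakens as the (hypothetical) system nears its tipping point
    phi = 0.2 + 0.75 * t / T
    series.append(phi * series[-1] + random.gauss(0.0, 1.0))

early = lag1_autocorr(series[:1000])    # far from the transition
late = lag1_autocorr(series[-1000:])    # close to the transition
print(f"early lag-1 autocorrelation: {early:.2f}")
print(f"late  lag-1 autocorrelation: {late:.2f}")
```

Rising autocorrelation (and variance) in a rolling window is the kind of statistical fingerprint these machine learning approaches are, broadly speaking, trying to exploit at scale.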

Tony – From the Outside

Red flags in financial services

Nice podcast from Odd Lots discussing the Wirecard fraud. Lots of insights but my favourite is to be wary when you see a financial services company exhibit high growth while maintaining profitability.

There may be exceptions to the rule but that is not how the financial services market normally works.

podcasts.apple.com/au/podcast/odd-lots/id1056200096

Tony — From the Outside

What Michael Lewis loves about experts

This link takes you to the last episode of a seven-part podcast series Michael Lewis has done on the role of experts.

podcasts.apple.com/au/podcast/against-the-rules-with-michael-lewis/id1455379351

The whole series is highly recommended but I especially like this quote in which he attempts to sum it up:

“Life eventually humbles us all. What I love about experts, the best of them anyway, is that they get to their humility early, they have to, it’s part of who they are, it’s necessary for what they are doing. They set out to get to the bottom of something that has no bottom, and so they are reminded, constantly, of what they don’t know. They move through the world focussed not on what they know but on what they might find out.”

In that spirit, let me know what I am missing

Tony – From the Outside

Constructive dissent

I am currently reading “Thinking in Bets” by Annie Duke. It is early days but I suspect that this is a book that has some useful things to say about creating the kind of corporate culture that truly reflects the values espoused in corporate mission statements. It is a truth that actions speak louder than words and she cites a practice employed by the American Foreign Service Association, which has not one but four awards for employees who have exhibited behaviours demonstrating initiative, integrity, intellectual courage and constructive dissent.

The following quote comes from the AFSA website setting out the criteria employed for these awards:

Criteria for the Dissent Awards

The awards are for Foreign Service employees who have “exhibited extraordinary accomplishment involving initiative, integrity, intellectual courage and constructive dissent”. The awards publicly recognize individuals who have demonstrated the intellectual courage to challenge the system from within, to question the status quo and take a stand, no matter the sensitivity of the issue or the consequences of their actions. The issue does not have to be related to foreign policy. It can involve a management issue, consular policy, or, in the case of the recently established F. Allen “Tex” Harris Award, the willingness of a Foreign Service Specialist to take an unpopular stand, to go out on a limb, or to stick his/her neck out in a way that involves some risk.

https://www.afsa.org/constructive-dissent-awards

Tony – From the Outside

When safety proves dangerous …

… is the title of a post on the Farnam Street blog that provides a useful reminder of the problem of “risk compensation”; i.e. the way in which measures designed to make us safer can be a perverse prompt for us to take more risk because we feel safer. I want to explore how these ideas apply to bank capital requirements but will first outline the basic ideas covered by Farnam Street.

we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation.

The classic example is car safety measures such as improved tyres, ABS braking systems, seat belts and crumple zones designed to protect the driver and passengers. These have helped reduce car fatality rates for the people inside the car but not necessarily reduced accident rates given that drivers tend to drive faster and more aggressively because they can. Pedestrians are also at greater risk.

Farnam Street suggests the following lessons for dealing with the problem of risk compensation:

  1. Safety measures are likely to be more effective if they are less visible
  2. Measures designed to promote prudent behaviour are likely to be more effective than measures which make risky behaviour safer
  3. Recognise that sometimes it is better to do nothing if the actions we take just lead to an offsetting increase in risk behaviour somewhere else
  4. If we do make changes then recognise that we may have to put in place other rules to ensure the offsetting risk compensating behaviour is controlled
  5. Finally (and a variation on #3), recognise that making people feel less safe can actually lead to safer behaviour.

If you are interested in this topic then I can also recommend Greg Ip’s book “Foolproof” which offers a good overview of the problem of risk compensation.

Applying these principles to bank capital requirements

The one area where I would take issue with the Farnam Street post is where it argues that bailouts and other protective mechanisms contributed to the scale of the 2008 financial crisis because they led banks to take greater risks. There is no question that the scale of the crisis was amplified by the risks that banks took but it is less obvious to me that the bailouts created this problem.

The bailouts were a response to the problem that banks were too big to fail but I can’t see how they created this problem, especially given that the build-up of risk preceded the bailouts. Bailouts were a response to the fact that the conventional bankruptcy and restructuring process employed to deal with the failure of non-financial firms simply did not work for financial firms.

It is often asserted that bankers took risks because they expected that they would be bailed out; i.e. that banks deliberately and consciously took risk on the basis that they would be bailed out. I can’t speak for banks as a whole but I never witnessed that belief in the four decades that I worked in the Australian banking system. Never attribute to malice what can be equally explained by mistaken beliefs. I did see bankers placing excessive faith in the economic capital models that told them they could safely operate with reduced levels of capital. That illusion of knowledge and control is however a different problem altogether, largely to do with not properly understanding the distinction between risk and uncertainty (see here and here).

If I am right, that would suggest that making banks hold more capital might initially make them safer but might also lead to banks looking for ways to take more risk. This is a key reason why I think the answer to safer banks is not just making them hold higher and higher levels of common equity. More common equity is definitely a big part of the answer but one of the real innovations of Basel 3 was the development of new forms of loss absorbing capital that allow banks to be recapitalised by bail-in rather than bail-out.

If you want to go down the common-equity-is-the-only-solution path then it will be important to ensure that Farnam Street Rule #4 above is respected; i.e. bank supervisors will need to ensure that banks do not simply end up taking risks in places that regulation or supervision does not cover. This is not a set and forget strategy based on the idea that increased “skin in the game” will automatically lead to better risk management.

Based on my experience, the risk of common equity ownership being diluted by the conversion of this “bail-in” capital is a far more effective constraint on risk taking than simply requiring banks to hold very large amounts of common equity. I think the Australian banking system has this balance about right. The Common Equity Tier 1 requirement is calibrated to a level intended to make banks “Unquestionably Strong”. Stress testing suggests that this level of capital is likely to be more than sufficient for well managed banks operating with sensible risk appetites, but banks (the larger ones in particular) are also required to maintain a supplementary pool of capital that can be converted to common equity should it be required. The risk that this might be converted into a new pool of dilutive equity is a powerful incentive not to push the boundaries of risk appetite.

Tony – From the Outside

Navigating a radically uncertain world

The distinction between risk and uncertainty is a long running area of interest for me so I have enjoyed reading John Kay and Mervyn King’s book “Radical Uncertainty: Decision-Making for an Unknowable Future”. My initial post on the book offered an overview of the content and a subsequent post explored Kay and King’s analysis of why the world is prone to radical uncertainty.

This post looks at how Kay and King propose that we navigate a world that is prone to radical uncertainty. Kay and King start (Ch 8) with the question of what it means to make rational choices.

No surprises that the answer from their perspective is not the pursuit of maximum expected value based on a priori assumptions of what is rational in a world ruled by probability (“axiomatic reasoning”). They concede that there are some problems that can be solved this way. Games of chance where you get repeated opportunities to play the odds is one, but Kay and King are firmly in the camp that the real world is, for the most part, too complex and unknowable to rely on this approach for the big issues.

It is not just that these models do not offer any useful insight into these bigger world choices. They argue, convincingly I think, that these types of precise quantitative models can also tend to create an illusion of knowledge and control that can render the systems we are seeking to understand and manage even more fragile and more prone to uncertainty. An obvious example of this risk is the way in which the advanced, model-based approaches to measuring bank capital requirements introduced under Basel II tended to encourage banks to take (and bank supervisors to approve) more leverage.

Their argument broadly makes sense to me but there was nothing particularly new or noteworthy in this part of the book. It goes over familiar ground covered equally well by other writers – see for example these posts discussing contributions to the debate from Epsilon Theory, Bank Underground, Paul Wilmott and David Orrell, and Andrew Haldane.

However, there were two things I found especially interesting in their analysis.

  • One was the argument that the “biases” catalogued by behavioural finance were not necessarily irrational when applied to a radically uncertain world.
  • The other was the emphasis they place on the idea of employing abductive reasoning and reference narratives to help navigate this radically uncertain future.

Behavioural Finance

Kay and King argue that some of the behaviours that behavioural finance deems irrational or biased might be better interpreted as sensible rules of thumb that people have developed to deal with an uncertain world. They are particularly critical of the way behavioural finance is used to justify “nudging” people towards what behavioural finance deems to be rational.

Behavioural economics has contributed to our understanding of decision-making in business, finance and government by introducing observation of how people actually behave. But, like the proselytisers for the universal application of probabilistic reasoning, practitioners and admirers of behavioural economics have made claims far more extensive than could be justified by their findings…

…. a philosophy of nudging carries the risk that nudgers claim to know more about an uncertain world than they and their nudgees do or could know.

I struggled with this part of the book because I have generally found behavioural finance insights quite useful for understanding what is going on. The book reads at times like behavioural finance as a whole was a wrong turn but I think the quote above clarifies that they do see value in it provided the proponents don’t push the arguments too far. In particular they are arguing that rules of thumb that have been tested and developed over time deserve greater respect.

Abductive Reasoning and Reference Narratives

The part of Kay and King’s book I found most interesting was their argument that “abductive reasoning” and “reference narratives” are a useful way of mapping our understanding of what is going on and helping us make the right choices to navigate a world prone to enter the domain of radical uncertainty.

If we go back to first principles it could be argued that the test of rationality is that the decisions we make are internally consistent and based on reasonable beliefs about the world. The problem, Kay and King argue, is that this approach still does not address the fundamental question of whether we can ever really understand a radically uncertain world. The truly rational approach to decision making has to be resilient to the fact that our future is shaped by external events taking paths that we have no way of predicting.

The rational answer for Kay and King lies in an “abductive” approach to reasoning. I must confess that I had to look this up (and my spell checker still struggles with it) but it turns out that this is a style of reasoning that works with the available (not to mention often incomplete and ambiguous) information to form educated guesses that seek to explain what we are seeing.

Abduction is similar to induction in that it starts with observations. Where it differs is what the abductive process does with the evidence. Induction seeks to derive general or universal principles from the evidence. Abduction in contrast is context specific. It looks at the evidence and tries to fit “an explanation” of what is going on while being careful to avoid treating it as “the explanation” of what is going on.

Deductive, inductive and abductive reasoning each have a role to play in understanding the world, and as we move to larger worlds the role of the inductive and abductive increases relative to the deductive. And when events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable.

Reference Narratives

If I have understood their argument correctly, the explanations or hypotheses generated by this abductive style of reasoning are expressed in “reference narratives” which we use to explain to ourselves and others what we are observing. These high level reference narratives can then provide a basis for longer term planning and a framework for day-to-day choices.

Kay and King acknowledge that this approach is far from foolproof and devote a considerable part of their book to what distinguishes good narratives from bad and how to avoid the narrative being corrupted by groupthink.

Good and Bad Reference Narratives

Kay and King argue that credibility is a core feature distinguishing good and bad narratives. A good narrative offers a coherent and internally consistent explanation but it also needs to avoid over-reach. A warning sign for a bad narrative is one that seeks to explain everything. This is especially important given that our species seems to be irresistibly drawn to grand narratives – the simpler the better.

Our need for narratives is so strong that many people experience a need for an overarching narrative–some unifying explanatory theme or group of related themes with very general applicability. These grand narratives may help them believe that complexity can be managed, that there exists some story which describes ‘the world as it really is’. Every new experience or piece of information can be interpreted in the light of that overarching narrative.

Kay and King use the fox and the hedgehog analogy to illustrate their argument that we should always be sceptical of the capacity of any one narrative to explain everything:

…. The hedgehog knows one big thing, the fox many little things. The hedgehog subscribes to some overarching narrative; the fox is sceptical about the power of any overarching narrative. The hedgehog approaches most uncertainties with strong priors; the fox attempts to assemble evidence before forming a view of ‘what is going on here’.

Using Reference Narratives

Kay and King cite the use of scenario-based planning as an example of using a reference narrative to explore exposure to radical uncertainty and build resilience but they caution against trying too hard to assign probabilities to scenarios. This I think is a point well made and something that I have covered in other posts (see here and here).

Scenarios are useful ways of beginning to come to terms with an uncertain future. But to ascribe a probability to any particular scenario is misconceived…..

Scenario planning is a way of ordering thoughts about the future, not of predicting it.

The purpose is … to provide a comprehensive framework for setting out the issues with which any business must deal: identifying markets, meeting competition, hiring people, premises and equipment. Even though the business plan is mostly numbers–many people will describe the spreadsheet as a model–it is best thought of as a narrative. The exercise of preparing the plan forces the author to translate a vision into words and numbers in order to tell a coherent and credible story.

Kay and King argue that reference narratives are a way of bringing structure and conviction to the judgment, instinct and emotion that people bring to making decisions about an uncertain future:

We make decisions using judgement, instinct and emotions. And when we explain the decisions we have made, either to ourselves or to others, our explanation usually takes narrative form. As David Tuckett, a social scientist and psychoanalyst, has argued, decisions require us ‘to feel sufficiently convinced about the anticipated outcomes to act’. Narratives are the mechanism by which conviction is developed. Narratives underpin our sense of identity, and enable us to recreate decisions of the past and imagine decisions we will face in the future.

Given the importance they assign to narratives, Kay and King similarly emphasise the importance of having a good process for challenging the narrative and avoiding groupthink.

‘Gentlemen, I take it we are all in complete agreement on the decision here. Then, I propose we postpone further discussion of this matter until the next meeting to give ourselves time to develop disagreement, and perhaps gain some understanding of what the decision is all about.’

Alfred P. Sloan (long-time president, chairman and CEO of General Motors Corporation), quoted in the introduction to Ch 16: Challenging Narratives

These extracts from their book nicely capture the essence of their argument:

Knowledge does not advance through a mechanical process of revising the probabilities people attach to a known list of possible future outcomes as they watch for the twitches on the Bayesian dial. Instead, current conventional wisdom is embodied in a collective narrative which changes in response to debate and challenge. Mostly, the narrative changes incrementally, as the prevalent account of ‘what is going on here’ becomes more complete. Sometimes, the narrative changes discontinuously – the process of paradigm shift described by the American philosopher of science Thomas Kuhn.

the mark of the first-rate decision-maker confronted by radical uncertainty is to organise action around a reference narrative while still being open to both the possibility that this narrative is false and that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.

Kay and King argue that the aim in challenging the reference narrative is not simply to find the best possible explanation of what is going on. That in a sense is an almost impossible task given the premise that the world is inherently unpredictable. The objective is to find a narrative that seems to offer a useful guide to what is going on but not hold too tightly to it. The challenge process also tests the weaknesses of plans of action based on the reference narrative and, in doing so, progressively secures greater robustness and resilience.


The quote below repeats a point covered above but it does nicely capture their argument that the pursuit of quantitative precision can be a distraction from the broader objective of having a robust and resilient process. By all means be as rigorous and precise as possible but recognise the risk that the probabilities you assign to scenarios and “risks” may end up simply serving to disguise inherent uncertainties that cannot be managed by measurement.

The attempt to construct probabilities is a distraction from the more useful task of trying to produce a robust and resilient defence capability to deal with many contingencies, few of which can be described in any but the sketchiest of detail.

robustness and resilience, not the assignment of arbitrary probabilities to a more or less infinite list of possible contingencies, are the key characteristics of a considered military response to radical uncertainty. And we believe the same is true of strategy formulation in business and finance, for companies and households.

Summing Up

Overall a thought-provoking book. I am not yet sure that I am ready to embrace all of their proposed solutions. In particular, I am not entirely comfortable with the criticisms they make of risk maps, Bayesian decision models and behavioural finance. That said, I do think they are starting with the right questions and the reference narrative approach is something that I plan to explore in more depth.

I had not thought of it this way previously but the objective of being “Unquestionably Strong” that was recommended by the 2014 Australian Financial System Inquiry and subsequently fleshed out by APRA can be interpreted as an example of a reference narrative that has guided the capital management strategies of the Australian banks.

Tony – From The Outside

Why we fail to prepare for disasters

Tim Harford (The Undercover Economist) offers a short and readable account here of some of the reasons why, faced with clear risks, we still fail to act. We can see the problem, typically one of many, but don’t do enough to manage or mitigate the risk. New Orleans’ experience with severe weather events features prominently, as does (not surprisingly) COVID-19.

This, then, is why you and I did not see this coming: we couldn’t grasp the scale of the threat; we took complacent cues from each other, rather than digesting the logic of the reports from China and Italy; we retained a sunny optimism that no matter how bad things got, we personally would escape harm; we could not grasp what an exponentially growing epidemic really means; and our wishful thinking pushed us to look for reasons to ignore the danger.

Why we fail to prepare for disasters; Tim Harford (The Undercover Economist)

Another big part of the problem is that the cost of being fully prepared can be more than we are willing to pay, especially when there is continuous pressure to find cost economies in the here and now.

Serious scenarios are useful, but … no use if they are not taken seriously. That means spending money on research that may never pay off, or on emergency capacity that may never be used. It is not easy to justify such investments with the day-to-day logic of efficiency.

So the key points I took from his post:

  • Sometimes it can be something genuinely new and unexpected (i.e. Black Swan events) but risks we are well aware of can be equally damaging
  • Part of the problem is that we are social animals and take our cues from what the rest of the herd is doing (“normalcy bias” or “negative panic”)
  • Even where we understand the statistics and know that someone will be impacted, we tend to assume it will be someone else or someone else’s family (“optimism bias”)
  • We are especially bad at understanding risks that have an exponential driver (“exponential myopia”)
  • We are also quite good at finding reasons to justify ignoring risks we want to ignore or otherwise find inconvenient (“wishful thinking”)
  • Last, but far from least, efficiency is the enemy of resilience.

We need to remember that most of the factors listed above can also be useful in many other contexts (arguably most of the time). A tendency not to panic can be pretty useful and optimism has helped dreamers and ordinary people achieve many great things that have benefited the herd. Efficiency as a rule seems like a good thing to strive for.
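The “exponential myopia” point in particular is easy to demonstrate with a toy calculation, assuming (purely for illustration) an epidemic with a fixed three-day doubling time:

```python
# Ten doublings in a month: linear intuition badly underestimates the endpoint
cases = 100
for day in range(3, 31, 3):
    cases *= 2  # one doubling every three days
    print(f"day {day:2d}: {cases:,} cases")
```

One month at that rate takes 100 cases to 102,400, whereas a linear extrapolation of the first three days’ growth would have predicted about 1,100.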

Harford does not offer any easy answers but his post touches on issues that I have also been considering in Kay and King’s book titled “Radical Uncertainty: Decision-Making for an Unknowable Future”. I have done a couple of posts on that book already (here and here) and am working on a final one that focuses on Chapters 8-16 which set out their ideas for how we navigate a world prone to radical uncertainty.

Tony – From the Outside