Flawed models + ignorance of history = disastrous policy-making
The author designs a model that leads him to conclude that
contrary to conventional wisdom, competition can make banks more reluctant to take excessive risks: As competition intensifies and margins decline, banks face more-binding threats of failure, to which they may respond by reducing their risk-taking. Yet, at the same time, banks become riskier. This is because the direct, destabilizing effect of lower margins outweighs the disciplining effect of competition; moreover, a substantial rise in competition reduces banks’ incentive to build precautionary capital buffers. A key implication is that the effects of competition on risk-taking and on failure risk can move in opposite directions.
The paper declares that “a decline in margins caused by heightened competition [would lead to] a more conservative stance and less aggressive risk-taking” and that “high profits that can be reaped in less-competitive environment [would allow] for more risk-taking”. The author describes the literature as generally believing that… the opposite is true.
My first reaction is: it is so much more complex than that. Banks have different cultures and degrees of risk aversion within the same context, which is why some fail while others remain strong throughout a given period. There is no pre-programmed behaviour that pushes banks to act in a certain way. Within a given banking system and at a given point in time, banks offer investors a range of RoE and volatility of returns: investors can then diversify their banking investment portfolio according to their needs and risk appetite. Some will prefer a high-return/high-volatility/high-risk bank and require a higher cost of capital; others will go for stability and lower returns*. Banks are not trying to maximise RoE. Banks are trying to maximise RoE on a risk-adjusted basis**.
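To make the risk-adjusted distinction concrete, here is a minimal sketch in Python. The figures and bank profiles are entirely invented for illustration — nothing here comes from the paper — and the ratio used (mean RoE over its volatility) is just one simple, Sharpe-like way of risk-adjusting returns:

```python
# Hypothetical illustration: two banks whose ranking reverses once the
# volatility of returns is taken into account. All figures are invented.

def risk_adjusted_roe(annual_roe):
    """Mean RoE divided by its standard deviation (a Sharpe-like ratio)."""
    n = len(annual_roe)
    mean = sum(annual_roe) / n
    variance = sum((r - mean) ** 2 for r in annual_roe) / n
    return mean / variance ** 0.5

aggressive_bank = [0.25, -0.05, 0.30, -0.10, 0.35]  # high but volatile RoE
prudent_bank = [0.10, 0.09, 0.11, 0.10, 0.10]       # lower but stable RoE

# The aggressive bank wins on raw average RoE (15% vs 10%)...
assert sum(aggressive_bank) > sum(prudent_bank)
# ...but the prudent bank wins once returns are risk-adjusted.
assert risk_adjusted_roe(prudent_bank) > risk_adjusted_roe(aggressive_bank)
```

Both banks can rationally coexist in the same system: the first suits investors demanding a higher cost of capital, the second those seeking stability.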
According to the author, historical experience demonstrates that competition makes banks riskier, and this prediction is consistent with empirical evidence. Wait… Really? What kind of ‘competition’? In what context? Under what sort of banking, political and economic framework? Historical experiences of systems as close as possible to pure competition show the exact opposite of this claim. Indeed, the empirical evidence the author refers to is this 2009 paper, which statistically analysed current data from the Bankscope database. There is no reference to historical events, political circumstances, banking design or restrictive regulations. All types of banks, from granular US unit banks to giant government-controlled Chinese oligopoly banks, are aggregated and a ‘conclusion’ on competition is reached. This is bad research.
The paper is full of dubious claims, such as this one:
In our model, highly profitable banks optimally build equity capital buffers to guard against failure, whereas banks operating in more-competitive environments seek to minimize their capital.
Once again, this conclusion is so far from historical reality as to be meaningless. There are examples of oligopoly-type banks operating on thin capital buffers, while the free banking experiences in Scotland showed that competition led banks to accumulate large capital buffers. We also find examples of banks with local monopolies operating on a small equity base due to a particular banking structure. Banking system design and risk culture are key. Research pieces such as this one are over-simplistic and provide policy-makers with the wrong diagnoses.
But looking at the model’s assumptions, it becomes clear that the analysis was made in a vacuum. The banking system consists of a single bank (so no lending competition…), owned and run by infinitely lived shareholders, whose only competitors are money market funds. Economics and finance are a matter of managing time-constrained resources, and infinitely lived owners/bankers have different incentives and priorities from those whose life is finite and uncertain… Other unrealistic assumptions skew the model: money market funds that effectively compete with the (only) bank’s deposit rate for depositors’ funds, and very granular time periods during which the bank has no flexibility to raise capital or deal with its balance sheet issues, resulting in very binary outcomes at the end of each period. And, of course, this ‘competitive’ system includes deposit insurance and capital requirements.
In the end, we can reasonably question what this model proves, if anything. This exemplifies the problems I have with financial and economic models in general: they are not reliable. And this is an understatement.
For instance, the conclusions reached by this model are precisely the opposite of those reached by other models built for previous similar studies. What does this mean? Which ones are right (if any)? Which ones are wrong (if any)?
I used to be a scientist and have a Master’s degree in engineering. I also used to believe that everything could be modelled. I was wrong. Economic systems do not benefit from universal physical constants.
When I started my career in financial services, I tended to use mathematical and statistical tools (trends, correlations) to forecast financial information. A senior analyst warned me: “it never works.” He was right, but I didn’t believe him at first and kept experimenting nonetheless, until I noticed that ‘intuition’, ‘knowledge’ and ‘wisdom’ served me better than mathematics. And what was I trying to forecast? The revenues of a listed firm. Nothing really complex.
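The kind of naive tool the warning was about can be sketched in a few lines of Python. This is a deliberately simple illustration with invented revenue figures, not a reconstruction of anything I actually built: fit a straight line through past quarterly revenues and extrapolate it forward.

```python
# Toy trend extrapolation: ordinary least-squares line through past
# quarterly revenues, pushed one step into the future. Figures invented.

def linear_trend_forecast(values, steps_ahead=1):
    """Fit y = a + b*x over x = 0..n-1, then extrapolate steps_ahead."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

past_revenues = [100, 104, 109, 113, 118]  # steady growth, in millions
print(linear_trend_forecast(past_revenues))  # extrapolates the trend upwards
```

The line fits the sample beautifully; the problem is that next quarter’s revenue is under no obligation to follow it — which is exactly what “it never works” meant.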
Unfortunately, most economists and central bankers still believe they can model the whole economy – or, as seen above, the whole banking system – using maths. Given how inaccurate quarterly revenue and earnings-per-share estimates already are, modelling something infinitely more complex (the whole economy), which involves an infinite number of ever-changing, erratic and imperfectly rational variables (i.e. humans), sounds like an enormous, if not impossible, project. Still, this is what academic DSGE models have been trying to do for a few decades. With great success, as we have been witnessing since 2007.
I agree (for once) with this post by Noah Smith, who rightly asks:
So why doesn’t anyone in the finance industry use them? Maybe industry is just slow to catch on. But with so many billions upon billions of dollars on the line, and so many DSGE models to choose from, you would think someone at some big bank or macro hedge fund somewhere would be running a DSGE model. And yet after asking around pretty extensively, I can’t find anybody who is.
One unsettling possibility is that the academic macroeconomists of the ’70s and ’80s simply bit off more than they could chew. Modeling a big thing (like the economy) as the outcome of a bunch of little things (like the decisions of consumers and companies) is a difficult task. Maybe no DSGE is going to do the job. And maybe finance industry people simply realize this.
Of course, many people working in finance still try to forecast the future. But their forecasts compete with each other, and none of these forecasters has the power to take the decisions that central banks or policy-makers can take for the whole economy, with potentially disastrous consequences. The Fed’s, ECB’s and BoE’s forecasts have consistently been way off the mark throughout the crisis. I also know from certain sources that some central banks’ models are less elaborate than the rating agencies’. I’ll let the reader conclude.
The example above clearly demonstrates why economics should avoid relying on mathematics and simplified assumptions. Time to stop ignoring history.
* In a banking system with no deposit insurance, the same applies: depositors have the choice between higher deposit rates/higher risk or lower and safer ones.
** This is where moral hazard and bad regulatory incentives come into play: by distorting risk aversion and risk assessment.