The money multiplier is alive

Remember how, just a few years ago, a number of economic bloggers tried to assure us, based on some misunderstandings, that banks didn’t lend out reserves and that the ‘money multiplier was dead’?

I wrote several posts explaining why those views were wrong, from debunking endogenous money theory to highlighting that a low multiplier did not imply that the multiplier, and the fractional reserve theory underlying it, was ‘dead’.

Even within the economic community that still believed in the money multiplier, expectations were highly unrealistic (and pessimistic): we were told that high (if not hyper) inflation would strike within a few years of the first round of quantitative easing being announced. Of course those views were also wrong: the banking system cannot immediately adjust to a large injection of reserves; even absent interest on excess reserves, it takes decades for new reserves to expand the money supply, as lending opportunities are limited at any given point in time.

A few years later, it is time for those claims to face scrutiny. So let’s take a look at what really happened to the US banking system.

This is the M2 multiplier:

[Chart: M2 money multiplier]

As already pointed out several years ago, the multiplier is low; much lower than it was between 1980 and 2009. But this is not unusual: the same pattern occurred during the Great Depression. See below:

[Chart: money multiplier during the Great Depression]

Similarly to the post-Depression years, the multiplier is now increasing again.

Let’s zoom in:

[Chart: M2 money multiplier, 2013–2018]

Since the end of QE3, the M2 multiplier in the US has increased from 2.9 to 3.7 in barely more than three years. This actually represents a much faster expansion than the one that followed the Great Depression: between 1940 and 1950, it increased from 2.5 to 3.5, and from 1950 to 1960, it increased from 3.5 to 4.2.
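To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python (a minimal sketch using only the figures quoted above; the three-year and ten-year horizons are rounded assumptions):

```python
# The multiplier itself is simply broad money (M2) divided by the monetary base.
# Here we only compare the implied annualised growth rates of the figures above.

def annualised_growth(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two multiplier readings."""
    return (end / start) ** (1 / years) - 1

post_qe3 = annualised_growth(2.9, 3.7, 3.0)    # post-QE3 episode, ~3 years assumed
gd_1940s = annualised_growth(2.5, 3.5, 10.0)   # 1940-1950
gd_1950s = annualised_growth(3.5, 4.2, 10.0)   # 1950-1960

print(f"Post-QE3:  {post_qe3:.1%} per year")   # ~8.5% a year
print(f"1940-1950: {gd_1940s:.1%} per year")   # ~3.4% a year
print(f"1950-1960: {gd_1950s:.1%} per year")   # ~1.8% a year
```

Whichever way you cut it, the current rebound is running at more than twice the pace of the 1940s one.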

Unsurprisingly, this increase occurred as excess reserves finally started to decline sustainably:

US Excess Reserves

As we all know, following the 1950s, the multiplier eventually went on increasing for a couple more decades, reaching highs during the stagflation of the 1970s and early 1980s. Unless a new major crisis strikes, it is likely that our multiplier will follow the same trajectory, although I am a little worried about the rapid pace at which it is currently increasing. One thing is certain: the money multiplier is alive and well.


A boring story of critical Basel risk-weight differentials

After years of negotiations, international banking regulators have finally come up with an apparent finalisation of Basel 3 standards. Warning: this post is going to be quite technical, and clearly not as exciting as topics such as monetary policy. But in order to understand the fundamental weaknesses of the banking system, it is critical to understand the details of its inner mechanical structure. This is where the Devil is, as they say.

The main, and potentially worrying, evolution of the standards is the

aggregate output floor, which will ensure that banks’ risk-weighted assets (RWAs) generated by internal models are no lower than 72.5% of RWAs as calculated by the Basel III framework’s standardised approaches. Banks will also be required to disclose their RWAs based on these standardised approaches.

What does this imply in practice? A while ago, I described the various methods under which banks were allowed to calculate their ‘risk-weighted assets’, which represent the denominator in their regulatory capitalisation formula:

Banks can calculate the risk-weights they apply to their assets based on a few different methodologies since the introduction of Basel 2 in the years prior to the crisis. Under the ‘Standardised Method’ (which is similar to Basel 1), risk-weights are defined by regulation. Under the ‘Internal Rating Based’ method, banks can calculate their risk-weights based on internal model calculations. Under IRB, models estimate probability of default (PD), loss given default (LGD), and exposure at default (EAD). IRB is subdivided into Foundation IRB (banks only estimate PD while the two other parameters are provided by regulators) and Advanced IRB (banks use their own estimate of those three parameters). Typically, small banks use the Standardised Method, medium-sized banks F-IRB and large banks A-IRB. Basel 2 wasn’t implemented in the US before the crisis and was only progressively implemented in Europe in the few years preceding the crisis.
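To give a flavour of what an IRB model actually does with those three parameters, here is a simplified Python sketch of the Basel 2 risk-weight function for corporate exposures, written from memory; the exact parameters, and the adjustments that apply to SMEs, retail and mortgages, are in the official Basel text, so treat this as illustrative only:

```python
from math import exp, log, sqrt
from scipy.stats import norm

def irb_corporate_rwa(pd: float, lgd: float, ead: float, maturity: float = 2.5) -> float:
    """Approximate Basel 2 (A-IRB) risk-weighted assets for a corporate exposure.

    pd: one-year probability of default, lgd: loss given default,
    ead: exposure at default, maturity: effective maturity in years.
    """
    # Supervisory asset correlation, decreasing as PD increases
    r = (0.12 * (1 - exp(-50 * pd)) / (1 - exp(-50))
         + 0.24 * (1 - (1 - exp(-50 * pd)) / (1 - exp(-50))))
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Capital requirement K: 99.9% conditional loss minus expected loss
    k = (lgd * norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r))
         - pd * lgd)
    k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    # RWA is scaled so that an 8% charge on it equals K times the exposure
    return k * 12.5 * ead

# A BBB-ish borrower (illustrative PD of 0.25%) with a 45% senior unsecured LGD
rwa = irb_corporate_rwa(pd=0.0025, lgd=0.45, ead=1_000_000)
print(f"Risk weight: {rwa / 1_000_000:.0%}")   # roughly 50% for this example
```

Note how sensitive the output is to the PD and LGD estimates: this is precisely the flexibility regulators have been arguing about.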

While Basel 3 did not make any significant change to those methods, at least in the case of credit risk, regulators have argued for years about the possibility of tightening the flexibility given to banks under IRB.

As followers of this blog already know, I view Basel’s RWA concept as one of the most critical factors that triggered the build-up of financial imbalances which led to the financial crisis. In particular, Basel 1 – in place until the mid-2000s in the US – only provided fixed risk-weights, and therefore gave bankers an incentive to optimise their return per unit of equity by investing in asset classes benefiting from low risk-weights (e.g. real estate, securitised products, OECD sovereign exposures…). This in turn distorted the allocation of capital in the economy, out of line with the long-term plans of economic actors.

Since the introduction of IRB and internal models by Basel 2, it has been uncertain whether banks using their own models to calculate risk-weights made it more or less likely for misallocations to develop in a systemic manner. There are two opposing views.

On the one hand, more calculation flexibility can give banks the opportunity to reduce the previously artificially-large risk-weight differential between mortgage and business lending, for example, thereby reducing the potential for regulatory-induced misallocation and representing a state of affairs closer to what a free market would look like.

On the other hand, there are also instances of banks mostly ‘gaming’ the system with regulators’ support. Those banks usually understand that regulators see real estate/mortgage lending as safer than other types of activities and consequently attempt to push risk-weights on such lending as low as possible. These attempts often gain regulatory approval, and it is not a rare sight to see mortgages risk-weighted at around 10% of their balance sheet value (vs. 50% in Basel 1 and 35% in Basel 2’s standardised method). If banks do not succeed in lowering risk-weights on business lending by the same margin, this exacerbates the differential between the two types of activities and adds further incentives for banks to grow their real estate lending business at the expense of more productive lending to private corporations.
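A stylised illustration of the incentive at work, as a minimal sketch (the margins and the 10% capital ratio are purely illustrative assumptions):

```python
def return_on_equity(margin: float, risk_weight: float, capital_ratio: float = 0.10) -> float:
    """Return per unit of equity consumed by a loan: net margin divided by the
    equity the bank must hold against it (loan * risk-weight * capital ratio)."""
    return margin / (risk_weight * capital_ratio)

# Illustrative net margins (as a share of the loan amount) and risk-weights
mortgage_roe = return_on_equity(margin=0.010, risk_weight=0.10)   # IRB-optimised mortgage
sme_roe      = return_on_equity(margin=0.020, risk_weight=0.75)   # SME loan

print(f"Mortgage: {mortgage_roe:.0%} per unit of equity")   # 100%
print(f"SME loan: {sme_roe:.0%} per unit of equity")        # ~27%
```

Even with twice the margin, the SME loan earns a fraction of what the mortgage earns per unit of equity, which is exactly the distortion described above.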

Now Basel 3 is finally introducing a floor to banks’ internal models. The era of the 10% risk-weighted mortgage has come to an end. Once fully applied, asset classes’ risk weights will not be allowed to be lower than 72.5% of the standardised approach value. Unless I am mistaken, this seems to imply a minimum of about 25% for residential mortgages.
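The arithmetic behind that 25% figure, assuming the floor applies asset class by asset class as in my reading above (a minimal sketch):

```python
def floored_risk_weight(internal_rw: float, standardised_rw: float, floor: float = 0.725) -> float:
    """Risk-weight after the output floor: internal-model outputs cannot fall
    below 72.5% of the standardised-approach figure."""
    return max(internal_rw, floor * standardised_rw)

# Residential mortgage: 10% under an aggressive internal model, 35% standardised
print(floored_risk_weight(0.10, 0.35))   # 0.25375, i.e. about 25%
```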

Now whether this is good or bad depends on our starting point. If banks, on aggregate, tended to game the system, amplifying the differential vs. the free market, then this floor is likely to reduce the difference between low and high risk-weight asset classes. However, if on aggregate internal models’ flexibility represented an improvement vs. Basel’s rigid calculation methods, then this new approach will tend to degrade the capital allocation capabilities of the banking system.

Indeed, the critical concept here is the differential between Basel and the free market. A free market would also take a view as to how much capital a free banking system should maintain against certain types of exposures. While there would be no risk-weight, market actors would have an average view as to the safe level of leverage that free banks should have according to the structure of their balance sheet. In order to simplify this concept of capital requirements in a free market for this post, we can translate this view into a ‘free market risk-weight-equivalent’. Perhaps I’ll write a more elaborate demonstration another time.

So let’s assume a free market in which, on average, mortgages are risk-weighted at 40%, lending to large international firms at 50% and to SMEs at 70%. As this represents a free market ‘equilibrium’, there is no distortion in the allocation of loanable funds. Both corporate and mortgage banks are able to cover their cost of capital at the margin.

Now some newly-designed regulatory framework called, well… Bern (let’s stick to Swiss cities), comes to the conclusion that mortgages should be risk-weighted 50%, and lending to any corporation 100%. While this represents an increase for all types of exposure, the differential between mortgages and corporations is now 50%, whereas it was just 10% and 30% under the free market. As a result, bankers whose specialty was corporate lending will have to adjust the structure of their balance sheet if they wish to maintain the same level of profitability. Corporate lending volume is likely to decline as the least profitable lending opportunities at the margin are not renewed, since they no longer cover their cost of capital. The outcome of this alteration in the risk-weight hierarchy is a reduction in the supply of loanable funds towards the most capital-intensive asset classes. This reduction potentially leads to unused (or ‘excess’) bank reserves being released on the interbank market, lowering funding costs and making it possible to profitably extend credit to marginal borrowers within the ‘cheapest’ (from a capital perspective) asset classes.

So, is this latest Basel reform good or bad? Existing research is unclear. As I reported in a previous post, researchers found a set of results that seemed to confirm that German banks on aggregate tended to ‘game’ the system using regulatory-validated internal models. However, Germany is a peculiar banking market and this result may not apply everywhere; nor does this research tell us much about the differential described above.

Another research piece published by the BIS showed some confusing results. Among a portfolio of large banks surveyed by the BIS, the median risk-weight on mortgages was down to a mere 17%, whereas the median for SME corporate lending stood at 60% and for large corporate lending at 47%. Clearly mortgage lending represented the easiest way for banks on aggregate to optimise their RoE, and it does look like the banks surveyed tried to push risk-weights as low as possible in a number of cases.

We could argue that those risk-weights are too low for all those lending categories and therefore endanger the resilience of the banking system. This is indeed possible, but not my point today.

Today, I am not interested in looking at the absolute required amounts of capital that should be held against exposures, but at the relative amounts of capital across types of exposures. I am exclusively focusing on loanable fund allocation within the economic system as a whole. From a systemic point of view, risk-weight differentials matter considerably as explained above.

Basel 1’s risk weight for all corporate exposures stood at 100%, while Basel 2’s were more granular, with 50% for corporations rated between A- and A+, 100% between BBB- and BBB+ and 150% between B- and BB+. This rating spectrum covers the vast majority of small to large companies.

Basel Risk Weight Comparison

Now look at the differentials. The differentials in risk-weight between the Standardised Method and IRB for an average large industrial company rated in the BBB-range and an average SME rated in the BB-range are respectively 53% and 90%. Between mortgages and the same average corporates under IRB? 30% and 43%, whereas it used to be 65% and 115% under the rigid Basel framework. No wonder business lending growth has slowed to the benefit of real estate lending since the introduction of Basel.
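For the record, here is how those differentials fall out of the figures quoted in this post (a short sketch; the standardised weights are those listed above and the IRB figures are the BIS survey medians):

```python
# Risk-weights quoted in this post
standardised = {"mortgage": 0.35, "large_corp_bbb": 1.00, "sme_bb": 1.50}
irb_median   = {"mortgage": 0.17, "large_corp_bbb": 0.47, "sme_bb": 0.60}

for asset in ("large_corp_bbb", "sme_bb"):
    # Standardised vs. IRB for the same borrower
    print(f"{asset}: standardised - IRB = {standardised[asset] - irb_median[asset]:.0%}")
    # Mortgage vs. corporate within each framework
    print(f"{asset}: IRB gap to mortgages = {irb_median[asset] - irb_median['mortgage']:.0%}")
    print(f"{asset}: standardised gap to mortgages = {standardised[asset] - standardised['mortgage']:.0%}")

# Outputs: 53% and 90%; 30% and 43%; 65% and 115%
```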

Historical aggregate lending

How will Basel’s ‘output floors’ affect those differentials? Here again it is hard to say. If the asset class differentials seen under IRB represented those that would be prevalent in a free market, then Basel’s new output floor is a big step backwards. My intuition indeed tells me that IRB was an improvement: banks’ balance sheets mostly comprised commercial loans in the past (see chart above and the table below on US banks from C.A. Phillips’ ‘Banking and the Business Cycle’), and it looks highly unlikely that bankers would have held two, three or four times more capital against those loans than against mortgages or sovereign exposures. Seen this way, this latest Basel twist is bad news.

[Table: US banks’ balance sheets, 1928–32, from C.A. Phillips]

PS: while most economists and virtually all politicians are unaware of the potentially devastating effects of their bank capital requirements alchemy, they do understand that lowering risk-weights/capital requirements on some types of exposures has the effect of boosting lending to them.

Politicians’ latest brilliant idea? Lower requirements for ‘climate financing’ “in a bid to boost the green economy and counter climate change.” Perhaps it is time they looked at the overall picture and started wondering how on Earth real estate markets, securitised products and sovereign debt became so attractive to the financial sector?

Back to blogging, and to Basel’s latest twists

After a very long break, I finally decided to start blogging again. This break was due to a very busy lifestyle since last summer: getting back home at 11pm on most weekdays is not necessarily conducive to quality blogging. Given the combination of high workloads, plenty of other activities and the resulting exhaustion, I realised that blogging was at the bottom end of my list of priorities. On my few free weekends, the few hours of relaxation I could get became a much more appealing option.

Moreover, the post-crisis debate around banking theory and regulation had almost vanished from social and mainstream media amid the benign economic environment. The publication frequency of interesting economic research pieces and other thought-provoking economic commentaries had also declined sharply. There wasn’t much to fight for as there was no one to debate. Not even an endogenous money theorist in sight. Motivation was low.

This state of affairs couldn’t last of course. The accumulation of regulatory news and some (admittedly limited) renewed attacks against free markets and banking on social media were enough to reignite my motivation and finally force me to find enough time to put together a few posts. My non-blogging activities remain pretty time-consuming; therefore posts are likely to be few and far between.

So where to start? The (apparently) final and long-awaited completion of the new Basel 3 rulebook has been recently published. While I was originally planning to dissect some of its features in this post, it rapidly became clear that a whole separate publication was necessary. So there you go: tomorrow I’ll publish another post on my favourite topic, Basel regulations. And it will be wonkish.

What’s going on?

The post-crisis politico-regulatory consensus is breaking down. There have been tensions over the past couple of years, but overall the illusion of harmony between governments, regulators and central bankers remained. As memories from the crisis fade, politicians facing elections are doing their best to undermine the very system they so vehemently used to support.

We already knew that Germans were complaining about the impact of the post-crisis regulatory framework on the country’s small savings and cooperative banks, and that this was mirrored to an extent by the debate in the US over the slow disappearance of its small regional banks (while large banks kept growing). We also already knew that Trump could not be counted among the supporters of Dodd-Frank, the US-flavoured implementation of Basel 3. Elsewhere, dissent had been relatively muted. Until now.

Italy could be about to undermine the whole European banking union concept by attempting to put in place a large bail-out of many of its struggling banks. The whole post-crisis EU regulatory framework was set up in order to prevent such discretionary actions and preserve the single market by forcing losses onto certain types of creditors and coming up with detailed bank resolution frameworks. And of course to shield the taxpayer from paying the bill. But it was clearly naïve of EU regulators to underestimate political opportunism.

After years of experiencing growing discretionary powers – encouraged by politicians – the ECB is now complaining that the European Commission might restrict its power. It sees ‘considerable problems’ with limitations on ‘supervisory flexibility’, according to Reuters. In other words, rules are not acceptable, but discretionary power with limited accountability is. Then one wonders why governments don’t feel bound by rules either.

Meanwhile thirteen smaller EU states are rebelling against the new banking rules in the EU, which give less power and discretion to national regulators under the ‘banking union’ concept. In short, everyone wants a banking union and common rules but no one wants to follow those rules. Politics at its best.

And regulators keep making sense, by fining banks “for being late to explain why an announcement was late”, according to City AM.

It’s very unclear what effect all those political changes will have on the international banking system, but banks would probably like to avoid another decade of regulatory uncertainty.

PS: Due to a very busy schedule, I haven’t been very active recently, but do have a few posts in the pipeline, some based on recent interesting pieces of research. I’ll do my best to publish them as soon as I can.

Intragroup funding: don’t build the wall

Three years ago, I wrote about the importance of intragroup funding, liquidity and capital flows within a banking group composed of multiple entities – often cross-border, as is common nowadays. This series of posts started by outlining recent empirical evidence, which suggested that intragroup funding – or, as academics call it, ‘internal capital markets’ – benefited banking groups by allowing the efficient transfer of liquidity where and when it was needed within the multinational bank’s legal entity structure, thereby averting crises or at least dampening their effects, and solidifying the group as a whole.

Follow-up posts described historical experiences and compared the relative stability of the US and Canadian 19th century branching systems: Canadian banks demonstrated a much higher level of financial resilience thanks to their ability to open branches nationwide, compared to the great instability and recurrent crises experienced by large US state banks – whose ability to open branches in other states or districts was severely constrained by law – and later ‘unit’ banks, which were not allowed to open branches at all. The series also included the example of a modern banking model that combined the fragmented structure of 19th century US banking with great resilience, thanks to its peculiar liability-sharing, funds transfer mechanism and cross-control structure: the German Sparkassen Finanzgruppe (savings banks group; see post here).

Of course, throughout this series I kept pointing out that all empirical and historical evidence actually went against the current regulatory mindset of fragmenting and siloing banking. Since the financial crisis, regulators have fallen into a very damaging fallacy of composition: their belief that making each separate entity of larger integrated banking groups stronger, and raising barriers between them, will strengthen the system as a whole is deeply flawed.

Intragroup funding

Last month, a new paper confirming the critical importance of intragroup liquidity transfers for financial stability was published (Changing business model in international funding, by Gambacorta, van Rixtel and Schiaffi). This paper investigates whether banks altered their funding profile when the financial crisis struck and money markets froze. More specifically, the authors looked at changes in the nature (retail, wholesale, intragroup…) and origins (domestic, foreign) of liabilities at holding and parent company level, as well as at branch and subsidiary level. And unsurprisingly:

Our main conclusions are as follows. Following the first episodes of turbulence in the interbank market (after 2007:Q2), globally active banks increased their reliance on funding from branches and subsidiaries abroad, and cut back on funding obtained directly by headquarters (cross-border funding). In particular, banks reduced cross-border funding from unrelated banks – eg those that are not part of the same banking group – and from non-bank entities. At the same time, they increased intragroup cross-border liabilities in an attempt to make more efficient use of their internal capital markets.

The authors do a great job of summarising the current literature on the topic: there is overwhelming evidence that banks rely on ‘internal capital markets’ to absorb external liquidity shocks. Yet the authors also highlight that there have been a few drivers of declining intragroup flows, of which the siloing of liquidity and capital by new regulation has been the main one:

Of these drivers, regulatory reform has been the main catalyst of the profound changes observed in global banking and its funding structures in recent years. This includes most prominently Basel III and structural banking reforms, such as the “ringfencing” of domestic operations and “subsidiarisation”, which requires banks to operate as subsidiaries overseas, with their own capital and liquidity buffers, and funding dedicated to different entities. Moreover, several jurisdictions have implemented enhanced oversight and prudential measures, including local capital, liquidity and funding requirements and restrictions on intragroup financial transfers, promoting “self-sufficiency” and effectively reducing the scope of global banking groups’ internal capital markets (Goldberg and Gupta, 2013). In effect, these regulations restrict the foreign activities of domestic banks and the local activities of foreign banks (“localisation”; Morgan Stanley and Oliver Wyman, 2013).

They point out that

“ring-fencing” and “subsidiarisation” may constrain the efficient allocation of capital and liquidity within a globally active banking group and the functioning of its internal capital markets; in fact, these proposals have led to concerns that structural banking reforms may potentially trap capital and liquidity in local pools.

As I mentioned several years ago in my previous series on the topic, this is a real concern. Driven by their fallacy of composition, and with no empirical evidence to justify their reforms, regulators are weakening the system as a whole, repeating the past mistakes of the US. Moreover, disjointed discretionary regulatory actions are likely to make things worse when the next crisis strikes: domestically-focused regulators are likely to attempt to protect their own national banking system, preventing domestic subsidiaries from transferring much-needed liquidity to their parents abroad, resulting in a weakened international financial system.

The long-term consequences of trapping capital and liquidity where they are not needed are unknown. But, constrained by the new rules, banks – the profit-maximising private enterprises that they are – may well decide that putting those funds to use is better than leaving them idle, even if they would have been more profitably used elsewhere. In turn, this would distort the allocation of capital in the economy, with potentially dramatic economic outcomes.

Worryingly, it has been recently reported that regulatory agencies had in mind an even more drastic idea: the elimination through subsidiarisation of most, if not all, international branches, trapping further capital and funding within entities that never needed to hold such funds in the past. Whether this measure is implemented in the end remains to be seen but one thing is certain: political agendas lead to the total disregard of empirical and historical evidence. In banking as in politics, the new ideological fracture seems to be ‘open’ vs. ‘closed’.

Thicker capital buffers do not prevent banking crises

Bagehot Capital Quote

I know I complained about the sorry state of academic research on banking in my previous post, but not all research makes me despair. In fact, I have long admired a number of ‘mainstream’ academic researchers, such as Borio, as well as Jordà, Schularick and Taylor. The latter trio’s research is top-notch and they have built what is surely one of the best available historical databases on banking. Thanks to their data collection, they provide academics with resources that go beyond the narrow scope of US banking. Their dataset is available online.

Last September, they published a paper titled Macrofinancial History and the New Business Cycle Facts, which is quite interesting, although not as much as their ground-breaking previous papers. Nevertheless, it is based on excellent data work and I strongly encourage you to take a look. One of the interesting charts they come up with is the following real house price index aggregated from data in 14 different countries. As we can see, real house prices remained relatively stable (at least within the range highlighted by the black lines I added below) until the 1970s. However, they started booming from the 1980s, when Basel artificially lowered real estate lending capital requirements relative to those of other lending types.

Historical Real House Price

But it is their most recent paper that particularly drew my attention. Published just a couple of weeks ago and highlighting Bagehot’s quote at the top of this post, Bank Capital Redux: Solvency, Liquidity, and Crisis argues that, contra the current regulatory logic, higher capital ratios do not prevent financial crises. In their words (my emphasis):

A high capital ratio is a direct measure of a well-funded loss-absorbing buffer. However, more bank capital could reflect more risk-taking on the asset side of the balance sheet. Indeed, we find in fact that there is no statistical evidence of a relationship between higher capital ratios and lower risk of systemic financial crisis. If anything, higher capital is associated with higher risk of financial crisis. Such a finding is consistent with a reverse causality mechanism: the more risks the banking sector takes, the more markets and regulators are going to demand banks to hold higher buffers.

As usual, their data collection is remarkable. This time, they collected Tier 1 capital-equivalent* numbers, as well as other balance sheet items, across 17 countries since the 19th century. Here is the aggregate capital ratio over the period:

Aggregate Capital Ratio

Contrary to what most people – and economists – believe, they also demonstrate that capital ratios were on the rise in a number of countries in the years preceding the financial crisis:

Post WW2 Capital Ratios

Their finding is a blow to mainstream regulatory logic: capital ratios are useless at preventing crises and may well be a sign of higher risk-taking.

However, some of their findings do provide some justification for capital regulations. They find that

a more highly levered financial sector at the start of a financial-crisis recession is associated with slower subsequent output growth and a significantly weaker cyclical recovery. Depending on whether bank capital is above or below its historical average, the difference in social output costs are economically sizable.

While the fact that better capitalised banks are more able to lend during the recovery phase of a crisis sounds logical to me, I believe this result requires more in-depth analysis: it is likely that regulators in many countries forced banks to recapitalise after past crises or, as was the case in the US in the post-WW2 era, that banks were also required to comply with a certain type of leverage ratio. This would have slowed their lending growth and impacted the recovery as they rebuilt their capital base to remain in compliance.

It may also be that, as they highlight in some of their previous research, banks suffered more from real estate lending, which was initially seen as safer and requiring thinner capital buffers, but which ended up damaging their capital position further and for longer periods of time once prices collapsed (relative to financial crises triggered by stock market crashes for instance). Whatever the underlying reason, this finding requires more scrutiny and granular analysis.

They also find

some evidence that higher levels and faster growth of the loan-to-deposit ratio are associated with a higher probability of crisis. The same applies to non-core liabilities: a greater reliance on wholesale funding is also a significant predictor of financial distress. That said, the predictive power of these two alternative funding measures relative to that of credit growth is relatively small.

See below the aggregate loans-to-deposits ratio for the 17 countries:

Aggregate Loans to Deposits

This is interesting, as we see that, unlike capital ratios, loans/deposits ratios were quite stable in recent decades relative to their long-run average (in particular if we exclude the Great Depression period and its long recovery), hovering around the 100% mark.

However, I will have to disagree with their finding that more ‘wholesale funding’ is a driver behind financial crises, even if there is some truth to it; my disagreement is based strictly on the evidence they provide. They base their reasoning on the wrong assumption that all non-deposit liabilities are necessarily other funding sources (see below the breakdown of liability types). This is incorrect: modern large universal banks have very large trading and derivative portfolios, which often account for 20% to 40% of the liability side of their balance sheet (although US banks under US GAAP accounting standards are allowed to net derivatives and therefore report much smaller amounts).

Aggregate Liability Structure

The key to figuring out whether a bank is wholesale-funded is simply its loans/deposits ratio. A ratio above 100% indicates that a portion of loans has been funded using non-deposit liabilities. But as we’ve seen above, this ratio did not rise very high in the years preceding the financial crisis, and was actually even higher in the 1870s.
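In code, the check is trivial (a minimal sketch with illustrative balance sheet figures):

```python
def wholesale_funded_share_of_loans(loans: float, deposits: float) -> float:
    """Share of the loan book that must be funded with non-deposit liabilities.
    Positive whenever the loans/deposits ratio exceeds 100%."""
    return max(0.0, 1.0 - deposits / loans)

# Illustrative bank: 120 of loans funded by 100 of deposits (a ratio of 120%)
print(wholesale_funded_share_of_loans(loans=120, deposits=100))   # ~0.17
```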

Despite those minor disagreements and caveats, their research is of great quality and their dataset an invaluable tool for future analysis.

*Tier 1 capital is a regulatory capital measure introduced by the Basel rulebook

The sorry state of banking research

Academic economic research can be curious. In particular since the financial crisis, academics have focused on proving that free markets are inherently unstable and that government intervention is required to stabilise the economy.

While George Selgin incinerates a recent paper on Canadian private currency, I found three other recent papers that try too hard to convince us that markets aren’t perfect.

The first one, titled Short-termism Spillovers from the Financial Industry, attempts to demonstrate that large listed banks are subject to short-termism in order to meet quarterly earnings figures, and that this short-termism affects their behaviour towards their clients and, in turn, borrowers’ long-term investment policies. They conclude that short-termism is not optimal from an economic efficiency point of view.

In their words:

First, we find that lenders facing incentives to meet quarterly earnings benchmarks are more likely to extract material benefits from borrowers. Second, lenders with short-termism incentives push relatively high-quality borrowers into material covenant violations because these are precisely the borrowers from whom rents can be extracted. Because unhealthy borrowers are already selected for material covenant violations by lenders both with and without short-termism incentives, only relatively healthy borrowers are left to be targeted by incremental attention. Third, affected borrowers are more likely to reduce capital investment and research and development (R&D) expenditures. Given the selection of higher quality borrowers, it is particularly likely that these real investment effects on borrowers are value-destroying. Finally, we find that the market reaction to announcements of material covenant violations is 88 basis points lower among borrowers whose lenders face short-termism incentives, which suggests that the incremental attention from lenders with short-termism incentives does not improve shareholder value.

While they fall short of recommending government or regulatory intervention to maximise value-enhancing investments, the implication of their paper is clear: free markets do not optimally allocate capital. But don’t take their word for it. This paper is highly problematic for a number of reasons, outlined below.

First, they use equity analysts’ consensus earnings per share forecasts as a benchmark for short-termism, despite the highly inaccurate nature of those estimates. Indeed, those estimates are constantly revised, and banks are fundamentally very difficult to model due to the opacity of their balance sheets. As a result, analysts’ estimates are often wide of the mark and do not represent a reliable indicator. Sadly, the whole logic of this paper rests on this single benchmark.

Second, this paper makes rather strange assumptions about the utility of covenants in loan documentation. Covenants are usually agreed upon during the negotiation of the lending facility in order to protect the lenders by preventing the borrower from fundamentally altering the nature of its balance sheet or of its business model. A breach of covenant is a contractual breach that is considered a serious event by the lenders, as it implies a decline in asset quality. Yet this paper seems to argue that enforcing covenants is a bad thing, which ends up negatively impacting the borrower’s ability to grow in the long run.

They go as far as qualifying covenant enforcement as ‘extracting rents’ from borrowers. This is incredible: covenants are rules that are in place for a reason. Not to enforce them on a regular basis would undermine the effectiveness of those rules altogether and probably lead to much worse outcomes. Moreover, researchers qualify some of those covenant-breaching borrowers as ‘high-quality’ and ‘financially healthy’. I can assure you that, in the real world, covenant-breaching customers are anything but ‘high-quality’ and are usually flagged as ‘risky’ by bankers.

Third, even assuming their logic and methodology are correct, the effects they find are small: they calculate that borrowers affected by enforced covenant breaches are only 2.4% more likely to cut R&D spending and 4.9% more likely to cut capital expenditure. Borrowers are also only 1.4% more likely to switch lenders for their next loan and financial market reactions are marginal (88bp). Talk about a storm in a teacup.

But more importantly, my main concern is that the authors of this paper never benchmark their results against a realistic alternative. Or, more accurately, they only benchmark the results they obtain against a hypothetical ‘social optimum’. As such, they fall into the Nirvana fallacy trap that Selgin also refers to in his post: free markets are not perfect, but no amount of government intervention could fix those admittedly minor shortcomings.

The second one is titled Macroeconomics of bank capital and liquidity regulations and studies the welfare effects of banking regulations. Or rather, it ‘models’ this welfare under very specific assumptions. So specific actually, that I dismissed the paper straight away.

In my view this paper exemplifies a lot that is wrong with today’s economic research: it is based on a highly theoretical mathematical model with embedded assumptions and limitations that result in outcomes that do not remotely reflect the real world. Yet those economists still managed to conclude that “capital and liquidity regulations generally mutually reinforce each other”, and that “the optimal regulatory mix consists of relatively high capital and liquidity requirements” (which they define as a very high 17.3% leverage ratio, more than ten percentage points above that of most banks today). They conclude that their analysis provides broad support for Basel III’s regulatory framework, consequently seen as welfare-enhancing even though it doesn’t go as far as those economists would like.

Well… the one huge issue with this paper rests on this particular assumption underlying the trade-off faced by regulators in their mathematical model:

On the one hand, banking regulation may reduce the supply of credit to the economy. On the other hand, it improves credit quality and allocative efficiency. Accordingly, regulation tends to result in less, but more productive lending.

This is my reaction to this sort of nonsense:

[Reaction image]

There is no basis in reality for believing that regulation is more effective than free markets at allocating capital in the economy. If anything, as I have highlighted many times before, regulation has diverted the allocation of credit from productive uses (i.e. commercial and industrial loans) towards unproductive ones (i.e. real estate), which has been economically damaging and one of the reasons behind the financial crisis.

As a result, this paper includes some of its own conclusions in its assumptions, leading to circular reasoning: banking regulation improves allocative efficiency, therefore we need banking regulation.

Finally, Bank Capital and Dividend Externalities highlights that banks fail to ‘internalise’ the effects of dividend payments and capital policy on the stability of the wider financial system. The researchers theorise that a bank increasing its dividend harms the claims that its bank creditors have on its balance sheet in a bankruptcy scenario, thereby weakening the financial strength of the whole network. In such a system, they state, bank capital becomes a ‘public good’. The logical conclusion to this lack of systemic coordination is obviously government intervention: regulators should put dividend restriction measures in place when necessary.

But, here again, this paper suffers from major design flaws:

– Bank creditors are the only ones considered, despite the fact that banks have a multitude of creditors, including depositors. If dividend payments harm bank and other money market creditors, they also surely impact depositors and bondholders.

– They assume that dividend payments decrease the value of the bank by lowering its probability of survival. They reject the signalling theory of dividend payments despite admitting it has some backing in the literature: reducing the dividend is a negative signal about the financial health of the institution.

– Their empirical evidence is limited to a couple of data points taken during the latest financial crisis: a couple of banks increased dividends before collapsing. They do not take into account the fact that those increases followed Bear Stearns’ bail-out by the Fed, which sent a specific TBTF signal to both the market and bankers themselves.

– As often in the banking literature, they make a big deal of interconnectedness, yet seem to forget that all industries have interconnected members. The decisions made by a large automaker also affect its whole supply chain and their employees. I have yet to see tens, if not hundreds, of research papers arguing that automakers fail to ‘internalise’ the impacts of their decisions, which are not always ‘socially optimal’.

– More damningly, they prove their theory by modelling a financial system that includes just two banks, consequently suppressing any opportunity for exposure diversification. This is completely unrealistic. Banks have tens of other banking counterparties and already factor into their counterparty assessments the possibility that capital policy might change. But thanks to the diversification effect, those changes usually only affect them at the margin. A two-bank model does not capture this critical point. To be fair, those economists do admit that their model is a simplified one. Yet this admittedly weak theoretical basis does not seem to make them think twice before making policy recommendations.

– Finally, this paper falls into the same Nirvana fallacy trap as the first one reviewed above: they do not make a convincing case that an effective alternative exists, and assume away Public Choice issues by relying on the actions of omniscient regulators.

This is the sorry state of affairs in current economic research. By focusing on highly theoretical mathematical models based on very limiting – if not totally unrealistic – assumptions, damaging policy recommendations are outlined and subsequently serve as justifications for regulators’ actions. Clearly, nothing much has changed since the days of Diamond and Dybvig’s flawed model (see White, The Theory of Monetary Institutions, 1999). As I recently pointed out in the case of macroprudential research, this is a reminder that it is critical to read research papers’ bodies (and not only their abstracts and conclusions). Sadly, few people seem bothered to do so.

Are we all macroprudentialists now?

Update: a revised and extended version of this post was published on Alt-M here

The title of this post (a reference to Milton Friedman’s famous quote) is also the headline of a recent speech by Klaas Knot, President of the Netherlands Bank, to which he answers:

In the spirit of Pentti’s thinking my answer is:  Yes – as long as we stay eclectic, pragmatic and flexible. And we take the interactions of monetary and macroprudential policies into account, and coordinate the two policies.

While there is some truth to the second half of his answer – that monetary and macropru policies do interact – I find myself very uncomfortable with its first half: if we are all macroprudentialists now, we are heading for disaster.

As highlighted on this blog a number of times:

  1. There is barely any evidence that macropru has any effect (see also here and here) but on the other hand it ‘leaks’ and does have distortive impacts on the allocation of credit in the economy.
  2. It cannot counteract the effect of monetary policy.
  3. It opens the door to ‘bureaucratic tyranny’ (as John Cochrane said) and assumes away all Public Choice issues.
  4. It assumes omniscient regulators and rejects the conclusions of the socialist calculation debate or the insights of Hayek’s concept of knowledge dispersion.

But as research pieces presented last September at the BIS/Central Bank of Turkey seminar on macroprudential regulation demonstrate, groupthink is widespread: economic researchers include many, many important and explicit caveats and limitations within the core text of their papers, yet seem to suddenly ‘forget’ them once it is time to write the abstract and the conclusion of those same papers.

Out of 19 papers, only one refers to some of the issues listed above and questions some of the fundamentals behind macropru reasoning (Bálint Horváth and Wolf Wagner’s Macroprudential policies and the Lucas Critique, an interesting read). Many others, on the other hand, question the very fundamentals of a market economy: each agent fails to ‘internalise’ the damage that his or her actions inflict on the market and the economy as a whole. Therefore, an external regulator needs to intervene in order to control the agent’s actions and stabilise the overall economy.

This is absurd. The same reasoning could be applied to any good: an agent overproducing a certain good fails to ‘internalise’ the damage he causes to the market and his industry. As a result, this industry needs a central planner to organise it in the most efficient way. We know the fallacy of this logic: government failures are worse and more systematic than market failures. Yet this view prevails in today’s macropru theoretical foundations.

This makes me think that Knot’s speech is an example of moderation in today’s central bank school of thought. Take this recent speech by Alex Brazier, Executive Director for Financial Stability Strategy at the BoE, during a financial regulation seminar at the London School of Economics. It is quite remarkable in the way that it manages to avoid referring to any of the issues listed above (and even contradicts point 2 despite the evidence), depicts macropru as an almost ideal framework, and exemplifies central bankers’ fundamental distrust in free markets.

See this statement for instance:

It’s well known, for example, that banks would choose to have too little capacity to absorb losses – too little equity capital – because their current shareholders don’t bear the full economic costs of their failure or distress.  The economy needs better capitalised banks than the free market would deliver.

“Well known”? This makes little sense. This statement is a negation of all historical experiences of stable financial systems during which there was no regulator in place dictating capital requirements.

More perplexing are his later statements about capital buffers:

The results have been transformative.  A system that could absorb losses of only 4% of (risk weighted) assets before the crisis now has equity of 13.5% and is on track to have overall loss absorbing capacity of around 28%.

This is wrong. As we recently saw, banks had regulatory Tier 1 ratios of 8 to 9% before the crisis. Not 4%, which was the regulatory minimum. Mr Brazier is therefore comparing the pre-crisis regulatory minimum to banks’ current average capital buffers.

More surprisingly, Brazier’s speech includes a whole part attempting to convince us that “clairvoyance is not a reasonable standard to be held to”, and that “[regulators’] mandate is to break free of the shackles of forecasting, to free us from trying to predict if the economy will turn down, and to apply economic analysis to the question of how bad it could be if it did.”

Macroprudential tools are discretionary policies that are supposed to be applied in a countercyclical manner: they are set in such a way as to reflect regulators’ forecasts (or beliefs) about where the economy – or certain segments of the economy – is going. Therefore, how on Earth is macropru supposed to work if regulators do not even attempt to understand how the economy is evolving or where the imbalances are building up in the first place? This is all very confusing.

And this is the issue. It is bewildering to see some economists, and virtually the whole central banking and bank regulatory profession, tell us that, despite all its shortcomings, unknowns, internal contradictions and inherent risks and distortions, macroprudential regulation is the way forward. (And to make things worse, macropru tools are not new: they have been used for a while now, especially by emerging markets, with limited effectiveness – see the chart below, taken from one of the seminar papers.)

Macropru policy index

I know we apparently now live in an ‘alternative facts’ world, but academia is supposed to focus on evidence. The amount of research attempting to find a way to control the economy and financial markets through discretionary fluctuations of macroprudential tools is reminiscent of the post-WW2 Keynesian push which, as we know, didn’t end well.

Milton Friedman would have certainly not taught his students that “we are all macroprudentialists now”.

PS: thankfully Kevin Dowd is also supposed to speak at this seminar later this year, which should dispel some of the myths that are being spread

AI: regulatory arbitrage on steroids?

Pretty much everyone has heard of Fintech by now, but a nerdier, more focused application of new IT technologies to banking is now emerging: Regtech. Regtech aims at applying Fintech to regulatory and compliance purposes, simplifying a process that has caused bankers headaches due to the exponential growth of the rulebook they have to follow, and which has also been a pain on the cost side given the number of extra compliance officers they have had to hire in an era of lower revenues.

Indeed, the FT reports that:

Citigroup estimates that the biggest banks, including JPMorgan and HSBC, have doubled the number of people they employ to handle compliance and regulation. This now costs the banking industry $270bn a year and accounts for 10 per cent of operating costs. […]

Spanish bank BBVA recently estimated that, on average, financial institutions have 10 to 15 per cent of their staff dedicated to this area. This heavy investment has been necessary in response to the crackdown by regulators that followed the financial crisis. European and US banks have paid more than $150bn in litigation and conduct charges since 2011, Citi estimated.

What’s the solution? ‘Regulatory technology’:

New technologies mean that banks could make vast savings in compliance, according to Richard Lumb, head of financial services at Accenture, who estimated that “thousands of roles” in the banks’ internal policing could be replaced by automated systems.

Many recent Regtech developments involve the use of artificial intelligence to simplify compliance issues that are very burdensome from a staff (and cost) perspective. As Deloitte outlines here (and see an interview on the Financial Revolutionist about applying machine and deep learning to investment strategies here):

The Institute of International Finance (IIF) highlights AI, among others, as it has a range of applications in regulatory compliance and reporting. It can be used in analysing complex trading relationships, trading schemes, patterns and communications between banks, exchanges and other market participants. AI can also be employed to monitor internal conduct and communication to clients, comparing it to quantitative metrics such as supervisory input. As AI relies on computer-based modelling, scenario analysis and forecasting, it can also help banks in stress testing and risk management.

But what I find particularly interesting is this bit:

Another field for AI in financial regulations is to simplify the regulations themselves: there are a multitude of different jurisdictions, products, institutional differences and enforcement mechanisms and it is hoped that AI systems are better in collecting and categorizing them according to rules.

Similar points appear in an Economist article published a few months ago about Watson, IBM’s AI product:

The next area is to provide clarity about rules. They are sorted by jurisdictions, institutional divisions, products and so forth, and then further broken down between rules and guidance. Watson is getting better at categorising the various regulations and matching them with the appropriate enforcement mechanisms. Its conclusions are vetted, giving it an education that should improve its effectiveness in the future. Promontory’s experts are expected to help Watson learn. A dozen rules are now being assimilated weekly. Thousands are still to go but it is hoped the process will speed up as the system evolves. Ultimately, IBM hopes speeches by influential figures, court verdicts and other such sources will be automatically uploaded into Watson’s cloud-based brain. They can play a role in determining what regulations matter, and how they will be enforced.

Below is a useful chart showing all current Regtech areas and start-ups (you can also find it here):

[Chart: current Regtech areas and start-ups]

While the industry has not explicitly said it this way (and probably never will), it seems to me that we’re on our way to AI-driven regulatory arbitrage. Once those systems are ready, AI will be able to navigate through the thousands of regulatory pages and extract the most effective ‘regulatory optimisation strategy’ within and across borders.

If all AI systems used by financial institutions reach the same conclusion, this could lead to a build-up of imbalances and systemic risks that could eventually trigger a crisis, following a process similar to that which contributed to the latest financial crisis: Basel rules facilitated the accumulation of imbalances in the credit market towards real estate lending.

It of course remains to be determined whether AI systems will reach the same conclusions in the end. But this is likely to happen, for the following reasons: 1. banks whose systems are less effective will progressively attempt to catch up with the competition, leading to harmonisation in the design of those systems; and 2. if AI solutions are provided by third-party firms, harmonisation will occur from the start.

A glimpse of hope remains in that the optimal regulatory arbitrage strategy may be different for financial institutions with different business models (mortgage banks vs. universal banks for instance). But let’s not hold our breath: even in this case, imbalances would still occur and universal banks still account for most of the world’s banking assets by far.

For now, explicit regulatory ‘optimisation’ does not seem to be included in the chart above (although the ‘Government/Legislation’ category could well evolve into a more arbitrage-oriented segment). But how long before it does?

Clarifying confusions on capital requirements

As the Trump administration is considering scrapping parts of the enormous Dodd-Frank act, a number of media outlets and economists look alarmed: Dodd-Frank made the American banking system safer, the argument goes, and getting rid of it would lead to another financial crisis.

While long-time readers of this blog know that Dodd-Frank, and the Basel 3 international accords it is based on, merely continue the mistakes of three decades of regulatory overreach that have brought about the largest financial crisis in decades, I thought it was necessary to clarify a couple of points regarding capital requirements.

In this week’s Economist, two articles seem to admit that, while the act indeed represented an unclear regulatory monster of thousands of pages that mostly penalised smaller financial institutions, it also made the system safer by reinforcing banks’ capitalisation.

In an editorial, the newspaper asserts that:

Onerous though it is, however, the act also achieved a lot. Measures to beef up banks’ equity funding have made America’s financial system more secure. The six largest bank-holding companies in America had equity funding of less than 8% in 2007; since 2010 that figure has stood at 12-14%.

In another article, it adds:

Thanks in part to Dodd-Frank, America’s banks are far safer than they were: the ratio of the six largest banks’ tier-1 capital (chiefly equity) to risk-weighted assets, the main gauge of their strength, was a threadbare 8-9% before the crisis; since 2010 it has been 12-14%.

But it is far from clear that Basel requirements are behind banks’ post-crisis thicker capital buffers. See Basel 3 minimum capital requirements below:

Basel 3 Timeline

Minimum Tier 1 capital requirements are 7.875% (Tier 1 minimum plus capital conservation buffer). This is around the 2008 level for large US banks. Hardly an improvement at first glance, then.

However, let’s also add the recent SIFI capital surcharge, published by the FSB last November: only two institutions qualified for a 2.5% surcharge (only one of them US-based), but let’s add this figure to our minimum above. We get to a SIFI minimum Tier 1 requirement of 10.375%. This still leaves a gap of almost 2 to 4 percentage points with the 12% to 14% average referred to by The Economist above.
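The arithmetic, as a quick sketch (all figures in percentage points, as quoted above):

```python
tier1_minimum  = 7.875                       # Tier 1 minimum plus capital conservation buffer
sifi_surcharge = 2.5                         # top FSB surcharge bucket
sifi_floor     = tier1_minimum + sifi_surcharge
print(f"SIFI minimum Tier 1 requirement: {sifi_floor}%")            # 10.375%

# Gap between that floor and the 12-14% ratios reported by The Economist
for actual in (12.0, 14.0):
    print(f"Buffer above the minimum at {actual}%: {actual - sifi_floor:.3f} points")   # 1.625 and 3.625
```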

Therefore the only conclusion is that there are other parameters and considerations pushing capital ratios upward. One of those parameters is indeed regulatory-related, but is discretionary at bank level: bankers’ own view of the capital buffer they believe they need above the regulatory minimum in order to avoid breaching it in case of sudden large losses. This shows some of the perverse side-effects of strict minima: as I described some time ago, the ‘effective’ capital ratio is actually the differential between the level maintained by the bank and the regulatory minimum. And this ‘effective’ buffer tends to narrow rather than thicken as minima are raised.

The second is exogenous to banks’ decision-making process: the financial crisis has taught a number of investors not to get fooled by headline regulatory capital ratios. Consequently, investors now ask for higher levels of capital in order to compensate for the lack of clarity regarding the quality of capital*. Given that risk-weights (another regulatory construct) have a considerable influence on the level of capital ratios, investors also ask for extra capital buffers to compensate for the distortions they inevitably introduce into the headline figures.

Consequently, had minimum requirements stayed the same, investors would have been highly likely to demand extra protection against the uncertainty introduced by… those same regulatory requirements.

In the end, the assumption that banks are much better capitalised and that regulation/Dodd-Frank is responsible for this is questionable.

 

*While Basel 3 and Dodd-Frank have indeed also touched upon the issue of capital quality, it remains unclear how a number of so-called hybrid, or ‘complementary Tier 1’, instruments will perform under stress and legal challenges.

PS: this blog post could have gone into a lot more detail about the parameters driving the thickness of capital buffers, but it would then have had to be split into 3 or 4 different posts. At least. So please read some of my other posts on the topic to get the bigger picture, as this is a complex issue.
