A few links for the weekend

Here is a selection of interesting articles to read over the weekend.

Free banking: the limits of mathematical models

Theoretical economic worlds are so nice. Only equations and equilibria, and no need to bother with empirical evidence or even simple historical facts: you design your nice imaginary world and you reach conclusions from it. Conclusions that have the potential to influence policymaking or economic teaching.

Equation Fed Paper

A new paper produced by a Philly Fed economist illustrates exactly that (see one of its nice systems of equations above). The paper is titled On the inherent instability of private money. Here is the abstract (my emphasis):

A primary concern in monetary economics is whether a purely private monetary regime is consistent with macroeconomic stability. I show that a competitive regime is inherently unstable due to the properties of endogenously determined limits on private money creation. Precisely, there is a continuum of equilibria characterized by a self-fulfilling collapse of the value of private money and a persistent decline in the demand for money. I associate these equilibrium allocations with self-fulfilling banking crises. It is possible to formulate a fiscal intervention that results in the global determinacy of equilibrium, with the property that the value of private money remains stable. Thus, the goal of monetary stability necessarily requires some form of government intervention.

That’s it. He just validated the existence of central banking. No need to go any further, the mathematics just demonstrated it: private currencies are unstable and we need government intervention for the greater good.

What’s interesting though is that this paper does not contain a single reference to the now relatively large free banking literature of the likes of White, Selgin, Horwitz, Dowd, Salter, Sechrest, Cachanosky… Which, you’d admit, is curious for a paper discussing precisely that topic. Perhaps this would have helped him avoid the embarrassment of discovering that historical reality was, well, the exact opposite of the conclusions his equations reached. That in fact, private currency-based systems had been more stable than monopoly issuance-based ones (see here for the track record, and elsewhere on this blog for other evidence, as well as numerous papers and books such as Selgin’s The Theory of Free Banking: Money Supply under Competitive Note Issue).

Coincidentally, George Selgin published a new post a couple of days ago criticising the current state of monetary economics, which, in his opinion, relies too much on abstract maths and not enough on historical evidence. Ben Southwood also mentioned this paper, along with the fact that even ‘far from perfect’ free banking systems (i.e. the 19th century US experience) outperformed central banking ones. He also asks a very good question:

My real issue is why this evidence isn’t breaking through? Why are so many smart, knowledgeable people opposed to free banking? Why is the ruling tendency now towards practically outlawing bank/debt finance altogether in favour of steps toward equity financing everything? I don’t have a good answer.

This is also something that worries me. Why does a paper on free banking not reference (let alone discuss) a single free banking paper or book? Why is this literature avoided? Is it inconvenient? Unless ignorance is the culprit, though quite a few articles show up after a quick Google search for the terms ‘free banking’ or ‘competitive private note issuance’. What’s wrong with the mainstream academic world?

Banking regulation gives P2P lending a major boost

A few weeks ago, I mentioned a new KPMG report describing the evolution of the current bank regulatory framework. The consultancy published its ‘Part 2’ a couple of weeks ago and it makes for interesting reading.

KPMG effectively reaches conclusions similar to those of this blog: the current regulatory framework makes it uneconomic for banks to extend credit to corporates (small to large), and the structural separation of investment and retail banking activities is nonsense.

In the case of corporate lending, KPMG points out that “many SMEs are disillusioned with banks, leading them to seek alternative channels of borrowing, including peer to peer lending.” This sounds spot on: regulation has always been self-defeating by driving financial activities into the shadows. And, coincidentally, Morgan Stanley just published a large report on P2P lending (which they call ‘marketplace lending’ as it’s not really P2P anymore…) forecasting that it could reach 10% of total unsecured consumer and SME lending in the US by 2020 (with other countries, in particular the UK or China, to follow).

MS P2P Lending Growth Forecast

Perhaps this is the key to unlocking corporate/SME lending growth and getting rid of this secular stagnation theory.

Partly mirroring the arguments I developed in a series of posts starting here, KPMG’s arguments against the structural separation of the various activities of banking are worth reproducing here in full:

KPMG Structural Separation

KPMG Structural Separation 2

Further evidence of regulatory distortion in Standardised and IRB frameworks

On his new blog Alt-M, George Selgin points to a piece of academic research published last year, ‘The Limits of Model-Based Regulation’, by Behn, Haselmann and Vig. This paper is very interesting and illustrates quite well how regulatory capital ratios are distorted by the use of mathematical models encouraged by Basel 2 and 3 regulations. It nevertheless suffers from a few questionable conclusions, although those remain minor and do not affect the quality of the rest of the research.

As I described a long time ago (and also summarised in this paper), banks have been able to calculate the risk-weights they apply to their assets using a few different methodologies since the introduction of Basel 2 in the years prior to the crisis. Under the ‘Standardised Method’ (which is similar to Basel 1), risk-weights are defined by regulation. Under the ‘Internal Rating Based’ (IRB) method, banks can calculate their risk-weights using internal models. Under IRB, models estimate the probability of default (PD), loss given default (LGD) and exposure at default (EAD). IRB is subdivided into Foundation IRB (banks only estimate PD while the two other parameters are provided by regulators) and Advanced IRB (banks use their own estimates of all three parameters). Typically, small banks use the Standardised Method, medium-sized banks F-IRB and large banks A-IRB. Basel 2 wasn’t implemented in the US before the crisis and was only progressively implemented in Europe in the few years preceding the crisis.
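For readers unfamiliar with how those IRB parameters turn into capital charges, here is a minimal sketch of the Basel II corporate IRB risk-weight function, written from memory; the PD, LGD and maturity inputs are purely illustrative and not taken from the paper.

```python
# A minimal sketch, from memory, of the Basel II corporate IRB risk-weight
# function; PD, LGD and maturity inputs below are illustrative, not the paper's.
from math import exp, log, sqrt
from scipy.stats import norm

def irb_corporate_risk_weight(pd, lgd, maturity=2.5):
    """Return the risk weight (as a multiple of exposure) for a corporate loan."""
    pd = max(pd, 0.0003)                      # regulatory PD floor of 3bp
    # Supervisory asset correlation, decreasing in PD
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Capital requirement K at the 99.9% confidence level (unexpected loss only)
    k = lgd * norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)) - pd * lgd
    k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return 12.5 * k                           # RWA = 12.5 * K * EAD

print(irb_corporate_risk_weight(pd=0.01, lgd=0.45))  # ~0.92x exposure
print(irb_corporate_risk_weight(pd=0.02, lgd=0.45))  # ~1.15x exposure
```

The point is simply that the reported PD and LGD flow straight into the risk-weight, which is what makes under-reporting them attractive.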

So what are the effects of those different regulatory capital frameworks? First, they found that

At the aggregate level, we find that reported probabilities of default (PDs) and risk-weights are significantly lower for portfolios that were already shifted to the IRB approach compared with SA portfolios still waiting for approval. In stark contrast, however, ex-post default and loss rates go in the opposite direction—actual default rates and loan losses are significantly higher in the IRB pool compared with the SA pool. […]

The loan-level analysis yields very similar insights. Even for the same firm in the same year, we find that both the reported PDs and the risk-weights are systematically lower, while the estimation errors (i.e., the difference between a dummy for actual default and the PD) are significantly higher for loans that are subject to the IRB approach vis-a-vis the SA approach. […]

Interestingly, we find that the breakdown in the relationship between risk-weights and actual loan losses is more severe the more discretion is given to the bank: while the same patterns are present for both F-IRB and A-IRB portfolios, the results are much more pronounced for loans under the A-IRB approach, which is clearly more complex and accords more autonomy to the bank.

This is pretty interesting: it demonstrates that Basel 2 (and 3) rules provide incentives to game regulatory reporting in order to maximise RoE (more on this below; a simple illustration of the incentive follows).
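To make the incentive concrete, here is a minimal sketch with made-up numbers (mine, not the paper’s): the same loan, earning the same margin, produces a higher RoE simply because a lower reported risk-weight ties up less regulatory equity.

```python
# Illustrative numbers only (mine, not the paper's): the same loan, earning the
# same margin, produces a higher RoE when the reported risk-weight is lower,
# because it ties up less regulatory equity.
def return_on_equity(loan=100.0, net_margin=0.02, risk_weight=1.0, capital_ratio=0.10):
    equity_required = loan * risk_weight * capital_ratio
    return loan * net_margin / equity_required

print(return_on_equity(risk_weight=1.0))   # Standardised-style weight -> 20% RoE
print(return_on_equity(risk_weight=0.6))   # lower modelled IRB weight -> ~33% RoE
```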

They also noticed that:

[On aggregate] to dig deeper into the mechanism, we examine the interest rate that banks charge on these loans, as interest rates give us an opportunity to assess the perceived riskiness of these loans. Interest rates in the IRB pool are significantly higher than in the SA pool, suggesting that banks were aware of the inherent riskiness of these loan portfolios, even though reported PDs and risk-weights did not reflect this. Putting it differently, while the PDs/risk-weights do a poor job of predicting defaults and losses, the interest rates seem to do a better job of measuring risk. Moreover, the results are present in every year until the end of the sample period in 2012 and are quite stable across the business cycle. […]

[Moreover, at granular loan-level] the interest rates charged on IRB loans are higher despite the reported PDs and risk-weights being lower.

This is also interesting, although I suspect it is partially wrong (and to be fair, they do point that out). Indeed, the German banking system is very peculiar: small public-sector and mutual banks (likely on Standardised) have a very large market share and are usually able to underprice much larger commercial banks (likely on IRB) thanks to the lack of pressure on them for profitability. Interest rates are thus likely to be understated for Standardised banks. The only way to confirm the researchers’ intuition that IRB banks charged more than Standardised ones because they knew their portfolios to be riskier would be to re-run the analysis on a country with a more ‘standard’ banking system.

SA IRB 1

What is their conclusion?

All in all, our results suggest that complex, model-based regulation has failed to meet its objective of tying capital charges to actual asset risk. Counter to the stated objective of the reform, aggregate credit risk of financial institutions has increased. […]

Our results suggest that simpler rules may have their benefits, and encourage caution against the current trend towards higher complexity of financial regulation.

I can only agree with this statement, despite having some doubts about the validity of their interest rate argument, as explained above.

However, I have to differ with the researchers on one particular point: that the differences we see between Standardised and IRB banks are mostly due to banks trying to game the system. Their reasoning is as follows: Basel 1 had excessively strict risk-weights, leading to ‘distortion in lending’ (absolutely agree), but the flexibility provided to banks by Basel 2’s model-based framework gets rid of this distortion, so the issues described above are pretty much down to the banks alone (this is where I disagree).

Why do I disagree? For reasons I have explained before, and that are also explained within this paper: regulators validate models. Consequently, models are biased to match regulators’ expectations and biases in the first place. As it is very unlikely that regulators will consider corporate lending less risky than real estate lending, they are also unlikely to validate a model that does exactly this (or that even narrows the risk parameter differential). As a result, for capital optimisation purposes, banks tend to exacerbate the risk differential between those two lending types (or at least maintain the original Basel 1 risk-weight differential) in order to get regulatory approval*.

So what we’re left with is an increasingly opaque regulatory system that incentivises banks to optimise capital usage with the actual support of regulators (often against shareholders), distorting credit allocation in the meantime. Sounds effective.

*And this is exactly what the authors of this piece say!

Risk models were certified by the supervisor on a portfolio basis, and supervisors delayed the approval of each model until they felt comfortable about the reliability of the model. […]

Banks have to validate their models on an annual basis and adjust them if their estimates are inconsistent with realized default rates (see also Bundesbank 2003). Further, risk models have to be certified by the supervisor and banks have to prove that a specific model has been used for internal risk management and credit decisions for at least three years before it can be used for regulatory purposes.

PS: They also provide the following interesting chart. In the same way that the introduction of RWAs triggered a real estate lending boom at the expense of corporate lending, the introduction of Basel 2 led to a lending differential between IRB banks, which could optimise capital usage, and Standardised banks, potentially exacerbating the original real estate/corporate lending dichotomy introduced by Basel 1.

SA IRB 2

PPS: Sorry for the lack of updates recently despite having quite a lot to say… I just can’t seem to find the time to write those posts.

Modeling a Free Banking economy and NGDP: a Wicksellian portfolio approach (guest post by Justin Merrill)

My friend Alex Salter and his coauthor, Andrew Young, have an interesting new paper called “Would a Free Banking System Target NGDP Growth?” that I believe was presented at a symposium on monetary policy and NGDP targeting.

I have wondered about the same question. I believe there are real reasons why a dynamic economy might not have stable NGDP. One reason is demographic change (maybe target NGDP per capita?). Another is problems with GDP accounting in general, such as the underground economy, changes in the workforce participation of women and the vertical integration of firms. Another micro-founded effect might be the income elasticity of demand and substitution effects. But even abstracting from these problems, it is still a worthy question to ask whether monetary equilibrium is synonymous with stable NGDP, and what its relationship to free banking is. If they are synonymous, we might expect stable NGDP from free banking. In my paper on a theoretical digital currency called “Wixle” I outline a currency that automatically adjusts its supply to respond to demand by arbitraging away the liquidity premium over a specified set of securities. This is a way to ensure monetary equilibrium without regard for aggregate spending, which is particularly useful if the currency is used internationally.

A small criticism I have of my free banking and Market Monetarist friends is that they often assert that monetary equilibrium and stable NGDP are the same thing, usually by applying the equation of exchange. As useful as the equation of exchange is, it is tautologically true as an accounting identity. But just as we know from C+I+G=Y, accounting identities have limited predictive power when thinking about component variables. I have argued for the conceptual disaggregation of money supply and money demand, because the motives for holding currency and deposits are different and the classification of money is more of a spectrum. So I was pleased to see that Salter and Young did this in their paper and added the transaction demand for money to their model. This leads them to conclude that a free banking system will respond to a positive supply shock, which results in an increased transaction demand for money, by stabilizing the price level rather than NGDP. This might be true, and whether this is good or bad is another question. Would this increase in currency lead to a credit-fueled boom, or is this a feature and not a bug?
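For reference, here is the identity being invoked, sketched with made-up numbers: under M·V = P·Y, holding NGDP stable means the money supply must offset changes in velocity (i.e. changes in money demand) one for one.

```python
# The accounting identity in question, with made-up numbers: M * V = P * Y = NGDP.
# Keeping NGDP stable requires the money supply to offset any change in velocity
# (i.e. any change in the demand to hold money) one for one.
ngdp_target = 100.0
for velocity in (10.0, 9.0, 8.0):            # velocity falls as money demand rises
    money_supply = ngdp_target / velocity    # M = NGDP / V
    print(f"V = {velocity:4.1f}  ->  M must be {money_supply:5.2f} to hold M*V at {ngdp_target}")
```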

I have long been upset with the way that economists focus too heavily on reserve ratios and net clearings from a quantity perspective. This abstracts away from the micro-foundations of the banking system and ignores the mechanics of banking. This is the point I made at the Mises Institute when I rebutted Bagus and Howden. My moment of clarity for the theory of free banking actually came from reading the works of James Tobin and Gurley & Shaw, as well as Knut Wicksell. The money supply is determined by the public’s willingness to hold inside money, and this willingness creates the profit opportunity for the financial sector to intermediate by borrowing short and lending long. I believe the case for free banking can be made more robust by adding the portfolio approach to the transactions approach. I will outline here what that would look like without sketching a formal model.

The model is a three-sector economy: households, corporations and banks. Households hold savings in the form of corporate and bank liabilities and have bank loans as liabilities. Corporations hold real capital, bank notes and deposits as assets, and bank loans, stocks and bonds as liabilities. Banks hold reserves, securities and loans as assets, and net borrowed reserves, notes, deposits and equity as liabilities.

JM 1

Households can hold their wealth in risky securities, in safe but lower-yielding interest-paying deposits that pay the economy’s risk-free rate, or in non-interest-paying notes used for transactions. The model could include interest-free checking accounts, but these are economically the same as notes in my model.

Banks can then choose to invest in loans, securities or lending out reserves. They fund investments largely by borrowing at the risk-free rate, and by borrowing reserves at the margin. Logically, then, the cost of borrowed reserves will be higher than that of deposits but lower than the return on loans and securities, and arbitrage ensures this. If the cost of reserves goes above the return on securities, banks will sell bonds to households and lend reserves to each other. If the cost of reserves goes below the rate on deposits, banks will borrow reserves and deposit with each other. The returns on loans and securities (adjusted for risk) will tend towards uniformity because they are close substitutes. Also, as Wicksell pointed out, if loan rates are below the return on securities or the return on real capital, households and firms will borrow from banks and invest.
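A minimal sketch of those arbitrage bounds, with purely illustrative rates (none of the numbers come from the post):

```python
# A minimal sketch with purely illustrative rates (none taken from the post):
# the cost of borrowed reserves should settle between the deposit (risk-free)
# rate and the risk-adjusted return on loans and securities.
def reserve_rate_arbitrage(deposit_rate, reserve_rate, securities_yield, loan_yield):
    asset_return = min(securities_yield, loan_yield)   # returns tend towards uniformity
    if reserve_rate > asset_return:
        return "banks sell securities to households and lend reserves to each other"
    if reserve_rate < deposit_rate:
        return "banks borrow reserves and deposit them with each other"
    return "no arbitrage: deposit rate < reserve rate < asset returns"

print(reserve_rate_arbitrage(0.010, 0.015, 0.030, 0.032))
```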

Empirical evidence for the interest rate channels is provided here. Interestingly, the rules set out above were only violated in times of monetary disequilibrium, such as the Volcker contraction:

http://research.stlouisfed.org/fred2/graph/?g=1aRY

JM 2

The natural rate of interest is equal to the return on assets for corporations. Most economists who try to model the natural rate mistakenly model it as the risk-free rate or the policy rate. This is a misreading of Wicksell, since he identified the “market rate” as the rate which banks charge for loans, and the important thing was the difference between the market rate and the natural rate. If the market rate is too low, people will borrow from banks and invest, increasing the money supply.

We can now apply the framework to the CAPM model and conceptualize the returns on various assets:

JM 3

The slope of the securities market line (SML) is determined by the risk aversion/liquidity preference of the public. Should the public become more risk averse and demand that a larger share of their wealth be in the form of money, they will sell securities in favor of deposits. If, in aggregate, the household sector is a net seller, the only buyers are banks (ignoring corporate buybacks, which don’t change the results since corporations would end up financing the repurchases with bank loans). So the banking sector would purchase the securities (at a bargain price) from households, crediting their accounts and simultaneously increasing the inside money supply. This becomes more lucrative as the yield curve steepens or other kinds of risk premia widen, increasing net interest margins (NIMs). As the banking sector responds to changes in demand, it equilibrates asset prices.
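Here is a hedged bookkeeping illustration of that mechanism (my numbers, not Merrill’s model): households sell securities, the banking sector buys them by crediting deposits, and the inside money supply expands by exactly that amount.

```python
# A hedged bookkeeping illustration (my numbers, not Merrill's model): households
# sell 10 of securities, the banking sector buys them by crediting deposits, and
# the inside money supply expands by exactly that amount.
bank = {"securities": 50, "loans": 100,              # assets
        "deposits": 120, "notes": 20, "equity": 10}  # liabilities
households = {"securities": 80, "deposits": 120}

sale = 10
households["securities"] -= sale
households["deposits"] += sale     # households now hold more money...
bank["securities"] += sale
bank["deposits"] += sale           # ...because the bank created it when it bought

print(bank)        # both sides of the bank's balance sheet are 10 larger
print(households)  # household wealth unchanged, but more of it is inside money
```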

JM 4

This is another way of coming to the same conclusion: that a free banking system would tend to stabilize NGDP in response to endogenous demand shocks. But what about supply shocks? We know that when the spread between banks’ return on assets and cost of funding widens, the balance sheet will grow. An increase in productivity will raise both the return on new investments and the rate the banks have to pay on deposits. We can assume for now that these cancel out. But the public will have a higher demand for notes, and since notes pay no interest, they are a very cheap source of funding. This lowers the average cost of funding overall. However, more gross clearings will increase the demand for reserves and their cost of borrowing relative to the yield on other assets. This would put a check on overexpansion and excess maturity transformation. The net effect on the total inside money supply is uncertain, but probably positive, assuming the amount of currency held by the public is larger than the reserves borrowed by banks.

Another thing to consider about supply shocks: despite the lower funding costs of increased note issuance, an increase in the natural rate of interest will decrease banks’ net interest margins because their loan book will be locked in at the old, lower rate, but the rate on deposits will have to go up. This is a counter-cyclical effect (in both directions) that may outweigh the transaction demand effect. Another possible counter-cyclical effect is the psychological liquidity preference effect that accompanies optimism associated with supply shocks. So in a strong economy individuals will be more willing to hold the market portfolio directly, which flattens the SML. Depending on the strength of these effects, it may lead to different results than Salter and Young.

The Economist gets it wrong, again

About 10 days ago, The Economist published three articles on General Electric and mixing industrials with banking. Those articles follow GE’s decision to divest its banking business, GE Capital.

In its editorial (the follow-up article is available here), The Economist declares that:

GE shows why industrial firms should avoid owning big finance operations. Occasional successes such as Warren Buffett’s Berkshire Hathaway can combine insurance with hot dogs. But most manufacturers are even worse at managing financial risk than banks are—and they are harder to supervise. A blow-up at the finance arm can sink the entire company.

In another short article, the newspaper attempts to warn industrial CEOs not to “turn your firm into Goldman Sachs”:

The case for a split is clear. Managers are even worse at dealing with financial risk than bankers are. A blow-up in a firm’s financial arm can hurt its main business. And giving tycoons access to savers’ cash can lead them into all sorts of temptation.

Well, that’s not really true. What The Economist is describing is a situation in which only a few large companies have been allowed to set up banking arms. This is indeed a situation to avoid: it limits entrants to the market and as a result grants an artificially high market share to the few firms that could set up banking activities, which often grow into TBTF entities and put their parent company at risk if they fail. In the end, you end up with a few huge banks and a few very large banking arms owned by non-financial corporations. But this is not a free market outcome. The Economist is suggesting that we restrict banking to banks. That will not make the TBTF problem disappear; it will reinforce it.

We want the exact opposite of what The Economist is describing. We want every single company, every Google, Amazon, Apple or Walmart, to be able to set up a banking or finance arm if it wishes to, as long as it believes it can convince customers that it is able to provide a cheaper and more efficient alternative/product*. The more banks in the market, the more granular market shares are likely to be and the more limited each entity’s balance sheet. In turn, this competitive landscape would make the TBTF issue disappear and benefit customers. Finally, as each banking entity remains relatively small under competitive pressure, it is also less likely to endanger the financial health of its parent company (or of the whole banking system) if it ever collapses.

*And many of those companies are already entering the financial system, in particular in the payment space, with some success.

ETFs and market efficiency – some evidence

A few weeks ago, I speculated that the rise of ETFs negatively impacted market efficiency. At that time I had not heard of any research that provided evidence to confirm or dismiss my fears. Not anymore.

A few articles (see here, here, here and here) have reported that a recent Goldman Sachs equity research piece (titled ETFs: The Rise of the Machines) found that ETFs had more influence than previously believed on share prices. I haven’t had access to this GS report, but here’s an extract from one of the articles:

Are exchange-traded funds an unseen force, like gravity, that help determine stock-price moves? New research suggests that the rise of ETFs may be complicating stock pickers’ chances of selecting winners or losers. That could make it even harder for stock-fund managers to outperform their benchmarks as assets in ETFs grow.

The $1.2 trillion in U.S. stock ETFs is having a much larger impact on the market than the fund industry claims, according to a recent report from Goldman Sachs. At issue: Heavy trading of index-tracking ETFs appears to be herding individual stocks up or down together, particularly in niche industries such as real estate and mining.

Goldman’s equity research team contends that increases in ETF trading appear to be tightening correlations, or the tendency for individual stocks and sectors to move up or down in lock step, regardless of a company’s fundamentals.

This was precisely my point in my previous post. But the extent of this ‘ETF distortion’ is hard to measure:

Comprehensive data aren’t available, but a study last year by the Investment Company Institute estimated that only 9% of ETF trades trigger buying or selling in individual stocks. Goldman, however, assumes the number is much higher, closer to 50% in some sectors.

As ETFs keep growing as an asset class, it is likely that those effects are going to be exacerbated. Perhaps we need even more activist investors to bring some balance back to the Force.

Research review (2/2): effectiveness of macro-prudential policies

A couple of days ago I said I had a second paper to review (yes I originally said ‘tomorrow’, but circumstances changed, sorry about that). This paper by McDonald was published last month by the BIS under the title When is macroprudential policy effective? (also available on SSRN here.) I didn’t find it very convincing.

The author runs correlations between the implementation of macro-prudential policy measures and when in the housing cycle those measures occur:

One of the aims of this paper is to determine if loosening measures are ineffective because they are often implemented during downturns. In particular, I examine whether tightening and loosening measures have the same effect once you control for where in the cycle changes are made.

This is a laudable goal but the study doesn’t seem to actually do this. Indeed, the author concludes that (my emphasis):

The results suggest that tightening LTV and DTI limits tend to have bigger effects during booms. Several measures of the housing cycle correlate with the effects of changing LTV and DTI limits; annual housing credit growth and house-price-to-income ratios are some examples. Loosening LTV and DTI limits seems to stimulate lending by less than tightening constrains it. The difference between the effects of tightening and loosening is small in downturns though. This is consistent with loosening being found to have small effects because of where it occurs in the cycle.

This is not what I see in his dataset. It seems to me that, if house-price-to-income ratios start falling after a macro-prudential tool is put in place, it is simply because the housing market is reaching its peak. Moreover, this effect only occurs from time to time. See the charts below. Red dots represent tightening macro-prudential measures. In the four housing markets considered, only a few red dots were followed by declining house-price-to-income ratios. Many others were actually followed by… a housing market boom. And the cyclical nature of the housing market does not seem to rely on an external regulator fixing macro-prudential thresholds. Prices fall by themselves… when they start getting too expensive.

Macropru LTV DTI 1

Same thing for other markets (fewer data points though):

Macropru LTV DTI 2

From those datasets it is clear that the correlation between tightened macropru ratios and constraining effects on housing markets is weak.

Even on aggregate, the author’s own chart doesn’t seem to match his claims:

Macropru LTV DTI 3

A second study (The Use and Effectiveness of Macroprudential Policies: New Evidence) by Cerutti, Claessens and Laeven, published by the IMF last month as well, partly reflects what Aiyar, Calomiris and Wieladek had said in a past paper: macro-prudential policies leak. While both papers found some reduction in credit growth following the introduction of a macropru tool, they also noted a tendency of market actors to take avoidance measures: the earlier paper found a parallel growth in shadow banking, and the new IMF paper found increased cross-border lending growth.

Their conclusion clearly does not support regulators’ and central bankers’ hopes that macro-prudential policies could help offset the negative effects of low interest rates on some asset markets*:

We find that policies are generally associated with reductions in the growth rate in credit, with a weaker association in more developed and more financially open economies, and can have some impact on growth in house prices. We also show that using policies can be associated with relatively greater cross-border borrowing, suggesting countries face issues of avoidance. We do find evidence of some asymmetric impacts in that policies work better in the boom than in the bust phase of a financial cycle.

It seems to me that many researchers and regulators are currently trying to convince themselves that macro-prudential measures work. If only their own datasets could back up their (albeit moderate) conclusions.

* However, just following this quote, they do add that “taken together, the results suggest that macroprudential policies can have a significant effect on credit developments.” Which is pretty much the opposite of what they conclude from the data they process…

Research review (1/2): lending rates and monetary policy

I’ve been busy and away recently so not many updates. But I’ve also read quite a few recently published research papers on banking and thought I should mention two in particular, both by BIS researchers.

The first, by Illes, Lombardi and Mizen, titled ‘Why did bank lending rates diverge from policy rates after the financial crisis?’, is directly reminiscent of my posts on banks’ margin compression due to the low interest rate environment. This paper is quite interesting, in particular for the data it gathers. But it misses the main point.

Here are some of the charts they provide, which are very similar to my own and clearly highlight how lending rates did not (indeed could not) follow central banks’ base rates:

Euro lending rates

According to them:

There are three reasons why bank lending rates do not reflect the behaviour of policy rates in the post crisis period. First, the policy rate is a very short-term rate, while the lending rates to business and households normally reflect longer-term loans. The spread between the lending and policy rates therefore reflects the maturity risk premium alongside other factors that determine the transmission of policy to lending rates. Second, even if we correct for the maturity risk premium using an appropriately adjusted swap rate, the adjusted policy rate is not the marginal cost of funds for banks. Third, banks obtain funds from a variety of sources including retail deposits, senior unsecured or covered bond markets and the interbank market, and these differ in nature from policy rates since they comprise a range of liabilities of differing maturities and risk characteristics

This is very true but misses the fact that margin compression remains the main factor in the breakdown of the monetary policy transmission mechanism. However, they did come close to acknowledging this fact. They built a weighted average cost of liabilities (WACL), representing the funding cost of the banking system across all funding sources (deposits, secured/unsecured wholesale funding, central bank funding). This provides some interesting breakdowns of European banks’ funding structures, as you can see below (notice how central bank funding represents only a small share of liabilities):

Euro funding structure 1

Euro funding structure 2
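To make the WACL concept concrete, here is a rough sketch; the funding shares and rates below are invented for illustration, not taken from the paper.

```python
# A rough illustration of a weighted average cost of liabilities (WACL); the
# shares and rates below are invented, not the paper's data.
funding_mix = {
    # source: (share of liabilities, cost)
    "retail deposits":         (0.55, 0.010),
    "covered/unsecured bonds": (0.25, 0.025),
    "interbank funding":       (0.15, 0.012),
    "central bank funding":    (0.05, 0.005),   # note the small share
}
wacl = sum(share * cost for share, cost in funding_mix.values())
print(f"WACL = {wacl:.2%}")   # ~1.38% with these assumed weights
```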

They conclude that:

there is stronger evidence for a stable relationship between lending rate and the WACL measures we use to reflect funding costs of banks. We conclude that banks do not appear to have fundamentally changed their pricing behaviour in the post-crisis period even though bank lending surveys indicate that their credit standards have tightened since the financial crisis.

Banks’ demand deposits indeed reach the zero lower bound first, strongly reducing banks’ ability to further reduce their average cost of funds*. If lending rates had a strict relationship with funding costs, they would then stop falling at this point. Problem: a number of legacy variable rate loans (originated before rates started to fall) are priced at central bank base rate + spread or LIBOR + spread, and those reference rates continue to fall. This compresses banks’ margins, in turn endangering their profitability (you need to make revenues to pay for your non-interest operating costs…). Banks have no choice but to progressively increase the spread on variable rate loans (on new lending, and, if possible, on legacy lending). Risk premia (to cover the cost of risk) and the other factors described by this paper come on top of this compression phenomenon**.
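A minimal numerical sketch of this compression mechanism, using assumed spreads and markdowns rather than real data: once deposit rates hit their floor, every further cut in the base rate eats directly into the margin earned on legacy base-rate-plus-spread loans.

```python
# A minimal numerical sketch of the compression mechanism, with assumed spreads
# and markdowns rather than real data: once deposit rates hit their floor, each
# further cut in the base rate eats directly into the margin on legacy
# base-rate-plus-spread loans.
def net_interest_margin(base_rate, legacy_loan_spread=0.02, deposit_markdown=0.01):
    loan_rate = base_rate + legacy_loan_spread             # keeps falling with the base rate
    deposit_rate = max(base_rate - deposit_markdown, 0.0)  # floored at (roughly) zero
    return loan_rate - deposit_rate

for base in (0.04, 0.02, 0.01, 0.005, 0.0):
    print(f"base rate {base:.2%}: margin = {net_interest_margin(base):.2%}")
```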

They end their paper on a very appropriate question:

Further issues for research remain, including the question whether the effectiveness of the monetary policy transmission mechanism has been compromised by the breakdown in the relationship between policy rates and lending rates. Changes to policy rates may fulfil the Taylor principle, but retail rates may not adjust by a corresponding degree (see Kwapil and Sharler, 2010). This issue involves analysis of the relationships between policy rates, weighted average cost of liabilities and lending rates, as well as lending volumes, which we leave for further analysis

I’d add that lowering rates below a certain threshold actually harms banks and lending rather than helping them…

Tomorrow I’ll review the second piece of research, on macro-prudential policy effectiveness.

*A fact acknowledged by this piece of research, even though they did not dig a little deeper into the accounting ramifications:

“In addition, deposit rates, which would normally be marked down along with the policy rates, have been constrained by the zero lower bound, which forced banks to reduce the mark-downs”

**To be clear, when funding costs do not fall as much as the central bank’s base rate, it is often because compression has already taken place (i.e. liabilities have reached, or are close to, the zero lower bound). When funding costs do fall (almost) as much as base rates, but the spread between lending rates and funding costs is widening, it often means that the risk premium is increasing (as in Spain, Ireland or Italy).

Dimon on the next crisis (and fintech)

In a letter to shareholders last week, Jamie Dimon, JPMorgan’s CEO, makes a few interesting points and shows that he is aware of at least some of the tensions arising from regulatory and fintech challenges.

jamie-dimon

He goes through a thought exercise to imagine what the next crisis might look like.

In my opinion, banks and their board of directors will be very reluctant to allow a liquidity coverage ratio below 100% – even if the regulators say it is okay. And, in particular, no bank will want to be the first institution to report a liquidity coverage ratio below 100% for fear of looking weak.

This is an excellent point, reminiscent of Bagehot’s teaching that artificial ratios and thresholds are bound to trigger crises once they are breached.

In a crisis, weak banks lose deposits, while strong banks usually gain them. In 2008, JPMorgan Chase’s deposits went up more than $100 billion. It is unlikely that we would want to accept new deposits the next time around because they would be considered non-operating deposits (short term in nature) and would require valuable capital under both the supplementary leverage ratio and G-SIB.

In the 19th century free banking systems of Scotland and Canada, healthy banks actively stepped in to protect the integrity of the whole banking system (which could at times be threatened when a single bank went under). Regulation is now making this much more difficult.

In a crisis, everyone rushes into Treasuries to protect themselves. In the last crisis, many investors sold risky assets and added more than $2 trillion to their ownership of Treasuries (by buying Treasuries or government money market funds) […] But it seems to us that there is a greatly reduced supply of Treasuries to go around – in effect, there may be a shortage of all forms of good collateral […] banks hold $0.5 trillion, which, for the most part, they are required to hold due to liquidity requirements. Many people point out that the banks now hold $2.7 trillion in “excess” reserves at the Federal Reserve (JPMorgan Chase alone has more than $450 billion at the Fed). But in the new world, these reserves are not “excess” sources of liquidity at all, as they are required to maintain a bank’s liquidity coverage ratio.

This point reflects my argument that regulation has not made institutions more liquid; it has merely siloed what effectively becomes unusable liquidity. His point that excess reserves are required for the LCR is debatable though, as they could be replaced with other high-quality liquid assets (although bankers have reduced incentives to do so in an IOR and zero/negative interest rate world).
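For reference, a minimal sketch of the liquidity coverage ratio Dimon refers to, with illustrative figures rather than JPMorgan’s actual numbers:

```python
# A minimal sketch of the Basel III liquidity coverage ratio the letter refers
# to, with illustrative figures rather than JPMorgan's actual numbers.
def liquidity_coverage_ratio(hqla, net_cash_outflows_30d):
    """LCR = high-quality liquid assets / net cash outflows over a 30-day stress."""
    return hqla / net_cash_outflows_30d

print(liquidity_coverage_ratio(hqla=120.0, net_cash_outflows_30d=100.0))  # 1.2, comfortably above 100%
print(liquidity_coverage_ratio(hqla=95.0, net_cash_outflows_30d=100.0))   # 0.95, the 'below 100%' stigma case
```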

Changes in RWA and liquidity rules also particularly affect the ability of banks to extend credit and hence could increase pro-cyclicality according to Dimon:

In a crisis, clients also draw down revolvers […] – sometimes because they want to be conservative and have cash on hand and sometimes because they need the money. As clients draw down revolvers, risk-weighted assets go up, as will the capital needed to support the revolver. In addition, under the advanced Basel rules, we calculate that capital requirements can go up more than 15% because, in a crisis, assets are calculated to be even riskier. This certainly is very procyclical and would force banks to hoard capital. […]

In the last crisis, banks underwrote (for other banks) $110 billion of stock issuance through rights offerings. Banks might be reluctant to do this again because it utilizes precious capital and requires more liquidity.

Of course banks don’t literally ‘hoard’ capital (I can already hear Anat Admati from here). But what Dimon is saying is that banks would economise on scarce capital by preventing that sort of facility from being used in the first place.

However, given what he described above, his claim that the banking system is “stronger than ever” feels quite odd.

Dimon also warns shareholders that further disruption is to be expected from Silicon Valley and fintech. While he says that “some payments systems, particularly the ACH system controlled by NACHA, cannot function in real time”, he points out that competitors such as Bitcoin and Paypal are moving into the payments area and that banks have to adapt to the real-time challenge. On top of that, quicker, more effective alternative lenders (read P2P and similar) are entering the market.

PS: I will however have to strongly disagree with Dimon’s claim that “America’s financial system still is the best the world has ever seen”…

Photo: Reuters/Keith Bedford
