Dilution, Index Evolution, and the Shiller CAPE: Anatomy of a Post-Crisis Value Trap

In the first century, the historian Plutarch recounted a famous philosophical paradox.  The paradox goes like this.  A ship–”The Ship of Theseus”–was returning home to Athens from Crete.  As it sailed, the wooden planks that made up its structure gradually decayed.  The sailors kept the ship afloat by replacing the decaying planks, one by one, using fresh wood that they were carrying onboard.  Eventually, the sailors replaced all of the wooden planks that made up the ship’s original structure, so that the new form of the ship had no material in common with the old form.  The question followed: was the ship the same ship through the change?  If so, what made it the same ship, rather than a new ship, a different ship?

“For they took away the old planks as they decayed, putting in new and stronger timber in their places.  The ship became a standing example among the philosophers of the logical question of things that grow: with one side holding that the ship remained the same, and the other contending that it was not the same.” – Plutarch, Theseus, c. 75 C.E.

Approximately 1500 years later, the philosopher Thomas Hobbes took the paradox further.  He asked us to imagine the following.  All of the old, decayed wood of the original Ship of Theseus is gathered up from scrap and used to build a new ship.  There are then two ships: one ship that is spatially continuous with the original Ship of Theseus, whose material has been fully changed out, piece by piece, and another ship made from the scrap material of the original Ship of Theseus.  Which of these ships is the true Ship of Theseus? 

The “Ship of Theseus” problem frequently arises in the world of music fandom.  Consider, for example, the 1970s soft rock group, the Little River Band, which produced famous hits such as “Reminiscing” and “Lonesome Loser”.  To this day, the Little River Band remains together.  But none of the band’s current members were part of the original lineup.  All of the founding members, those who sang the hits as we are used to hearing them, have been swapped out.  A “Ship of Theseus” question thus arises: is the band that currently goes on tour as “The Little River Band” the true Little River Band, or is it the equivalent of a cover band, singing the same songs, while only pretending to be the original?  To add the Hobbesian twist, what if the original members of the Little River Band were to come together to form a new band, a cover band of the Little River Band?  Would this new cover band be the true Little River Band, since it contains the founding members?  Or would it be a mere replica, since it is not continuous with the original?

You’re probably asking yourself what relevance this paradox has to finance, or to anything. But now here’s a question for you. Suppose that we have an index of stocks that represents the equity market of a given country, an index that we use, without further questioning, to draw conclusions about important topics such as the country’s valuation and expected future performance. What would happen if, like planks on the Ship of Theseus, or members of the Little River Band, most or all of the individual companies in the index were to be removed, replaced with new companies?  Would the index remain the same index? Or would it become a different index?

The question of “sameness” and “difference” is inherently metaphysical, and therefore has no answer.  But there is a more practical question that we as investors have to be concerned with.  That question is this.  Given radical changes in the constituents of an index, is it appropriate to use the index’s historical metrics–its historical earnings, growth rates, valuations, profit margins, returns on equity, and so on–to draw conclusions about what the index’s future performance is likely to be?

Ireland: The Perfect International Value Play?

Looking out over the long-term, it’s going to be very difficult for US investors to receive the “normal” 10% nominal annual equity returns that they have received historically.  Literally everything will have to go right.  Profit margins and returns on equity will have to stay elevated, contrary to the tendency of mean-reversion.  Multiples will also have to stay elevated, which means that interest rates will have to stay low.  But low interest rates are a consequence of weak economic growth and weak inflation.  How are companies going to consistently produce strong earnings per share (EPS) growth–the kind that would be needed to underpin 10% total returns for shareholders over the long-term–in an environment of weak economic growth and weak inflation?

Up to now in the current recovery, and really over the last 10 years, profit margin expansion and share buybacks have been the primary drivers of EPS growth for U.S. equities.  They are the reasons that strong EPS growth has been possible amid the persistent softness in economic growth and inflation (softness that has depressed the corporate top-line, but that has also provoked zero interest rates and an elevated P/E multiple).  Can profit margin expansion and share buybacks continue to be robust drivers of EPS growth, indefinitely, even as shares become more and more expensive for corporations to buy back, and as the income imbalances between capital and labor, the rich and everyone else, get closer and closer to the limits of economic and societal stability?  There are good reasons to think not.
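The arithmetic behind this claim can be sketched directly.  EPS equals revenue times profit margin divided by the share count, so EPS can grow briskly even with a flat top line if margins expand and buybacks shrink the denominator.  The figures below are hypothetical, chosen only to illustrate the decomposition:

```python
# Sketch of the EPS-growth decomposition: EPS = revenue * margin / shares.
# All figures are hypothetical, for illustration only.

def eps(revenue, margin, shares):
    return revenue * margin / shares

base = eps(100e9, 0.08, 1e9)               # $8.00 per share

# Five years later: flat revenue, margin up from 8% to 10%, and 3%/yr buybacks
later = eps(100e9, 0.10, 1e9 * 0.97 ** 5)  # ~$11.65 per share

annualized_growth = (later / base) ** (1 / 5) - 1
print(round(annualized_growth * 100, 1))   # ~7.8%/yr EPS growth, zero revenue growth
```

Remove either lever (hold the margin at 8%, or the share count at 1B) and the implied EPS growth falls sharply, which is the author's point about the fragility of the current EPS trajectory.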

Because long-term equity returns in the U.S. are likely to be sub-par, many investors have turned to foreign equity markets for better opportunities.  Where is the value in the equity world right now?  According to the Shiller CAPE, a popular technique for measuring value across economic cycles, the value is in Europe, specifically, the distressed countries of the Eurozone.

In my view, out of all of the countries of the Eurozone, the most interesting from an investment perspective is Ireland.  As a country, it has all of the features needed for strong long-term equity performance, features that many of its cousins in the Eurozone lack: a productive, highly-skilled, flexible labor force, capital-friendly, pro-business government policies, and a young, growing population in a demographic sweet spot.  To complete the investment case, Irish stocks are apparently very cheap, with the Irish index sporting a Shiller CAPE under 10.

It would seem, then, that Ireland is set up to produce spectacular returns.  But there’s a problem.  If you look closely at the actual names that make up the Irish index, you will be hard-pressed to find significant value.  The following table shows the constituents of the ISEQ 20, Ireland’s benchmark, sorted by market cap weighting as of March 2014:


Most of these companies enjoy above-average valuations.  That’s to be expected, as the companies are high-quality.  Glanbia?  Kerry Group?  Smurfit Kappa? Aryzta?  These are growing, thriving businesses.  They deserve to be priced as such.

There are two potential cases of deep value in the index: The Bank of Ireland and the building material producer CRH.  But these companies together only make up 29% of the index capitalization.  The majority of the index–71%–is composed of companies that are not deep value.  How can an index represent deep value when 71% of its constituents are not deep value?  How can an index trade at a CAPE below 10 when 71% of its constituent companies sport CAPEs significantly higher than that number?  Where exactly is the low CAPEness coming from?

Enter the Ship of Theseus paradox.  It turns out that many of the companies presently in the ISEQ 20 are new entrants, having replaced the financial roadkill that died off in Ireland’s massive housing bubble and subsequent banking crisis–roadkill that includes Anglo-Irish Bank, Allied Irish Banks, and so on.  That roadkill is gone, forever, having either been nationalized or diluted into oblivion.  But, crucially, the earnings that it generated during the bubble, from 2004 to 2008, are still part of the ISEQ’s earnings per share for those periods, and are therefore getting credited in the CAPE calculation for the index.

The following table shows constituents of the ISEQ 20 as of March 2014 alongside the constituents as of January 2007.


As you can see, there has been substantial turnover in the index.  The current ISEQ 20 has one major commercial bank, The Bank of Ireland, which represents ~9% of the index’s total capitalization and 0% of the index’s current earnings.  The ISEQ 20 of 2007, however, had three major commercial banks, which together made up ~40% of the index’s total capitalization and an even greater share of its earnings at the time.

The Shiller CAPE is a tool for detecting hidden value.  During cyclical weakness, a company’s classic trailing-twelve month (ttm) P/E ratio will be abnormally elevated by temporarily depressed earnings, and will therefore give an inaccurate picture of the company’s future earnings potential.  To get around this problem, we use the Shiller CAPE, which compares the company’s price to the average earnings that the company generated over the previous 10 years.  The 10 year average of earnings gives a more complete picture of the earnings that the company can be expected to produce in the future, when conditions return to normal.

At the index level, the same logic applies.  We compare the price of the index with the average of the index’s earnings over the previous 10 years, to get an accurate picture of the earnings that the index can be expected to produce when conditions return to normal.  In this case, however, there’s a really big problem.  The index has undergone a radical makeover.  It’s not the same index anymore.
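Before examining where that logic breaks, it helps to make the metric itself concrete.  The following is a minimal sketch of the standard Shiller CAPE calculation–current price divided by the 10-year average of inflation-adjusted EPS–using hypothetical EPS and CPI figures:

```python
# Minimal sketch of the Shiller CAPE: price / 10-year average of real EPS.
# The EPS and CPI inputs below are hypothetical, for illustration only.

def shiller_cape(price, eps_history, cpi_history):
    """price: current price; eps_history: last 10 years of nominal EPS;
    cpi_history: CPI for each of those years (last entry = current year)."""
    current_cpi = cpi_history[-1]
    # Restate each year's EPS in today's currency units
    real_eps = [e * current_cpi / cpi for e, cpi in zip(eps_history, cpi_history)]
    return price / (sum(real_eps) / len(real_eps))

# Hypothetical index: flat real earnings near 5.0/share, price of 100,
# CPI rising 2 points per year
eps = [5.0] * 10
cpi = [100 + 2 * i for i in range(10)]
print(round(shiller_cape(100.0, eps, cpi), 1))  # → 18.4
```

The calculation silently assumes that the same entity stands behind every year's "E"–the assumption that index turnover and dilution violate.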

Conceptually, it doesn’t make sense to expect a normalization in Ireland’s economic condition to catapult the ISEQ’s earnings back to the levels seen from 2004 to 2008, when the ticking time bomb of a highly-leveraged banking system was the engine of profit growth. Of the three banks that made up the majority of the index’s earnings at the time, two are no longer in the index, and the other is an unrecognizable version of its former self, having undergone a massively dilutive recapitalization.

The Shiller CAPE: The Dilution Distortion

It turns out that there is an even more significant illusion being produced here.  To illustrate the illusion, I’m going to present calculations of the Shiller CAPEs of individual companies in the Irish index.  Note that the data, the majority of which is taken from GuruFocus, may contain minor errors, specifically related to the capitalization and share count of the companies, given how complex the changes have been since the crisis. Regardless, the calculations are adequate to illustrate the underlying process at play.

Consider the Bank of Ireland, whose CAPE calculation is shown in the table below:


Notice the column “Shares (MM).”  As you can see, there’s been a huge explosion in the shares outstanding of the Bank of Ireland, obviously related to the massively dilutive recapitalization that the bank was forced to undergo in conjunction with the financial crisis.

Let’s think about how this dilution might impact the CAPE.  In an extreme dilution, a company’s share price will fall by orders of magnitude–appropriately.  In the case above, the price fell from over 900 to 15.  In the CAPE calculation, the appropriately-collapsed price will be compared with the company’s past earnings per share, earnings that were earned when the share count was orders of magnitude smaller than it currently is.  The dilution-depressed current price per share will thus get measured against an artificially inflated past earnings per share, a number that in no way reflects the company’s future earnings potential. Users of the metric will therefore walk away with a completely false picture of the company’s valuation.

It turns out that there is an additional illusion associated with the dilution.  One would expect the crisis that caused the dilution to have produced a period of negative earnings that will get averaged into the Shiller CAPE, negating at least a portion of the artificial earnings excess of the boom.  To be sure, in the case of the Bank of Ireland, those negative earnings did come through.  Crucially, however, they were “registered” during the same reporting period as the dilution.  The losses were therefore diluted over an artificially large number of shares, producing a relatively small per share loss (relative to the large per share gains that were enjoyed during the boom).

To make the point more clear, let’s get specific.  To arrive at a per-share basis, the $1B to $2.5B that the Bank of Ireland earned each year in the pre-crisis period is being divided by the pre-dilution number of shares, approximately 24 MM.  But then, after the crisis, the subsequent $4B loss is being divided by the post-dilution share count, a number ranging from 100 MM to 750MM.  The result is what you see above.  The Bank of Ireland appears to have earned $50 to $100 per share per year during the boom times, and to then have lost only around $20 per share during the entirety of the bust.  When you average these per-share numbers together to compute the average earnings, you get a deceptively high average, and therefore a deceptively low Shiller CAPE.

Now, to eliminate this distortion, what I’ve done in the table is calculate the CAPE on an absolute basis in addition to on a per-share basis.  Instead of comparing the price per share to the average real earnings per share over the last 10 years, the “Absolute CAPE” compares the current market capitalization to the average real net income over the past 10 years, with both numbers unadjusted for share count.  On this absolute basis, the CAPE for the Bank of Ireland rises from a ridiculously cheap 0.40 to a seemingly expensive 20.62.
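The mechanics of the distortion can be sketched in a few lines.  The figures below are rough and illustrative, loosely based on the Bank of Ireland numbers cited above (boom earnings of ~$1.5B/yr on ~24 MM shares, a ~$4B bust loss spread over a post-dilution share count)–they are not exact data:

```python
# Sketch of the dilution distortion: per-share CAPE vs absolute CAPE.
# Figures are rough and illustrative, not exact Bank of Ireland data.

boom_net_income = [1.5e9] * 8           # ~$1.5B/yr during the boom years
bust_net_income = [-2.0e9] * 2          # ~$4B of crisis losses over 2 years
boom_shares, bust_shares = 24e6, 500e6  # pre- vs post-dilution share counts

# Per-share averaging: boom EPS is enormous, the bust loss per share is tiny,
# because the loss is spread over the diluted share count
eps = [ni / boom_shares for ni in boom_net_income] + \
      [ni / bust_shares for ni in bust_net_income]
avg_eps = sum(eps) / len(eps)

# Absolute averaging: share counts never enter the calculation
avg_net_income = sum(boom_net_income + bust_net_income) / 10

price_per_share = 15.0                  # dilution-depressed share price
market_cap = price_per_share * bust_shares

per_share_cape = price_per_share / avg_eps   # ~0.3: looks absurdly cheap
absolute_cape = market_cap / avg_net_income  # ~9.4: far less cheap
print(round(per_share_cape, 2), round(absolute_cape, 2))
```

Same company, same history–yet the per-share CAPE signals a once-in-a-generation bargain while the absolute CAPE does not.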

The same distortion emerges to an even greater degree in the case of Allied Irish Banks, whose CAPE calculation is shown in the table below:


Uncorrected for the dilutive distortion, Allied Irish shows a CAPE of 1.99.  But the absolute CAPE is actually steeply negative, indicating that, on an absolute basis, the bank lost more money in the crisis than it earned during the boom.  Is that surprising? It shouldn’t be–over the long-term, bubble-bust finance is not a good business.  You eventually get completely wiped out.

There are similar distortions associated with the way in which troubled companies tend to exit the index.  Troubled companies often get delisted and removed from an index before all of their losses have been taken, allowing the index to escape from the losses scot-free, even though the corresponding gains were registered without hindrance.  Worse yet, with some types of indices, even when the troubled company does remain in the index to register its losses, the publishers of the index don’t count the losses in the index earnings, because the losses represent one-time, non-recurring events.

I would have included a CAPE analysis of Anglo-Irish Bank, but it represents a prime example of an exit distortion, having been nationalized in 2009.  From 2004 to 2008, it earned a profit typical of the other banks, on the order of around $1B per year.  But then, in 2010, well after it had been nationalized and removed from the ISEQ, it took a cool $15B impairment loss, a loss that, if registered in the index, would have more than wiped out any profit that the bank contributed during the boom.  How convenient.  The ISEQ is able to count Anglo-Irish’s highly artificial profits earned during the boom, but then when the bust comes along, and it’s time for Anglo-Irish to drop its turd, the bank is already long gone from the index.  Its turd gets dropped in a black hole, leaving the ISEQ’s earnings unaffected.

Now, let’s look at the CAPEs of some of the larger non-financial companies in the ISEQ. First, CRH:


As the table shows, there’s clearly some value in CRH.  But it’s nothing to write home about.  Notice that the company experienced the same dilutive effect that the banks did, albeit to a much smaller degree.  The absolute CAPE is 3 points higher than the per share CAPE, owing to the fact that the share count has increased by almost 50% over the period.

CRH is easily the cheapest non-financial stock in the Irish stock market, and yet its CAPE isn’t even below 12.  We should therefore be extremely suspicious when we see the Irish stock market as a whole register a CAPE below 10.  Trivially, a country cannot have a CAPE lower than the CAPE of its cheapest current constituent.  If it does, then something has necessarily gone wrong in the analysis.
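That bound follows directly from how an index CAPE is computed: total market capitalization divided by total smoothed earnings.  As long as every constituent's smoothed earnings are positive, the aggregate ratio must land between the cheapest and dearest constituent CAPEs, as this sketch with hypothetical constituents shows:

```python
# An index CAPE computed as total cap / total smoothed earnings must lie
# between the cheapest and dearest constituent CAPEs (given positive
# smoothed earnings for all constituents). Hypothetical constituents:

constituents = [  # (market cap $B, 10-yr average earnings $B)
    (30.0, 2.5),  # constituent CAPE 12.0 (cheapest)
    (50.0, 2.5),  # constituent CAPE 20.0
    (40.0, 1.0),  # constituent CAPE 40.0 (dearest)
]

index_cape = sum(c for c, e in constituents) / sum(e for c, e in constituents)
capes = [c / e for c, e in constituents]

print(round(index_cape, 1))  # → 20.0, between min (12.0) and max (40.0)
assert min(capes) <= index_cape <= max(capes)
```

An index CAPE of 10 alongside a cheapest constituent at 12 is therefore only possible if negative or phantom earnings–from diluted or departed constituents–have contaminated the denominator.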

Here is the CAPE for Ryanair, Ireland’s premier airline, which makes up about 9% of the ISEQ:


Ryanair sports a CAPE of 36.45–hardly a case of deep value.  Notice that its per share CAPE is actually higher than its absolute CAPE.  The reason is that it’s been shrinking its shares, rather than growing them.

Here is the CAPE for Kerry Group, a food producer in the ISEQ:


Again, a very high CAPE, on par with the CAPE levels that you might see in a growing U.S. company.  The per share CAPE is roughly the same as the absolute CAPE because there’s little change in the share count.

U.S. Banks: Similar CAPE distortions?  

The U.S. banking sector is often cited as the cheapest sector of the U.S. equity market.  It may be the cheapest sector–I’m not going to argue that point.  But the CAPE should not be what leads us to this conclusion.  The CAPE is not a conceptually valid way of measuring value in a post-crisis environment where share counts have appreciably changed.

The same distortion that we saw in the CAPEs of the Bank of Ireland and Allied Irish is present in the CAPEs of America’s junky financial analogues.  Consider, for example, the CAPE of Bank of America ($BAC), calculated below:


As you can see, $BAC suffered significant dilution in the crisis aftermath, simultaneous with its post-crisis writedowns, creating a distortion in its per share CAPE.  The per share CAPE comes in at 8.14, when the absolute CAPE is 19.29.

The following table shows the CAPE calculation for another junky financial, Citigroup ($C):


Again, the same distortion is present, to an even greater degree, given the greater dilution. The per share CAPE is 4.77, whereas the Absolute CAPE is 18.25.

Energy Companies: A More Benign Distortion

It turns out that the Shiller CAPE also creates distortions in the valuation of energy companies.  The reason is that energy companies generate earnings off of a depreciating asset base.  The appropriate way to value them is not to look at their past earnings, generated on assets that are now used up, but to conduct a discounted cash flow analysis of the future earnings that they will generate on their current asset base, as that base depletes away.

Consider, as an example, the case of Total, the integrated French oil company.  The following table shows Total’s Shiller CAPE:


As you can see, Total trades at a very attractive CAPE relative to the market.  It also trades at an attractive ttm P/E ratio–and always has.  The reason that it trades at an attractive CAPE and ttm P/E ratio is that its past earnings are not directly relevant to its current value.  What is relevant to its current value is the ratio of its price to the discounted sum of its future earnings, earnings that will be generated as its finite oil reserves are drilled out of the ground and sold.  How plentiful are those reserves?  What is their quality?  How expensive will drilling them out of the ground be?  From a valuation perspective, these are the questions that matter.

The intrinsic value of an asset is the discounted sum of its future cash flows.  If you have a company with recurring cash flows generated off of a sustainable asset base, then it makes sense to use trailing metrics like the CAPE, the ttm P/E ratio, and the ttm dividend yield to approximate the value.  But if you have an energy company with an asset base that depletes every time product is pumped out and sold, an asset base that is difficult and costly to replace through new discovery, then these metrics will not provide an accurate picture of the value.

Discounted at 10%, the net present value of Total’s proven oil and gas reserves is $47B. The company trades at a market capitalization of $139B, with an enterprise value, including net debt, that is even higher.  On those numbers, Total is hardly a case of deep value. To the contrary, it appears to be overvalued–by at least 200%.  Before we jump to that conclusion, however, let’s consider a few points:

  • 10% may be too large of a discount rate to apply to the assets in the present interest rate environment.  Of course, lowering the discount rate won’t fully alleviate the apparent overvaluation.  Even at a 0% discount rate, the net present value of Total’s oil and gas assets is only $105B (and that’s before the recent oil price drop).
  • Total is an integrated company, and generates profit from refining and marketing in addition to production.  The profits associated with its refining and marketing arms have to be included in the valuation analysis, just as its net debt has to be included.
  • Proven reserves are often only a conservative estimate of the quantity of oil and gas that an energy company has access to and will be able to produce and sell over time. Given that the company trades at a 50% premium to its undiscounted proven oil and gas reserves, the market probably expects Total’s unproven reserves to be significant, possibly even larger and more valuable than its proven reserves.
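The reserve-valuation approach described above can be sketched as a simple discounted cash flow over a finite, declining production stream.  The production profile, decline rate, and cash-flow figures below are hypothetical–they are not Total's actual reserve data–but they illustrate how heavily the discount rate bears on the answer:

```python
# Sketch: valuing a depleting reserve base by DCF rather than trailing
# earnings. Production profile and cash flows are hypothetical.

def reserve_npv(annual_cash_flows, discount_rate):
    """NPV of a finite stream of cash flows from a depleting asset base."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(annual_cash_flows, start=1))

# Hypothetical: ~$10B/yr of production cash flow, declining 8%/yr as the
# reserves deplete, exhausted after 20 years
flows = [10e9 * 0.92 ** t for t in range(20)]

print(round(reserve_npv(flows, 0.10) / 1e9, 1))  # NPV at a 10% discount, $B
print(round(reserve_npv(flows, 0.00) / 1e9, 1))  # undiscounted total, $B
```

In this hypothetical, the 10% discount rate cuts the value of the stream roughly in half relative to the undiscounted total–the same order of sensitivity seen in the $47B-versus-$105B spread for Total's proven reserves cited above.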

The point, however, is that an analysis of the cash flow that will be generated out of Total’s current oil and gas assets, and not an analysis of the cash flow that it generated last decade, off of assets that have long since been converted into carbon dioxide, is what will determine the price that oil and gas investors will be willing to pay to own the company, and the price that they should be willing to pay.  That’s why Total trades at depressed P/E and CAPE multiples.  P/E and CAPE multiples simply are not relevant considerations in the oil and gas valuation process.

Reasons to Be Skeptical of European Value

If you examine the indices of the countries in Europe that are allegedly offering investors deep value, you will notice that these indices are heavily allocated to financials and to energy as sectors.  In cases such as Ireland where the indices are not heavily allocated to the financial and energy sectors, there’s little deep value to be found.

The following table shows the CAPEs of important European countries, borrowed from Star Capital’s fantastic interactive website, alongside the country allocations to the financial and energy sectors in the respective MSCI indices.


As we see, the countries have low CAPEs, but they also have lopsided allocations to the financial and energy sectors.  In fact, there’s an apparent pattern: the higher the allocation to the financial and energy sectors–especially the financial sector–the lower the CAPE.

The heavy exposure to the financial sector substantially increases the risks of distortion, particularly given the credit bubble and subsequent crisis that Europe experienced. Greece, with a whopping 56% allocation to financials, and an unrecognizably low CAPE of 3.5, is particularly suspect in that respect. Where is its ultra-low CAPE coming from?  My guess: not from healthy businesses selling at attractive prices, but from crashed-out zombie banks that are distorting the index.

Of all of the regions in the world, Europe offers what is clearly the worst fundamental backdrop for investment.  The continent is overregulated, with inflexible labor laws and a generally business-unfriendly political climate, at least in certain countries.  The continent’s household, corporate, and financial sectors are heavily-indebted.  The population is in clear demographic decline.  The different countries that make up the continent have different cultural and competitive dynamics, yet are all trapped in a single currency union.  The exchange rates between the countries are therefore unable to naturally adjust so as to bring payment balances into line.

As if these structural headwinds weren’t enough, the monetary authority in Europe is a joke.  It has no ability to do anything to stimulate the European economy except talk.  The northern bloc won’t allow it to do anything more.  How long have we been attending to these meetings, listening to Mario Draghi tell us about the things that he might one day do?  At every meeting, the date of eventual action is pushed off to the next meeting, or beyond.  Nothing ever happens.

Markets love monetary policy, but in truth, monetary policy has little to offer in a situation like this, where households and corporations are deleveraging, and where the population and the workforce are shrinking.  In addition to supply-side labor reforms, what Europe needs is aggressive fiscal policy.  Fiscal policy has the ability to directly and reliably increase aggregate demand.  If aggregate demand is strong, real investment will start making economic sense (it doesn’t make economic sense right now).  Real investment will therefore increase, creating new sources of employment and income, fueling further increases in aggregate demand, incentivizing additional real investment, and so on, in a virtuous cycle.  For such a cycle to reliably take hold in a world that faces the kinds of headwinds that Europe faces, there needs to be an aggressive commitment on the part of policymakers to take whatever fiscal actions are necessary to keep aggregate demand strong–to intentionally and unapologetically run the economy hot, even if this means dropping freshly printed euros from a helicopter in the sky.

On that front, Europe could not possibly be worse off.  The weaker countries that need aggressive fiscal stimulus have no ability to borrow in their own currencies.  To conduct fiscal expansion, they have to get the permission of a separate country, Germany, a country with an obsessive fear of inflation and government debt, that does not have to share in any of their pains.

We can celebrate the fact that Mario Draghi said something, and that markets around the world rallied, but we should not let superficial price action blind us to the fact that the project of the Eurozone is an unsustainable mess.  The union is going to have to eventually dissolve, or at least undergo a substantial makeover.  Such a change is sure to bring turmoil to European financial markets, whether it comes next year, 5 years from now, or 20 years from now.  European investors deserve to be appropriately compensated for the risk.

Are they being appropriately compensated?  It’s not clear.  From a Shiller CAPE standpoint, it looks like they are being compensated, but that’s likely to be a result of the high financial and energy sector exposures that European indices contain.

Interestingly, U.S. investors can find the “deep value” that allegedly exists in Europe right here at home, in their own backyards.  All they have to do is go to the sectors that dominate European indices–financials and energy.  If they want a low CAPE, they can buy low-quality U.S. banks that were forced to recapitalize in the credit crisis–$BAC and $C, for example–or large cap integrated oil companies that trade on the productivity of their underlying oil and gas assets, rather than on P/E ratios–$CVX and $XOM, for example. These companies sport Shiller CAPEs that are just as low as the deep value companies of Europe.  There’s hardly a difference, for example, between the Shiller CAPE of a $BAC and that of a Banco Santander ($SAN), or the Shiller CAPE of a $CVX and that of an Eni Spa ($E).  The numbers are essentially the same.

Solutions to the Problem

As a metric, the Shiller CAPE is still useful.  It just needs to be employed with caution in countries that are coming out of large credit booms and busts, particularly those that have heavy exposure to financials, or that have had heavy exposure to financials in the past.  I’m therefore going to conclude the piece with some proposals for how investors might be able to avoid, or at least work around, the CAPE distortions that these countries give rise to.

One way would be to get under the hood of the indices themselves–examining how they’ve changed over time, how much dilution has taken place, what specific crisis-related losses have and haven’t been counted in the earnings numbers, and so on–adding whatever adjustments may be needed to allow for an accurate valuation analysis to take place.  Unfortunately, this would be a difficult task.  The data is hard to find, and would take a very long time to piece together.

Another approach would be to use indices that intentionally exclude financials, and possibly energy companies as well. Unfortunately, none of the major index publishers produce ex-financial or ex-energy indices–for Europe or for any country.  Investors would have to build them directly, which would again be very complicated and time-consuming.

A more practical approach would be to evaluate the countries using ttm valuation measures, as a sort of “second check” on the Shiller CAPE metric.  The ttm P/E ratio is often criticized for only providing a picture of the last twelve months.  But that’s actually an advantage in this context, as it eliminates “The Ship of Theseus” problem. When you look at the ttm P/E ratio for an index, you can be sure that the “E” that you are looking at in the denominator is associated with the same companies as the “P” that you are looking at in the numerator. As we saw in the case of Ireland, you cannot always be sure of this fact when you use the Shiller CAPE on an index.

One good valuation metric to use, backed up by significant academic research, is the ttm enterprise value to EBITDA (EV/EBITDA) ratio.  The advantage of ttm EV/EBITDA is that it includes net debt, which should be part of any valuation analysis, and also that it eliminates many of the non-recurring non-cash charges that tend to distort earnings, particularly around recessions.  The disadvantage, of course, is that it doesn’t count depreciation, and therefore it causes companies that have high depreciation costs, such as energy companies, to look artificially cheap.
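The metric itself is straightforward to compute.  The sketch below uses hypothetical company figures; the key point is that net debt (total debt less cash) sits in the numerator alongside the market capitalization:

```python
# Sketch of the ttm EV/EBITDA ratio, using hypothetical company figures.
# EV = market cap + total debt - cash (i.e., market cap + net debt).

def ev_to_ebitda(market_cap, total_debt, cash, ttm_ebitda):
    enterprise_value = market_cap + total_debt - cash
    return enterprise_value / ttm_ebitda

# Hypothetical: $50B market cap, $20B debt, $5B cash, $10B of ttm EBITDA
print(round(ev_to_ebitda(50e9, 20e9, 5e9, 10e9), 2))  # → 6.5
```

Because the share count never enters the calculation when it is done on aggregate figures, the metric is immune to the dilution distortion discussed earlier.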

If strictly non-cyclical measures are preferred, two additional ttm metrics that can be used to “second check” the Shiller CAPE are the ttm price to sales (P/S) ratio and the ttm price to book (P/B) ratio.  Like the P/E and EV/EBITDA ratios, these metrics only look at the prior year, and therefore avoid the “Ship of Theseus” problem.  At the same time, they solve the problem of cyclicality, given that sales and book values do not significantly fluctuate across the business cycle.

The problem with P/S and P/B ratios is that they tend to be different for countries that have different sectoral compositions.  Naturally, countries with higher allocations to high margin and high ROE sectors will tend to exhibit higher P/S and P/B ratios than those dominated by low margin and low ROE sectors.  We don’t necessarily want to penalize them for that in the analysis.  Additionally, for the P/B ratio, not all countries write down their assets using the same standards.  European companies, for example, did not take the “goodwill” writedowns that U.S. companies took during the crisis.  For that reason, their P/B ratios tend to be lower, as explained in this analysis from KPMG.

A clean way around this problem would be to normalize the P/S and P/B ratios of different country indices to reflect the different sectoral compositions that those country indices exhibit and to reflect an application of the same writedown accounting standards.  Then, an apples-to-apples comparison between countries would become possible. Unfortunately, such a project would be too difficult and too time-consuming to put into motion.

When we check Ireland’s CAPE against its ttm P/S and P/B ratios, we quickly notice that our prior suspicions were correct: Ireland is not a case of deep value.  The country trades at a P/B ratio of 2.3 and a P/S ratio of 1.4, both of which register as expensive in comparison with the rest of the globe.  To be clear, Ireland may still be an attractive long-term investment opportunity–it probably is, given its many strengths–but the reason has nothing to do with its apparent status as deep value.

Fortunately, when we check the CAPE of the more-distressed PIIGS countries–Portugal, Italy, Greece, and Spain–against their respective P/S and P/B ratios, the countries continue to register as cheap.  It’s probably true, then, that the countries represent deep value–specifically, deep value concentrated in the financial sector, and to a lesser extent, the energy sector.  With respect to Greece, however, the P/B and P/S ratios, at 1.0 and 0.5 respectively, are not as cheap as would be expected given the 3.5 CAPE, which is almost half that of the closest competitor. Something is likely wrong with that number.

A final solution would be to not discriminate at all on the basis of country borders. If we’re looking for international value, let’s look for international value, in whatever country it happens to be located.  By looking strictly at individual companies, we can eliminate the need for indices altogether, bypassing the “Ship of Theseus” problems they create.

On that theme, there are a number of well-run international ETFs that take valuation factors with solid historical track records and apply them in foreign markets to locate attractive individual company opportunities.  Examples include (1) Cambria’s $FYLD, an international version of the successful $SYLD, which invests in companies that have a high shareholder yield, (2) Invesco’s $IPKW, an international version of the successful $PKW, which invests in companies that are buying back significant quantities of their own shares, and (3) Valueshares’ $IVAL, a not-yet-launched international version of the recently launched $QVAL, which invests in companies that exhibit attractive ttm EV/EBITDA ratios and that pass various quality screens.

Posted in Uncategorized | Leave a comment

Not Everyone Sucks at Investing

Judging from the financial headlines, we live in a world where everyone sucks at investing.

Hedge funds?  Consistent underperformers: this year, last year, the year before that, the year before that, the year before that.  Every year, it seems.  Just google “hedge funds” and “underperform”, to see the flurry of giddy articles that pop up.


Individual Investors?  Again, consistent underperformers.  They get excited at the tops, they panic at the bottoms, they do everything exactly backwards to the maximum extent possible.  The published numbers here are quite ugly: according to Dalbar’s 2013 QAIB publication, the average individual equity fund investor has earned a pathetic 3.69% annualized return over the last 30 years, versus the S&P’s 11.11% (note: the QAIB report may contain distortions).

How is such consistent underperformance possible?  The answer, we are told, is behavioral. Investors, of both the professional and the layman stripe, tend to herd.  They prefer to do what everyone else is doing.  And so they end up buying when assets are in high demand, at the worst possible prices, and selling when assets are out of favor, again at the worst possible prices.

There’s an obvious problem with this narrative, which you’ve probably already noticed. For every party in a trade, there is a counterparty–for every buyer, a seller, for every seller, a buyer.  There must, then, be an outperforming counterparty to the underperforming average investor, the underperforming average hedge fund, the underperforming average day trader, the underperforming average endowment, and whoever else underperforms on average.  Someone had to be smartly selling to those groups in 2000 and 2007, for example, when they were frantically trying to get in, and smartly buying from them in 2003 and 2009, when they were desperately trying to get out.  Who–what group–is that someone?  And why doesn’t the financial media ever celebrate its achievements?

I’m glad to be able to tell you that I am a member of that group.  Over the last 15 years, I have compounded my own capital at a 35.9% annual rate, profiting handsomely from the ill-timed and ill-advised decisions of “average” individual investors, mutual fund managers, and hedge fund managers alike.  And don’t be fooled; it’s not just me.  A lot of us do quite well, thankfully.

Of course, everything I just said is a bald-faced lie.  So don’t worry.  I’m not better at life than you.  But how did it make you feel to read about someone else’s spectacular performance?  Probably not very good.  That’s why the media prefers the “everyone sucks” headline.  It makes for fun, satisfying, ego-pleasing reading.

The truth is this.  Investors in aggregate are the market.  Before frictions (fees, transaction costs, etc.), they cannot underperform.  Nor can they outperform.  For they would be underperforming and outperforming themselves, and that is obviously impossible.  Now, if we arbitrarily divide the market into different categories of participants–individual investors, hedge funds, pension funds, corporations, and so on–then it would be possible for some categories to consistently underperform others (note that this would create tension for the efficient market hypothesis–pure negative alpha is, in fact, a type of alpha). But, necessarily, the other categories would be outperforming.
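The accounting identity can be made concrete.  If we know the market’s return and the returns of every identified category, the residual category’s return is fully determined, because the weighted average must equal the market.  A small sketch, with hypothetical weights and returns:

```python
def residual_group_return(market_return, groups):
    """Given the pre-friction market return and (weight, return) pairs for
    every identified group, back out the return that the residual group must
    have earned.  Weights are shares of total market capitalization."""
    known_weight = sum(w for w, _ in groups)
    known_contribution = sum(w * r for w, r in groups)
    return (market_return - known_contribution) / (1 - known_weight)

# Hypothetical year: the market returns +8%.  Individual investors (40% of
# the market) earn +5%, hedge funds (20%) earn +6%.  The remaining 40% of
# the market must then have earned +12% -- someone is on the other side.
print(round(residual_group_return(0.08, [(0.40, 0.05), (0.20, 0.06)]), 4))  # → 0.12
```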

What category of investor, then, is consistently outperforming the market, against the consistent underperformance of hedge funds, individual investors, and other losers?  You will be hard pressed to find an answer.  An obvious candidate would be the corporate sector, which has, in recent years, taken large amounts of equity out of the market through share buybacks and acquisitions, effectively forcing the rest of the market to be net sellers.  The problem with citing corporations as the clever counterparties, however, is that corporate managers exhibit the same herding tendencies as the rest of the market.  According to Z.1 data, they too prefer to buy high and sell low, having bought heavily around the 2000 and 2007 peaks, and having sold at the 2003 and 2009 troughs.


Part of the problem here is that we arbitrarily treat the S&P 500 as “the market”, the benchmark for evaluating performance.  But the S&P 500 is not a reasonable benchmark to use, since investors in aggregate do not allocate the entirety of their portfolios to U.S. equities.  Indeed, investors in aggregate cannot allocate the entirety of their portfolios to U.S. equities–if they tried, prices would go to infinity.  The strategy of devoting an entire portfolio to U.S. equities, which may look brilliant right now given the recent performance, would necessarily become a bad idea (if it isn’t already a bad idea).

The appropriate benchmark for performance evaluation is the global asset market, which includes all global assets: stocks from all countries, bonds from all countries, real estate in all countries, and, importantly, cash from all countries (commercial paper, government bills, bank deposits, and so on).  Over the long-term, some groups will surely outperform this market.  If the efficient market hypothesis is true, we should expect it to be those groups that choose to accept the most risk in the choice of what they own.  If the efficient market hypothesis is not true, then we should expect it to also include those groups that possess skill, that manage to own the right assets at the right prices at the right times.

Similarly, some groups will surely underperform the global asset market, because those groups choose to take on less risk than the global asset portfolio contains (making it possible for other groups to take on more risk), because those groups lack skill (making it possible for other groups to demonstrate skill), or because those groups stupidly accept unnecessary frictions–management fees uncompensated by skill, overtrading with high commission costs across large bid-ask spreads, and so on (making it possible for financial middlemen to earn a living).  But the point is, with performance properly measured, it’s not possible for everyone to consistently underperform.  Not everyone sucks at investing.


Valuation from All Angles: S&P 500, Russell 2000, and the 10 GICS Sectors

(Many thanks to the must-follow @ElliotTurn for valuable help and feedback in the development of these charts and tables)

In this piece, I’m going to present a series of charts and tables that seek to efficiently convey the state of U.S. equity valuations from all available vantage points–that is, “from all angles.”  Note that a convenient slideshow aggregating the tables and charts together is presented at the bottom.

S&P 500:

The following “ttm” chart shows trailing-twelve month (ttm) values and ratios from 1996 to 2014 (click on the chart to enlarge):


(Legend: The squares show the following metrics (1 to 20, left to right, top to bottom): (1) real price returns and real total returns (with dividends reinvested at market prices), (2) trailing-twelve month (ttm) dividend yields, (3) ttm price to earnings (P/E) ratios and fwd P/E ratios based on analyst estimates, (4) ttm enterprise value to earnings before interest, taxes, depreciation and amortization (EV/EBITDA) ratios, (5) ttm price to EBITDA (P/EBITDA) ratios, (6) real ttm sales and book value growth, (7) real ttm dividend growth, (8) real ttm EPS growth, (9) real ttm EBITDA growth, (10) annualized inflation rates for the prior 6 years and long-term government bond yields, (11) ttm price to sales (P/S) ratios, (12) ttm dividend margins (ttm dividends as a % of sales), (13) ttm EPS margins (ttm EPS as a % of sales), (14) ttm EBITDA margins (ttm EBITDA as a % of sales), (15) interest, taxes, depreciation and amortization (ITDA) as a % of EBITDA (which gives a picture of how much the earnings are being reduced by those expenses at any given time–very important), (16) price to book (P/B) ratios, (17) ttm dividend payout ratios (ttm dividends divided by ttm EPS), (18) ttm EPS return on equity (ROE) (ttm EPS divided by book value), (19) ttm EBITDA ROE (ttm EBITDA divided by book value), (20) real net debt (debt minus cash and liquid assets, i.e., the difference between enterprise value and price).  The dotted black line in each chart shows the metric’s average for the period.)

The following “Shiller” chart shows different types of Shillerized valuations from 1996 to 2014:


(Legend: The squares show the following metrics (1 to 15, left to right, top to bottom): (1) Shiller P/E ratio (real price divided by the average of real ttm EPS seen over the prior 6 years–10 leads to too much information loss), (2) price to peak earnings (P/PkEPS) ratio (real price divided by the highest real ttm EPS reading seen over the prior 6 years), (3) Shiller EV/EBITDA (using 6 years), (4) enterprise value to peak EBITDA (EV/PkEBITDA) ratio (using 6 years), (5) Shiller price to EBITDA ratio (using 6 years), (6) real Shiller EPS (average of real ttm EPS over the prior 6 years) and real peak EPS (highest ttm EPS seen over the prior 6 years), (7) real Shiller EBITDA (average of real ttm EBITDA over the prior 6 years), (8) real peak EBITDA (highest ttm EBITDA seen over the prior 6 years), (9) real ttm sales and real ttm book value, (10) – (14) margins and ROEs for all Shiller and peak metrics (Shiller EPS / sales, Shiller EPS / book value, peak EPS / sales, peak EPS / book value, Shiller EBITDA / sales, Shiller EBITDA / book value, peak EBITDA / sales, peak EBITDA / book value), (15) asset turnover, i.e., sales / book value.)
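For concreteness, the first two metrics in the legend can be sketched as follows.  The price and EPS series are hypothetical; the only substantive choice, per the legend, is the 6-year window rather than Shiller’s usual 10:

```python
def shiller_pe(real_price, real_ttm_eps_history, years=6):
    """Shiller P/E as used here: real price over the average of real ttm EPS
    for the trailing `years` years."""
    window = real_ttm_eps_history[-years:]
    return real_price / (sum(window) / len(window))

def price_to_peak_eps(real_price, real_ttm_eps_history, years=6):
    """P/PkEPS: real price over the highest real ttm EPS of the window."""
    return real_price / max(real_ttm_eps_history[-years:])

# Hypothetical series of annual real ttm EPS readings, price of 100:
eps = [8.0, 10.0, 12.0, 6.0, 9.0, 11.0]
print(round(shiller_pe(100.0, eps), 2))         # → 10.71
print(round(price_to_peak_eps(100.0, eps), 2))  # → 8.33
```

Note how the peak-earnings version discards the depressed readings entirely, while the Shiller version averages them in, which is why the two can diverge sharply after a recession.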

The following table presents data from the above charts in numeric form.


(Legend: The upper left quadrant shows valuation metrics as of the close on 11/13/14 and the average for the period (along with the delta between the present value and the average). The upper right quadrant decomposes the returns into dividends, growth in fundamentals, and changes in valuation for three different fundamental bases: price to sales, Shiller P/E, and Shiller P/EBITDA.  Note that the “true ROE” of the corporate sector is the return that it would produce in a given period if valuation were held constant during that period.  Thus the true ROE equals the dividend return plus the return due to growth in the given fundamental (which will necessarily equal the growth in the price if the valuation relative to that fundamental stays constant).  The lower left quadrant shows margins and ROE as of the close on 11/13/14 and relative to the average for the period.  The lower right quadrant shows valuation metrics relative to the long-term government bond yield.)
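The decomposition described in the upper right quadrant can be sketched as follows.  All figures are hypothetical; the identity holds for any chosen fundamental (sales, Shiller EPS, Shiller EBITDA):

```python
def decompose_return(price_0, price_1, fundamental_0, fundamental_1, dividends):
    """Split a period's total return into the dividend return, growth in the
    chosen fundamental, and the change in the valuation multiple on that
    fundamental.  The 'true ROE' of the legend is dividend_return + growth."""
    growth = fundamental_1 / fundamental_0 - 1
    revaluation = (price_1 / fundamental_1) / (price_0 / fundamental_0) - 1
    dividend_return = dividends / price_0
    total = (price_1 + dividends) / price_0 - 1
    # Identity: total == (1 + growth) * (1 + revaluation) - 1 + dividend_return
    return dividend_return, growth, revaluation, total

# Hypothetical: price 100 -> 110, sales per share 50 -> 55, $2 of dividends.
# The multiple is unchanged at 2.0x, so the "true ROE" is 10% growth + 2% dividends.
print([round(x, 4) for x in decompose_return(100.0, 110.0, 50.0, 55.0, 2.0)])
# → [0.02, 0.1, 0.0, 0.12]
```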

I will now present the same charts and tables for the Russell 2000 and the 10 GICS sectors–Consumer Discretionary, Consumer Staples, Energy, Financials, Healthcare, Industrials, Materials, Technology, Telecom, and Utilities, in that order–in slideshows.

Slideshow: TTM Charts

Here are all of the “ttm” charts (ttm valuation ratios, growth, margins, ROEs, inflation, government bond yields, etc.) in a slideshow (going from upper left to lower right: SPX, R2K, Discretionary, Staples, Energy, Financials, Healthcare, Industrials, Materials, Technology, Telecom, Utilities).  Click on any image to start the slideshow there:

Slideshow: Shiller Charts

Here are all of the “Shiller” charts (Shillerized data) in a slideshow (going from upper left to lower right: SPX, R2K, Discretionary, Staples, Energy, Financials, Healthcare, Industrials, Materials, Technology, Telecom, Utilities).  Click any image to start the slideshow there:

Slideshow: Tables

Here are all of the tables (going from upper left to lower right: SPX, R2K, Discretionary, Staples, Energy, Financials, Healthcare, Industrials, Materials, Technology, Telecom, Utilities).  Click on any image to start the slideshow there:

In a subsequent piece, I will present the same charts and tables for 17 different countries.  I will also present tables that rank the sectors and countries by the different valuation and growth factors.


How Often Does the Stock Market Correct?

‘Tis the season for corrections, and so we ask: how often do they occur historically?  To answer the question, we need to precisely define the term “correction.”  If the stock market falls 20% in a straight line, most of us would interpret the move to be a single 20% correction.  But suppose that the stock market falls 10% in a straight line, then stabilizes or bounces, then falls another 10%. Would that be one correction, or two?  We need to specify.

Here, we will arbitrarily define “correction” as follows.  The market is in a correction of X% on a given day if it closes on that day at a level that is more than X% off of its closing 52 week high.  The question we will then ask is, how often is the market in a correction of X%–3%, 5%, 7%, 10%, 15%, 20% and so on?

To answer the question, we will use a total return index (daily, built from CRSP data back to January 3rd, 1928), rather than a simple price index.  The reason we will use a total return index is that in the past, companies paid out a much greater share of their earnings as dividends than they do in the present.  But dividends, when paid out, represent step reductions in corporate net worth, and therefore entail step reductions in price. Dividends thus make it more likely that the market will hit an X% correction target on price, all else equal.  By using a total return index rather than a price index, we eliminate this distortion.
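The definition can be expressed directly in code.  A minimal sketch, applied to a hypothetical series of daily total-return closes (252 trading days used as the 52-week window):

```python
def in_correction(closes, day, threshold, lookback=252):
    """True if closes[day] sits more than `threshold` below the highest close
    of the trailing 52 weeks (~252 trading days), inclusive of today."""
    window = closes[max(0, day - lookback + 1): day + 1]
    return closes[day] < max(window) * (1 - threshold)

# Hypothetical total-return index: a high of 105, now at 94 -- an ~10.5% drop.
closes = [100, 105, 103, 94]
print(in_correction(closes, 3, 0.10))  # → True
print(in_correction(closes, 3, 0.15))  # → False
```

To answer the frequency question, one would run this check on every day of the index and count the fraction of days for which it returns True, for each threshold of interest.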

We will analyze two different periods: a full period, from January 3rd, 1928 to August 28th, 2014, and a post-war period, from January 2nd, 1945 to August 28th, 2014.



(Charts: time spent in 7% and 10% corrections)







Note that the term “correction”, as we’ve defined it, isn’t very helpful in depicting the larger moves, because those moves tend to play out over periods that exceed 52 weeks.  When they do, they often escape the definition: the trailing 52 week high falls alongside the market, preventing the market from ever separating from it by the full X%.

The following charts seek to provide a more useful depiction of the larger moves.  They show all of those times where a buy and hold investor was down, on a total return basis, by more than X% from any prior all-time high.
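The all-time-high version of the calculation is a simple running maximum.  A sketch, on a hypothetical total return series:

```python
def drawdown_from_ath(total_return_levels):
    """For each day, the percentage a buy-and-hold investor is down from the
    prior all-time high of the total return index (0.0 at new highs)."""
    drawdowns, peak = [], float("-inf")
    for level in total_return_levels:
        peak = max(peak, level)
        drawdowns.append(level / peak - 1)
    return drawdowns

print(drawdown_from_ath([100, 120, 90, 130]))  # → [0.0, 0.0, -0.25, 0.0]
```

Unlike the 52-week definition, the prior peak here never decays, so multi-year bear markets register in full.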









The State of Investment Around the World

In this piece, I’m going to share a few charts on the state of investment around the world. The data is taken from FRED, and shows the percentage change in trailing twelve month real gross fixed capital formation from 1Q 2000 levels.
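As a note on methodology, here is a minimal sketch of the transformation applied to each series; the input figures are hypothetical:

```python
def pct_change_from_base(series, base_index=0):
    """Each observation's percentage change from the base period -- the form
    in which the investment series are charted here (base = 1Q 2000)."""
    base = series[base_index]
    return [100.0 * (x / base - 1) for x in series]

# Hypothetical ttm real gross fixed capital formation, indexed to 1Q 2000:
print([round(v, 6) for v in pct_change_from_base([200.0, 220.0, 180.0])])
# → [0.0, 10.0, -10.0]
```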

US, Europe, Japan, 1Q 2000 to 2Q 2014:


Notice the recent downtick in Japan.  Is the downtick a short-term effect of the consumption tax increase, or a sign that the “Abenomics” boom is already petering out?

Germany, Italy, France, Eurozone, 1Q 2000 to 2Q 2014:


Notice the investment strength that Germany has seen since the crisis.  This strength stands in stark contrast to the investment decline seen in the other countries. Interestingly, the situation is a mirror image of the situation of the prior decade, wherein German investment remained subdued while investment in the other countries experienced a boom.

The subdued investment in Germany, coupled with the unproductive investment boom experienced by the rest of the Eurozone, created wage, price and competitiveness differentials between the countries that now stand at the heart of the Eurozone problem. German goods and services are simply too cheap relative to those of the rest of the Eurozone for the countries to remain in a currency union that precludes exchange rate adjustment.

The employment boom that Germany is experiencing, as its significant competitive advantages attract consumption and investment flows from the rest of the Eurozone, is the system’s attempt to rebalance in the only way that it can.  The problem is that the rebalancing is extremely painful for the rest of the Eurozone countries, which are experiencing the opposite of what Germany is experiencing–depressed investment, high unemployment, and a tendency towards deflation.

The only politically viable mechanism for the system to restore balance is for Germany to “boom” more, to take on more inflation relative to the other countries, which will raise the relative prices of goods and services in Germany, and create conditions where consumption and investment flows begin to naturally move back in the other direction, towards the rest of the continent.  If Mario Draghi’s monetary experiments are to have any hope of saving the Eurozone, that hope will rest on this mechanism: stimulating the German economy and raising German inflation in a way that helps restore relative price competitiveness in the other countries.

Spain and Greece, 1Q 2000 to 2Q 2014:


These countries had enormous residential investment booms in the last decade, five times as large as the Eurozone in aggregate.  The booms did not lead to appreciable increases in Spanish or Greek productivity, though they increased wages and prices, which is why Spain and Greece now have a competitive deficit relative to Germany–a deficit that cannot adjust via the exchange rate, because the countries are in a single currency union.

United States, United Kingdom, 1Q 2000 to 2Q 2014:


These economies tend to track each other, despite the distance between them. The UK is doing reasonably well right now, much more like the US than the rest of Europe.

United States, Brazil, India, 1Q 2000 to 2Q 2014:


From an investment perspective, the emerging market boom of the last decade was much larger than the US housing boom.  The challenge for the emerging markets going forward will be to digest the large credit expansion associated with that boom, some of which was surely unproductive.  Both Brazil and India have significant inflation problems, completely different from the problems faced by the developed world.

United States, Brazil, India, 2Q 2011 to 2Q 2014:


Since 2011, investment in the US has actually been stronger than in India and Brazil. At present, Brazil is showing clear signs of being in recession.

Some thoughts:

In terms of balanced growth, the US is the strongest economy in the world right now. On a net basis, fiscal and monetary policy in the US are not as accommodative as they should be. But they’re close enough.  The combination of tight fiscal policy and extremely loose monetary policy is where the problem lies–it’s a suboptimal combination, given the circumstances.

Europe desperately needs a large, deficit-financed fiscal stimulus program to shore up the savings-investment gap in its private sector.  Germany needs to lead the way on that front, aggressively stimulating its own economy so as to produce higher domestic inflation. Higher inflation in Germany will make the rest of the Eurozone more competitive and will provoke a sustainable reversal of consumption and investment flows back towards the rest of the continent.  Monetary stimulus may help some at the margin, but with long-term interest rates already at record lows, with banks and households eager to deleverage, and with asset prices already elevated, particularly in the residential sector, it’s unlikely to get the job done.

Japan is at a crossroads.  The consumption tax hikes were unnecessary and ill-advised. Japanese policy makers continue to show a lack of understanding of government debt. They don’t understand what the actual risks are.

To be clear, the risk that large government debt poses in Japan, and in any depressed economy, is not the risk of a bond market “revolt.”  Governments fund themselves at the short-end of the curve, a part of the curve that they themselves fully control, through their central banks.  A “revolt” is therefore impossible.  To the contrary, the risk of large government debt is that someday, well out into the future, the economy will be in a genuine boom again.  In such an environment, higher interest rates will be necessary to contain inflation.  The government will then have to choose between leaving rates low or zero–a choice that could spur intolerably high inflation, asset bubbles, stagnant real economic activity given the lack of price stability, capital flight, unwanted currency depreciation, a currency crisis, or all of the above–or raising rates and dramatically increasing the interest cost on the debt, given the high degree of leverage.  But if the interest cost on the debt rises substantially, the government will have to engage in aggressive austerity.  Such austerity is socially divisive and economically destabilizing. And so neither choice is attractive.

But there’s no reason for Japan to forego recovery altogether, forever, simply because the government debt will be large when it is finally achieved.  Japanese policymakers need to focus on getting to the destination first–a durable, self-sustaining expansion.  Once they get there, then they can worry about implementing measures to deal with the large debt, as the heavily-indebted US and UK governments successfully did in the aftermath of World War II.  If Japan has to enact substantial tax hikes and spending cuts at some point in the future, when the economy is overheating, then fine.  The worst that will happen is that the country will end up back in a recession, which is effectively what it is trying to get out of right now.  And if the country ends up experiencing a few years, or a decade, of double-digit inflation, because the tax hikes and spending cuts are too little, too late, then fine. Worse things have happened.  The inflation will eat away at the debt in real terms and pull the system towards a stable equilibrium.

Brazil and India have classic inflation problems.  To deal with these problems, they need to institute supply-side reforms alongside tighter monetary policy, so as to ensure that capital goes to its most efficient destinations.  Brazil needs to focus more on supply-side reforms, as its monetary policy is already reasonably tight.


Free Banking on a Bitcoin Standard–The State Prepares its Death Blow


In a previous piece, we examined the inner workings of a gold-based fractional-reserve free banking system–the monetary system that was roughly used in the United States for much of the 19th century and before.  The system works as follows.  Customers deposit gold–which is the system’s actual money, legal tender–at private banks, and receive paper banknotes in exchange for it. Customers can redeem the banknotes for the gold at any time.

In such a system, the market eventually comes to accept the banknotes of credible banks as payment in lieu of payment in gold.  The banknotes become “good as gold”, operationally equivalent to the base money that “backs” them.

Importantly, banks take advantage of the fact that, on a net basis, very few banknotes actually get redeemed for gold.  This convenient fact allows them to issue a quantity of banknotes that exceeds the quantity of customer gold that they have on hand to meet redemptions.  They issue the excess banknotes as loans to borrowers in exchange for interest.  In this way, they expand the functional money supply, and make it possible for the economy to grow in a non-deflationary manner, despite being on a hard monetary standard.
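The expansion process can be sketched numerically.  The reserve ratio below is hypothetical, and the loop is a stylized model of successive note issue and redeposit, not a description of any actual bank:

```python
def note_expansion(base_gold, reserve_ratio, rounds=200):
    """Simulate rounds of fractional-reserve note issue: each round the bank
    holds `reserve_ratio` of new deposits as gold and lends the remainder out
    as fresh banknotes, which return to the system as new deposits."""
    total_notes, deposit = 0.0, float(base_gold)
    for _ in range(rounds):
        total_notes += deposit
        deposit *= (1 - reserve_ratio)
    return total_notes  # converges to base_gold / reserve_ratio

# 100 oz of gold at a 20% reserve ratio supports roughly 500 oz worth of
# circulating notes -- a 5x expansion of the functional money supply.
print(round(note_expansion(100, 0.20), 6))  # → 500.0
```

The limit of the geometric series is the familiar money multiplier, 1 / reserve_ratio, which is why a lower ratio means a larger functional money supply atop the same gold base.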

A fractional-reserve free banking system with gold as the base represents a coveted Libertarian ideal because it requires no government involvement, other than the simple enforcement of contracts.  There are no complicated and cumbersome regulatory rules to follow, no externally-imposed reserve requirements or capital adequacy ratios, no interest rate manipulations on behalf of economic, corporate, and political interests, and so on.  All that the system contains are individuals, banks, and naturally-occurring gold (legal tender, base money), with the individuals and banks free to use fractional-reserve lending to “multiply” the gold into whatever quantity of circulating paper money they wish.  Consistent with the Libertarian ideal, if they screw up, they pay the consequences.  There is no lender of last resort to come in and clean up the mess, only private entities entering into contractual agreements with each other and doing the due diligence necessary to ensure that those agreements work out.

In the modern era, it is inconceivable that any serious legislative body would choose to put an economy on a fractional-reserve free banking system.  Such systems are highly unstable, prone to bank runs and severe liquidity crises, particularly during periods of heightened risk-aversion.  That’s precisely why central banking was invented–because free-banking doesn’t work.

However, it is conceivable that the private sector, working on its own, could one day put the economy on a fractional-reserve free banking system.  The most likely way for it to accomplish this feat would be through the use of a cryptocurrency such as Bitcoin. In what follows, I’m going to explain why fractional-reserve free Bitcoin banking is a necessary condition for Bitcoin to become a dominant form of money, and how the government will easily stop its emergence and proliferation.

Economic expansion in a capitalist system is built on the following process.  Individuals borrow money and invest it.  The borrowing for investment does three things.  First, it adds capital to the economy and increases the economy’s real output capacity.  Second, it expands the operational money supply.  Third, it creates new streams of monetary income.  The new streams of monetary income are used to consume the new streams of real output that the investment has made possible.  The spending of the new income streams by those who receive them creates income for those that made the investments.  That income is used to finance the borrowing, with some left over as profit to justify the investment.  The economy is thus able to “grow”–engage in a larger total value of final transactions at constant-prices–without needing to increase its turnover of money, because it has more money in it, money that was created through the process of borrowing and investing.  The relevant economic aggregates–real output capacity, money supply, income–all grow together, proportionately, in a balanced, virtuous cycle.

Crucially, for Bitcoin to evolve into a dominant form of money, it needs to be the dominant form of money in each stage of this process.  If workers are going to get paid in Bitcoins, the investment that creates their jobs will need to be financed in Bitcoins.  If consumers are going to go shopping with Bitcoins, the associated Bitcoin revenues that their shopping creates will need to be distributed as wages and dividends in Bitcoins, or reinvested as Bitcoins.  And so on and so forth.  Trivially, we can’t just pick one part of this process and say “that’s going to be the part that uses Bitcoin.”  If Bitcoin is going to reliably displace conventional money, the whole package will need to use it.

To be clear, it’s possible that Bitcoins could become popular for use as a form of payment intermediation–in the way, that, say, a gift card is used.  You put money on a gift card, give it to someone as a gift, and they spend it.  When they spend it, the merchant that receives it converts it out of literal “plastic” form and back into money, by electronically zeroing it out and taking final claim of the money that was used to buy it.  In a similar way, even though the corporate recipients of spending have no reason to want Bitcoins–they don’t owe debts to bondholders in Bitcoins, salaries to workers in Bitcoins, dividends to shareholders in Bitcoins, or taxes to the government in Bitcoins–it is conceivable that they might still accept Bitcoins, given that there is a market to convert Bitcoins into what they do want: actual money.

But with a gift card, the intermediation is conducted for a clear reason–to eliminate the coldness and impersonality associated with giving cash as a gift, even though cash is always the most economically efficient gift to give.  With respect to Bitcoin, what would be the purpose of the intermediation?  Why, other than for techy shits and giggles (“Hey, look guys, I just bought a pizza with Bitcoins, isn’t that cool!”), or to hide illicit activity, would anyone bother to hassle with it?  Just use conventional money–in this case, dollars.  The fees associated with using dollars are imperceptible, hardly a reason to waste time with an intermediary, especially an intermediary that is extremely volatile and speculative in nature.  And it’s not even clear that those who use Bitcoin for intermediation will manage to escape fees.

When we talk about the proliferation of Bitcoin as a replacement for conventional money, we’re talking about something much bigger than a situation where certain people switch into and out of it for purchasing convenience.  In such an environment, the underlying dollars are still the ultimate monetary “end”–the cryptocurrency acts merely as a way of temporarily “packaging” that end for preferred transport.  Instead, we’re talking about a situation where the Bitcoin becomes the actual money, the medium through which incomes are earned and spent.

Fundamentally, such an outcome requires a mechanism through which Bitcoins can be borrowed.  If Bitcoins can be borrowed, then it will be possible for the virtuous process of borrowing and investing to grow the supply of Bitcoins at a pace commensurate with the demand to use them in commerce, and commensurate with the growth in the supply of everything else that grows in an expanding economy.  But if Bitcoins cannot be borrowed, then their supply will only be able to grow at the pace of computer mining output–a pace that, by design, is very slow (and that has to be slow, in order to prevent the currency from being excessively produced and depreciating in value), and that, unlike conventional money, has no logical or causal connection to the growth that occurs in any other economic aggregate.
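For reference, Bitcoin’s supply schedule is fixed by the protocol: a 50 BTC block subsidy that halves every 210,000 blocks (roughly every four years), asymptoting toward about 21 million coins.  A sketch of the cumulative issuance, ignoring satoshi-level rounding:

```python
def bitcoin_supply_at(block_height):
    """Cumulative Bitcoin issued through `block_height`, per the protocol's
    fixed schedule: 50 BTC per block, halving every 210,000 blocks."""
    supply, subsidy, remaining = 0.0, 50.0, block_height
    while remaining > 0:
        blocks = min(remaining, 210_000)
        supply += blocks * subsidy
        remaining -= blocks
        subsidy /= 2.0
    return supply

print(bitcoin_supply_at(210_000))  # → 10500000.0 (half the eventual cap)
print(bitcoin_supply_at(420_000))  # → 15750000.0
```

The point of the sketch is the shape of the curve: issuance is front-loaded and decays on a fixed clock, with no causal connection to the volume of commerce the currency is asked to support.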

If, as output and incomes grow, the supply of Bitcoins is unable to efficiently increase to sustain the increased volume of commerce conducted, then the exchange value of Bitcoin will always be appreciating relative to real things.  The continual appreciation will bring with it extreme bi-directional volatility as individuals come to expect continual appreciation, and attempt to speculate on it in pursuit of an investment return. Consequently, “money illusion”, the conflation of money in the mind of the user with the things that it can buy, will not be able to form.  Without “money illusion”, no one is going to be inclined to measure the commercial world in Bitcoin terms, and therefore nobody is going to be comfortable storing wealth in the currency.

Granted, individuals will be comfortable speculating in Bitcoin, trying to aggressively grow and expand wealth by investing in it, but not storing wealth in it, which is a different activity entirely.  The result will be a volatile, stressful-to-hold instrument that functions more like an internet stock–say, $FB or $TWTR, except without the earnings prospects–than like cash in the bank or under a mattress, which is how money is supposed to behave.  Internet stocks can certainly rise on reflexive hype, but without the prospect of eventual income (something that Bitcoins don’t offer), they don’t stay risen.

Ironically, the extreme bi-directional price volatility will give Bitcoins the opposite characteristic of gift cards and other temporary stores, which is why they won’t even be survivable as forms of payment intermediation.  Who wants to buy a gift card, or receive payment with a gift card, that randomly increases or decreases in value by huge amounts every minute, every hour, every day?  Again, it’s conceivable that someone might want to purchase such a thing for shits and giggles–as a fun gamble of sorts–but not for serious commercial purposes.

It’s important to recognize that the vast majority of people that are buying Bitcoins are not doing so because Bitcoin removes hardships associated with conventional money. In everyday life, the people that are buying Bitcoins still use their dollar bills, their credit cards, their online bill pay, and everything else, with no real gripe or complaint.  The reason they are buying Bitcoins is to speculate.  They want to get in on a futuristic technology that they think has the potential to massively “disrupt” the financial world, creating wealth for those that invest ahead of the pack.  That is the only thing that’s “in” the current sky-high price–that expectation, held in the minds of a large number of people.  The current price is not evidence that Bitcoin has successfully solved any economic or financial problem that actually needs to be solved–expense, intermediation, value storage, whatever.  Conventional money is working just fine.

Now, to return to free banking, the natural way for Bitcoin to latch onto an expansionary mechanism that would allow it to become a dominant economic currency, and to thereby displace conventional money, would be if a free banking system based on Bitcoins, similar to what existed in the U.S. in the 19th century, were to evolve.  On such a model, banks would “hold” Bitcoins for their customers, and issue electronic deposits redeemable for Bitcoins in exchange.  Because depository Bitcoin inflows would roughly match or exceed depository Bitcoin outflows for the system as a whole, it would be possible for the banks to issue more Bitcoin deposits than exist in actual Bitcoins on reserve.  The excess deposits would then be available for use in lending, which would increase the operational Bitcoin supply in a way that would allow for credit transactions–the lifeblood of economic growth–to shift to Bitcoin in lieu of conventional money, and for price stability and an associated money illusion in the Bitcoin space to emerge.

On such a system, investors and entrepreneurs would be able to take out Bitcoin loans to build homes, buildings, factories, technologies, and so forth.  The workers that build those entities would receive the Bitcoins as new income, and spend them.  The new spending would produce Bitcoin revenues, which would turn into recurring Bitcoin interest payments to the Bitcoin lenders, recurring Bitcoin wages to the workers, recurring Bitcoin dividends for the investors and entrepreneurs, and so on.  At that point, Bitcoin will have “arrived.”

If people were so inclined, one can envision this setup producing a situation where conventional government currencies become obsolete–where no one wants to use them anymore, or has a need to.  If that happens, the Fed’s central planning, and the central planning of other central banks, will have been fully bypassed–defeated once and for all. Central banks will no longer be able to force bailouts, excessive inflation, negative real interest rates, financial repression, and so on down the throats of unwilling market participants.  The system will be a true Libertarian utopia, based on limited government, private enterprise, and personal responsibility.

Fortunately (in my view), and unfortunately (in the view of Bitcoin aficionados), legislators and policymakers can easily prevent this outcome from happening.  All they have to do is put in place a regulation that imposes a 100% reserve requirement on entities that “bank” in Bitcoins, i.e., that hold Bitcoins for customers.  Then, expansion of the Bitcoin supply through lending will be impossible, and the currency will forever remain a constrained, volatile, illiquid, wholly speculative venture, something inappropriate and improperly fitted for serious, non-speculative, non-shits-and-giggles, non-scandalous economic activity.  Those that are seeking to borrow and invest–to take the first steps in the virtuous process of economic and monetary growth–will have no reason to want to mess around with the cryptocurrency.
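The arithmetic behind a reserve requirement is the standard money-multiplier relationship: with a reserve ratio r, a stock of base coins can support at most 1/r times that stock in deposits, because each round of lending re-deposits only (1 − r) of the previous round.  A minimal sketch, with hypothetical numbers:

```python
def max_deposits(reserves: float, reserve_ratio: float) -> float:
    """Textbook money-multiplier ceiling: each lending round re-deposits
    (1 - reserve_ratio) of the prior round, a geometric series that sums
    to reserves / reserve_ratio."""
    return reserves / reserve_ratio

# Fractional reserve (10%): 1,000 coins of reserves can back 10,000 in deposits
print(max_deposits(1_000, 0.10))  # 10000.0
# Full reserve (100%): deposits can never exceed the coins actually held
print(max_deposits(1_000, 1.00))  # 1000.0
```

At a 100% reserve ratio, the multiplier collapses to one: the operational supply is capped at the mined supply, which is the whole point of the regulation.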

Which brings us to the “death blow.”  It appears that legislators and policymakers are already a few steps ahead.  The New York State Department of Financial Services, for example, recently issued a set of proposed virtual currency regulations.  Among them:



That line right there, if accepted into regulation, would be enough to conclusively destroy any hope of a Bitcoin monetary takeover.  It effectively sets a 100% reserve requirement for Bitcoin banks, making it impossible for the supply of Bitcoin to expand in the ways that would be necessary for the cryptocurrency to displace conventional money.

The significance of this vulnerability should not be understated or underestimated.  It’s very easy for the government to stop the proliferation of Bitcoin, and ultimately send the cryptocurrency to the graveyard of investment fads.  The government doesn’t have to resort to draconian, unpalatable, freedom-killing measures that would try to stop consenting adults from innocently trading Bitcoins amongst each other. All the government has to do is impose a full-reserve banking requirement on any institution that purports to engage in Bitcoin banking.  Far from wading into controversy, it can impose such a requirement under the seemingly noble and politically palatable auspice of “protecting” Bitcoin users from risky bank behavior, even though the requirement will have the intended side effect of eventually rendering the cryptocurrency extinct, or at least of squashing its hopes for greatness.


Supply and Demand: Untangling the Market’s Greatest Mystery

Over the last ten years, the “collectibles” market has produced a fantastic return for investors.  According to the Knight Frank Luxury Investment Index, classic cars are up 550%, coins and stamps are up 350%, and fine wine and art are up 300%, with coveted items inside these spaces up by even greater amounts.

Why have collectibles performed so well, so much better than income earning assets like stocks and bonds?  Here’s a simple answer.  Over the last ten years, the supply of collectibles–especially those that are special in some way–has stayed constant.  In the same period, the demand for collectibles–driven by the quantity of idle financial superwealth available to chase after them–has exploded. When supply stays constant, and demand explodes, price goes up–sometimes, by crazy amounts.

For collectibles, “supply” is a crucial factor in determining price.  Often, the reason that a collectible becomes a collectible is that an anomaly makes it unusually rare, as was the case with the T206 Honus Wagner baseball card, shown above.  The card was designed and issued by the American Tobacco Company–one of the original 12 members of the Dow–as part of the T206 series for the 1909 season.  But Wagner refused to allow production of the card to proceed.  Some say that he refused because he was a non-smoker and did not want to participate in advertising the bad habit of smoking to children. Others say that he was simply greedy, and wanted to receive more money for the use of his image. Regardless, fewer than 200 copies of the card were manufactured, with even fewer released to the public, in comparison with the hundreds of thousands of copies printed of other cards in the series.  This anomaly turned an otherwise unremarkable card into a precious collectible that has continued to appreciate in value to this day.  The card most recently traded for $2,800,000, more than 100 times its price 30 years ago, even as baseball and baseball card collecting have waned in popularity.

A Similar Effect in Financial Assets?

Since 2009, the Federal Reserve and foreign central banks have purchased an enormous quantity of long-term U.S. Treasury bonds.  At the same time, the quantity of idle liquidity in the financial system available to chase after these bonds has greatly increased, with central banks issuing new cash for each bond they purchase, and also offering to loan new cash to banks at near zero interest on request.  Might this fact help explain why U.S. Treasuries–and bonds in general–have become so expensive, with yields so unexplainably low relative to the strengthening U.S. growth and inflation outlook? (h/t Antti Petajisto)


Similarly, over the last 30 years, the U.S. corporate sector has been aggressively reducing its outstanding shares, taking them off the market through buybacks and acquisitions. A continually growing supply of money and credit has thus been left to chase after a continually narrowing supply of equity.  Might this fact help explain why stocks have become so expensive relative to the past, so relentlessly inclined to grind higher, no matter the news?


In this piece, I’m not going to try to answer these questions.  Rather, I’m going to present a framework for answering them.  The purpose of the framework will be to help the reader answer them, or at least think about them more clearly.

Supply and Demand: Introducing A Simple Housing Model

We often think about the pricing of financial assets in terms of theoretical constructs–”fair value”, “risk premium”, “discounted cash flow”, “net present value”, and so on.  But the actual pricing of assets in financial markets is driven by forces that are much more basic: the forces of supply and demand.  At a given market price, what amount of an asset–how many shares or units–will people try to buy? What amount of the asset–how many shares or units–will people try to sell?  If we know the answer to these questions, then we know everything there is to know about where the price is headed.

To sharpen this insight, let’s consider a simple, closed housing market consisting of some enormously large number of individuals–say, 10 billion, enough to make the market reliably liquid. Each individual in this market can either live in a home, or in an apartment.  The rules for living in homes and apartments are as follows:

(1) To live in a home, you must own it.

(2) If you own a home, you must live in it.

(3) Only one person can live in a home at a time.

(4) A person can only own one home at a time.

(5) New homes cannot be built, because there is no new land to support building.

(6) Whoever does not live in a home must live in an apartment.

(Note: We introduce these constraints into the model not because they are realistic, but because they make it easier to extend the model to financial assets, which we will do later.)

Now, let’s suppose that the homes are perfectly identical to each other in all respects. Furthermore, let’s suppose that each of the homes has already been purchased, and already has an individual living inside it. Finally, let’s suppose that there is a sufficient supply of apartment space available for the total number of people that are not in homes to live in, and that the rent is stable and cheap.  But the apartments aren’t very nice.  The homes, in contrast, are quite nice–beautiful, spacious, comfortable. Unfortunately, there are only 1 billion homes in existence, enough for 10% of the individuals in the economy to live in.  The other 9 billion individuals in the economy, 90%, will have to accept living in apartments, whether they want to or not.

At any given moment, some number of people in homes that want to collect cash and downgrade into apartments are going to try to sell.  Conversely, some number of people in apartments that want to spend the money to upgrade are going to try to buy.  The way the market executes transactions between those that want to buy and those that want to sell is as follows.  At the beginning of every second, a computer, remotely accessible to all members of the economy, displays a price.  Those that want to buy homes at the displayed price send buy orders into the computer.  Those that want to sell homes at the displayed price send sell orders into the computer.  Note that these orders are orders to transact at the displayed price.  It’s not possible to submit orders to transact at other prices.  At the end of the second, the computer takes the buy orders and sell orders and randomly matches them together, organizing transactions between the parties.


Now, here’s how the price changes.  If the number of buy orders submitted in a given second equals the number of sell orders, or if there are no orders, then the price that the computer will display for transaction in the next second will be the same as in the previous second.  If the number of buy orders submitted in a given second is greater than the number of sell orders, such that not all buy orders get executed, then the computer will increase the price displayed for transaction in the next second by some calculated amount, an amount that will depend on how many more buy orders there were than sell orders.  If the number of buy orders submitted in a given second is less than the number of sell orders, then the same process happens in the opposite direction.
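This tick mechanism can be sketched in a few lines of code.  The probability curves below are hypothetical placeholders (any downward-sloping buy curve and upward-sloping sell curve would produce the same qualitative behavior), and the populations are scaled down for speed while preserving the 9:1 apartment-to-home ratio:

```python
import random

random.seed(0)  # deterministic run

# Scaled-down populations preserving the 9:1 apartment-to-home ratio
N_BUYERS, N_SELLERS = 9_000, 1_000

# Hypothetical per-tick order probabilities -- downward-sloping for buyers,
# upward-sloping for sellers
def p_buy(price):
    return max(0.0, 1.0 - price / 500_000)

def p_sell(price):
    return max(0.0, min(1.0, price / 600_000 - 2 / 3))

price = 450_000.0
K = 10.0  # dollars of price adjustment per unit of order imbalance
for _ in range(2_000):
    buy_orders = sum(random.random() < p_buy(price) for _ in range(N_BUYERS))
    sell_orders = sum(random.random() < p_sell(price) for _ in range(N_SELLERS))
    # min(buy_orders, sell_orders) transactions execute at the displayed
    # price; the computer then moves the next displayed price in proportion
    # to the excess of buy orders over sell orders
    price += K * (buy_orders - sell_orders)

print(round(price))  # settles into a tight range around one level
```

Run it with different seeds and the displayed price keeps hovering around the same level: whenever buy orders outnumber sell orders the price is pushed up, which thins the buyers and thickens the sellers, and vice versa.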


The purpose of this model is to provide a useful approximation of the price dynamics of actual markets.  The key difference between the model and a real market is that the model constrains buyers and sellers such that they can only offer to buy or sell at the displayed price, with the displayed price changing externally based on whether an excess of buyers or sellers emerges.  The reason we insert this constraint is to make the market’s path to equilibrium easy to conceptually follow–the path to equilibrium proceeds in a step by step manner, with the market trying out each price, and moving higher or lower based on which flow is greater at that price: buying flow or selling flow.  But the constraint doesn’t change the eventual outcome.  The price dynamics and the final equilibrium price end up being similar to what they would be in a real market where investors can accelerate the market’s path to equilibrium by freely shifting bids and asks.

Unpacking the Model: The Price Equation

The question we want to ask is: at what price–or general price range–will our housing market eventually settle?  And if prices are never going to settle in a range, if they are going to continually change by significant, unpredictable amounts, what factors will set the direction and magnitude of the specific changes?

To answer this question, we begin by observing that the displayed price will change until a condition emerges in which the average number of buy orders inserted per unit time at the displayed price equals–or roughly equals–the average number of sell orders inserted per unit time at the displayed price.  Can you see why?  By the rules of the computer, if they are not equal, the price will change, with the magnitude of the change determined by the degree of the imbalance.  So,

(1) Buy_Orders(Price) = Sell_Orders(Price)

“Price” is in parentheses here to indicate that the average number of buy orders that arrive in the market per unit time and the average number of sell orders that arrive in the market per unit time are functions of the price.  When the price changes, the average number of buy orders and sell orders changes, reflecting the fact that buyers and sellers are sensitive to the price they pay.  They care about it–a lot.

Now, we can separate Buy_Orders(Price), the average number of buy orders that occurs at a given price in a given period of time, into a supply term and a probability term.

Let Supply_Buyers be the supply term.  This term represents the number of potential buyers, which equals the number of individuals living in apartments–per our assumptions, 9 billion.

Let Probability_Buy(Price) be the probability term.  This term represents the average probability or likelihood that a generic potential buyer–any unspecified individual living in an apartment–will submit a buy order into the market in a given unit of time at the given price.

Combining the supply and probability terms, we get,

(2) Buy_Orders(Price) = Supply_Buyers * Probability_Buy(Price)

What (2) is saying is that the average number of buy orders that occurs per unit time at a given price equals the supply of potential buyers times the probability that a generic potential buyer will submit a buy order per unit time, given the price.  Makes sense?

Now, we can separate Sell_Orders(Price) in the same way, into a supply term and a probability term.  Let Supply_Homes be the supply term–per our assumptions, 1 billion.  Let Probability_Sell(Price) be the probability term, with both terms defined analogously to the above.  Combining the supply and probability terms, we get,

(3) Sell_Orders(Price) = Supply_Homes * Probability_Sell(Price)

(3) is saying the same thing as (2), except for sellers rather than buyers.  Combining (1), (2), and (3), we get a simple and elegant equation for price:

(4) Supply_Buyers * Probability_Buy(Price) = Supply_Homes * Probability_Sell(Price)

The left side of the equation is the flow of attempted buying.  The right side of the equation is the flow of attempted selling.  The price that brings the two sides of the equation into balance is the equilibrium price, the price that the market will continually move towards. The market may not hit the price exactly, or be able to remain perfectly stable on it, but if the buyers are appropriately price sensitive, it will get very close, hovering and oscillating in a tight range.

The Buy-Sell Probability Function

Now, we know how many potential buyers–how many apartment dwellers–the market has: 9 billion.  We also know how many potential sellers–how many homes and homeowners–the market has: 1 billion.  9 billion is nine times 1 billion.  It would seem, then, that the market will face a permanent imbalance–too many buyers, too few sellers. But we’ve forgotten about the price.  As the price of a home rises, the portion of the 9 billion potential buyers that will be willing to pay to switch to a home will fall.  These individuals do not have infinite pocket books, nor do they have infinite supplies of credit from which to borrow.  Importantly, paying a high price for a home means that they will have to cut back on other expenditures–the degree to which they will have to cut back will rise as the price rises, making them less likely to want to buy at higher prices.

Similarly, as the price rises, the portion of the 1 billion homeowners that will be eager to sell and downsize into apartments will rise.  In selling their homes, they will be able to use the money to purchase other wanted things–the higher the price at which they sell, the more they will be able to purchase.

This dynamic is what the buy-sell probability functions, Probability_Buy(Price) and Probability_Sell(Price), are trying to model.  Crucially, they change with the price, increasing or decreasing to reflect the increasingly or decreasingly attractive proposition that buying and selling becomes as the price changes.  By changing with price, the terms make it possible for the two sides of the equation, the flow of attempted buying and selling, to come into balance.

Now, what do these functions look like, mathematically?  The answer will depend on a myriad of factors, to include the lifestyle preferences, financial circumstances, learned norms, past experiences, and behavioral propensities of the buyers and sellers.  There is some price range in which they will consider buying a home to be worthwhile and economically justifiable–this range will depend not only on their lifestyle preferences and financial circumstances, but also, crucially, on (1) the prices they are anchored to, i.e., that they are used to seeing, i.e., that they’ve been trained to think of as normal, reasonable, versus unfair or abusive, and (2) on what their prevailing levels of confidence, courage, risk appetite, impulsiveness, and so on happen to be.  Buying a home is a big deal.

For buyers, let’s suppose that this price range begins at $0 and ends at $500,000.  At $0, the average probability that a generic potential buyer–any individual living in an apartment–will submit a buy order in a given one year time frame is 100%, meaning that every individual in an apartment will submit one buy order, on average, per year, if that price is being offered (to change the number from per year to per second, just divide by the number of seconds in a year).  As the price rises from $0 to $500,000, the average probability falls to 0%, meaning that no one in the population will submit a buy order at $500,000, ever.

In “y = mx + b” form, we have,

(5) Probability_Buy(Price) = 100% – Price * (100%/$500,000)

The function is graphed below in green:


Notice that the function is negatively-sloping.  It moves downward from left to right.

For sellers, let’s suppose that the price range begins at $1,000,000 and ends at $400,000. At $1,000,000, the average probability that a generic potential seller–any individual living in a home–will submit a sell order in a given one year time frame is 100%.  As the price falls to $400,000, the average probability falls to 0%.

In “y = mx + b” form,

(6) Probability_Sell(Price) = Price * (100%/$600,000) – 66.6667%

The function is shown below in red:


Notice that the function is positively-sloping.  It moves upward from left to right.

Knowing these buy-sell probability functions, and knowing the number of individuals in apartments and the number of individuals in homes (the supplies that the probabilities will be acting on, 9 billion and 1 billion, respectively), we can plug equation (5) and equation (6) into equation (4) to calculate the equilibrium price.  In this case, the price calculates out to roughly $491,525 for a home.  The average probability of buying per individual per unit time will be low enough, and the average probability of selling per individual per unit time high enough, to render the average flow of attempted buying equal to the average flow of attempted selling, as required, even as the supply of potential buyers remains 9 times the supply of potential sellers.
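Because both probability functions are linear, equation (4) can be solved for the price directly.  A quick check of the arithmetic, dividing both sides by one billion:

```python
# Equation (4) with the linear functions (5) and (6), after dividing both
# sides by one billion:  9 * (1 - P/500_000) = 1 * (P/600_000 - 2/3)
# Collecting the P terms gives the closed-form solution below.
buyers, sellers = 9, 1  # in billions
P = (buyers + sellers * 2 / 3) / (buyers / 500_000 + sellers / 600_000)
print(round(P))  # 491525
```

The solution lands at roughly $491,525, matching the figure in the text.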

Notably, the turnover, the volume of buying and selling, is going to be very low, because the buy-sell probability functions overlap at very low probabilities.  The buyers and the sellers are having to be stretched right up to the edge of their price limits in order to transact, with the buyers having to pay what they consider to be a very high price to transact, and the sellers having to accept what they consider to be a very low price to transact.

Now, keeping these buy-sell functions the same, let’s massively shrink the supply of potential buyers, to see what happens to the equilibrium price.  Suppose that instead of having 9 billion individuals in the economy living in apartments, we only have 1 million individuals living in apartments–1 million potential buyers of homes, none of whom are willing to pay more than $500,000.  As before, we’ll assume that there are 1 billion homes that can potentially be sold. What will happen to the price?  The answer: it will fall from roughly $491,525 to roughly $400,119.

Notice that the price won’t fall by very much–it will fall by only roughly $90,000–even though we’re dramatically shrinking the supply of potential buyers, by a factor of 9,000. The reason that the price isn’t going to fall by very much is that the sellers are sticky–they don’t budge.  Per their buy-sell probability functions, they simply aren’t willing to sell properties at prices below $400,000, and so if there aren’t very many people to bid at prices above $400,000, because the supply of buyers has been dramatically shrunk, then the volume will simply fall off.  In the former case, with the supply of potential buyers at 9 billion, roughly 153 million homes get sold, on average, in a one year period.  In the latter case, with the supply of potential buyers at only 1 million, 200,000 homes get sold, on average, in a one year period.
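A small script can reproduce both scenarios, solving equation (4) by bisection and reading off the annual transaction volume from the seller side:

```python
HOMES = 1_000_000_000  # fixed supply of potential sellers

def p_buy(price):
    # Eq. (5): 100% at $0, falling linearly to 0% at $500,000
    return max(0.0, 1.0 - price / 500_000)

def p_sell(price):
    # Eq. (6): 0% at $400,000, rising linearly to 100% at $1,000,000
    return max(0.0, min(1.0, price / 600_000 - 2 / 3))

def equilibrium(buyers):
    # Bisect eq. (4): buyers * p_buy(P) = HOMES * p_sell(P)
    lo, hi = 400_000.0, 500_000.0
    while hi - lo > 0.01:
        mid = (lo + hi) / 2
        if buyers * p_buy(mid) > HOMES * p_sell(mid):
            lo = mid  # attempted buying still exceeds attempted selling
        else:
            hi = mid
    return (lo + hi) / 2

results = {}
for buyers in (9_000_000_000, 1_000_000):
    p = equilibrium(buyers)
    results[buyers] = (p, HOMES * p_sell(p))  # (price, homes sold per year)
    print(f"{buyers:>13,} buyers -> price ${p:,.0f}, "
          f"{HOMES * p_sell(p):,.0f} sales/yr")
```

Under these exact linear functions the annual volumes come out to roughly 152–153 million sales in the 9-billion-buyer case and roughly 200,000 in the 1-million-buyer case: the price barely moves, but turnover collapses by three orders of magnitude.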

Behavioral Factors: Anchoring and Disposition Effect

Recall that for buyers, the buy-sell probability function slopes negatively–i.e., falls downward–with price.  For sellers, the function slopes positively–i.e., rises upward–with price.  The reason the function slopes negatively for buyers is that price is a cost, a sacrifice, to them.  The lower or higher the price, the higher or lower that cost, that sacrifice.  Additionally, there is a limit to the cost the buyer can pay–he only has so much money, so much access to credit. The reason the function slopes positively for sellers is that price is a benefit, a gain, to them.  The lower or higher the price, the lower or higher that benefit, that gain.  Additionally, there is a limit to the price that the seller can accept without pain, particularly if he has debts to pay against the assets that he is trying to sell.

In addition to these fundamental considerations, there are also behavioral forces that make the functions negatively-sloping and positively-sloping for buyers and sellers respectively.  Of these forces, the two most important are anchoring and disposition effect.

Over time, buyers and sellers become anchored to the price ranges that they are used to seeing.  As the price moves out of these ranges, they become more reactive, more likely to interpret the price as an unusually good deal that should be immediately taken advantage of or as an unfair rip-off that should be refused and avoided.

Anchoring is often seen as something bad, a “mental error” of sorts, but it is actually a crucially important feature of human psychology.  Without it, price stability in markets would be virtually impossible.  Imagine if every individual entering a market had to use “theory” to determine what an “appropriate” price for a good or service was.  Every individual would then end up with a totally different conception of “appropriateness”, a conception that would shift wildly with each new tenuous calculation.  Prices would end up all over the place.  Worse yet, individuals would not be able to quickly and efficiently transact.  Enormous time resources would have to be spent in each individual transaction, enough time to do all the necessary calculations.  This time would be spent for nothing, completely wasted, as the calculation results would not be stable or repeatable.  From an evolutionary perspective, the organism would be placed at a significant disadvantage.

In practice, individuals need a quick, efficient, consistent heuristic to determine what is an “appropriate” price and what is not.  Anchoring provides that heuristic.  Individuals naturally consider the price ranges that they are accustomed to seeing and transacting at as “appropriate,” and they instinctively measure attractiveness and unattractiveness against those ranges.  When prices depart from the ranges, they feel the change and alter their behaviors accordingly–either to exploit bargains or to avoid rip-offs.

Disposition effect is also important to price stability.  Individuals tend to resist selling for prices that are less than the prices for which they bought, and tend to be averse to paying higher prices than the prices they could have paid in the recent past.  This tendency causes prices to be sticky, disinclined to move away from where they have been, as we should want them to be if we want markets to hold together, and not become chaotic.

Housing markets represent an instance where these two phenomena–anchoring and disposition effect–are particularly powerful, especially for sellers.  These phenomena are part of what makes housing such a stable asset class relative to other asset classes.


Homeowners absolutely do not like to sell their homes for prices that are lower than the prices that they paid, or that are lower than the prices that they are accustomed to thinking their homes are worth.  If a situation emerges in which buyers are unwilling to buy at the prices that homeowners paid, or the prices that homeowners are anchored to, the homeowners will try to find a way to avoid selling.  They will choose to stay in the home, even if they would prefer to move elsewhere.  If they need to move–for example, to take a new job–they will simply rent the home out; anything to avoid selling the home, taking a loss, and giving an unfair bargain to someone else.  Consequently, market conditions in which housing supply greatly exceeds housing demand tend to clear not through a fall in price, but through a drying up of volume, as we saw in the example above.

This effect was on full display in the last recession.  Existing home sales topped out in 2005, but prices didn’t actually start falling in earnest until the recession hit in late 2007 and early 2008.  Prior to the recession, the homes were held tightly in the hands of homeowners.  As long as they could afford to stay in their homes, they weren’t going to sell at a loss.  But when the recession hit, they started losing their jobs, and therefore their ability to make their mortgage payments.  The result was a spike in foreclosures that put the homes into the hands of banks, mechanistic sellers that were not anchored to a price range and that were not averse to selling at prices that would have represented losses for the prior owners.  The homes were thus dumped onto the market at bargain values to whoever was willing to buy them.


When Is Supply Important to Price? 

Returning to the previous example, what would be the market outcome if buyers and sellers were completely insensitive to price, such that their buy-sell probability functions did not slope with price?  Put differently, what would be the market outcome if the average probability that a potential buyer or seller would buy or sell in a given unit of time–a given year–stayed constant under all scenarios–always equal to, say, 10%, regardless of the price?

The answer is that supply imbalances would cause enormous fluctuations in price.  Theoretically, any excess in the number of potential buyers over the number of potential sellers would permanently push the price upward, all the way to infinity, and any excess in the number of potential sellers relative to the number of potential buyers would permanently pull the price downward, all the way to zero.

In concrete terms, if there are 1,001 eager buyers that submit buy orders per unit time, and 1,000 eager sellers that submit sell orders, and if the buyers are completely indifferent to price, then there will always be one buyer left out of the mix.  Because that buyer is indifferent to price, he will not hesitate to raise his bid, so as to ensure that he isn’t left out of a transaction.  But whoever he displaces in the bidding will also be indifferent to price, and therefore will not hesitate to do the same, raise the bid again–and so on.  Participants will continue to raise their bids ad infinitum, continually fighting to avoid being the unlucky person that gets left out.

The only way for the process to end is for one of the buyers in the group to conclude, “OK, enough, the price is just too high, I’m not interested.”  That is price sensitivity.  Without it, a stable equilibrium amid a disparate supply of potential buyers and sellers cannot be achieved.
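To make this dynamic concrete, here is a toy simulation of the 1,001-buyers-versus-1,000-sellers market.  The specific numbers are illustrative assumptions, not derived from anything above: a flat 10% per-period probability in the insensitive case, a logistic probability anchored at a price of 100 in the sensitive case, and a 1% price-adjustment step.

```python
import math

def simulate(price_sensitive, periods=200, price=100.0):
    """Toy market: 1,001 potential buyers vs. 1,000 potential sellers.

    Each period, the price moves 1% toward whichever side is in excess.
    All parameters (10% flat probability, logistic sensitivity, 1% step)
    are illustrative assumptions."""
    for _ in range(periods):
        if price_sensitive:
            # Buying probability falls, and selling rises, as price rises.
            p_buy = 1.0 / (1.0 + math.exp((price - 100.0) / 5.0))
            p_sell = 1.0 - p_buy
        else:
            p_buy = p_sell = 0.10   # flat 10% per period, regardless of price
        excess = 1001 * p_buy - 1000 * p_sell
        if excess > 0:
            price *= 1.01           # the leftover buyer raises the bid
        elif excess < 0:
            price *= 0.99
    return price

print(simulate(price_sensitive=False))  # climbs every single period, without bound
print(simulate(price_sensitive=True))   # hovers near the anchored price
```

In the insensitive case there is always one buyer left out, so the price rises every period and never stops; in the sensitive case, the first 1% bump is enough to shrink the buying probability below the selling probability, and the price settles into a narrow band.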

We now have the ability to answer an important question at the heart of this piece: when is “supply” most important to price, most impactful?  The answer is, when price sensitivity is low.  If the probability of buying doesn’t fall quickly in response to an increase in price, and if the probability of selling doesn’t fall quickly in response to a decrease in price, then even a small change in the supply of potential buyers or sellers will be able to create a large change in the price outcome.  In contrast, if the price sensitivity is high, if the probability of buying falls quickly in response to price increases, and the probability of selling falls quickly in response to price reductions, then the price will be able to remain steady, even in the presence of large supply excursions.  Intuitively, the reason the price will be able to remain steady is that the potential buyers and sellers will be holding their ground–they won’t be budging off of their desired price ranges simply to make transactions happen.

Low price sensitivity is part of the reason why small speculative stocks with ambiguous but potentially exciting futures–low-float stocks with large potentials that are difficult to confidently value and that exhibit significant price reflexivity–tend to be highly volatile.  If there is a net excess or shortage of eager buyers in these stocks relative to eager sellers, the price will end up changing.  But the change will not correct the excess or shortage. Therefore the change will not stop.  It will keep going, and going, and going, and going.

To use a relevant recent example, if there is a shortage in the supply of $LOCO shares being offered in an IPO relative to the amount of $LOCO that investors want to allocate into, then the price is going to increase.  For the market in $LOCO to remain stable, this price increase will need to depress the demand, reduce the amount of $LOCO that investors want to allocate into.  If the price increase fails to depress the demand, or worse, if it does the opposite, if it increases the demand–for example, by drawing additional attention to the name and increasing investor optimism about the company, given the rising price–then the price is going to get pushed higher and higher and higher.

At some point, something will have to reverse the process, as the price can’t go to infinity. In the case of $LOCO, more and more people might start to ask themselves, have things gone too far?  Is this stock a bubble that is about to burst?  An excess of sellers over buyers will then emerge, and the same process will unfold in the other direction.  When the price falls, the fall will not sufficiently clear the excess demand to sell, and may even increase it, by fueling anxiety, skepticism and fear on the part of the remaining holders.  And so the price will keep falling, and falling, and falling.

Now, if we shift from the $LOCO IPO to a market where price sensitivity is strong, this dynamic doesn’t take hold.  To illustrate, suppose that the Treasury were to issue a massive, gargantuan quantity of three month t-bills.  The same instability would not emerge.  The reason is that there is a strong inverse relationship between the price of three month t-bills and the demand to own them, a relationship held in place by the possibility of direct arbitrage in the banking system.  Recall that a three month t-bill offers a return that is fully-determined and free of credit risk.  It also carries no interest rate risk beyond a period of three months (the money will have been returned by then).  Thus, as long as the Fed holds overnight interest rates steady over the next three months, as the current Fed has effectively promised to do, banks will be able to borrow funds and purchase three month t-bills, capturing any excess return above the overnight rate that the bills happen to be offering, without taking on any risk.  And so any fall in the price of a three month treasury bill, and any rise in the yield, will represent free money to banks.  That free money will attract massive buying interest, more than enough to quench whatever increased selling flow might arise out of a large increase in the outstanding supply. Ultimately, when it comes to short-term treasuries, supply doesn’t matter much to price.

Extending the Model to Financial Assets: Equity and Credit

To extend the housing model to financial assets, we begin by noting that units of financial “wealth”–that is, units of the market value of portfolios, in this case measured in dollars–are analogous to “individuals” in the housing model.  Just as individuals could either live in homes or apartments–and had to choose one or the other–units of financial “wealth” can either be held in the form of equity (stocks), credit (bonds), or money (cash).  Just as every home had to have an owner and every apartment a tenant living inside it, every outstanding unit of equity, credit, and money in existence has to have a holder, has to be a part of someone’s portfolio, with a portion of the wealth contained in that portfolio stored inside it.

Now, to make the model fully analogous, we need to reduce the degrees of freedom from three (stocks, bonds, cash) to two (stocks, cash). So we’re going to treat bonds and cash as the same thing, referring to both simply as “cash.”  Then, investors will have to choose to hold financial “wealth” either in the form of “stocks”, or in the form of “cash”, just as “individuals” had to choose to live either in “homes”, or in “apartments.”

Let’s assume, then, that our stock market consists of some amount of cash–some number of individual dollars–and some amount of stock, some number of shares with a total dollar value determined by the price.  Let’s also assume that the same computer is there to take buy and sell orders–orders to exchange cash for stock or stock for cash respectively. The computer processes orders and moves the price towards equilibrium in the same way as before, by displaying a price–an exchange rate between stock and cash–then taking orders, then raising or lowering the price in the next moment based on where the excess lies.

The derivation of the price equation ends up being the same as in the housing model, and gives the following result.

(7) Supply_Cash * Probability_Buy(Price) = Supply_Stock(Price) * Probability_Sell(Price)

Here, Supply_Cash is the total dollar amount of cash in the system. Probability_Buy(Price) is the average probability, per dollar unit of cash in the system, per unit of time, that the unit of cash will be sent into the market to be exchanged for stock at the given price.  Supply_Stock is the total market value of stock in existence. Probability_Sell(Price) is the average probability, per dollar unit of value of stock in the system, that the unit will be sent into the market to be exchanged for cash at the given price.

Now, where this model differs from the previous model is that Supply_Stock, the total market value of stock in existence, which is the total amount of stock available for investors to allocate their wealth into, is a function of Price.  It equals the number of shares times the price per share.

(8) Supply_Stock(Price) = Number_Shares * Price

Unlike in the housing model, the supply of stock in the stock market expands or contracts as the price rises and falls.  This ability to expand and contract helps to quell excesses that emerge in the amount of buying and selling that is attempted.  If investors, in aggregate, want to allocate a larger portion of their wealth into stocks than is available in the current supply, the price of stocks will obviously rise.  But the rising price will cause the supply of stocks–the shares times the price–to also rise, helping, at least in a small way, to relieve the pressure.  The same is true in the other direction.

Combining (7) and (8), we end up with a final form for the equation,

(9) Supply_Cash * Probability_Buy(Price) = Number_Shares * Price * Probability_Sell(Price)

Note that we’re using this equation to model stock prices, but we could just as easily use the equation to model the price of any asset, provided that simplifying assumptions are made.
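For illustration, equation (9) can be solved numerically for the equilibrium price, given assumed buy-sell probability functions.  The sketch below is not the author’s method; it uses hypothetical Gaussian-CDF probability functions anchored at a price of 1500 (both the anchor and the 100-point standard deviation are assumptions) and finds the root by bisection.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical probability functions: buying cools, and selling heats up,
# as the price rises through an assumed anchor of 1500.
def p_buy(price):
    return 1.0 - phi((price - 1500.0) / 100.0)

def p_sell(price):
    return phi((price - 1500.0) / 100.0)

def equilibrium_price(supply_cash, n_shares, lo=1.0, hi=1_000_000.0):
    """Solve equation (9), Supply_Cash * P_buy = Number_Shares * Price * P_sell,
    by bisection on the excess dollar demand."""
    def excess(price):
        return supply_cash * p_buy(price) - n_shares * price * p_sell(price)
    for _ in range(100):        # 100 halvings: far more precision than needed
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid            # net buying pressure -> price must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# $2B of cash chasing 1,000,000 shares:
print(equilibrium_price(2e9, 1e6))
```

With these assumed functions, the equilibrium lands a bit above the 1500 anchor, because $2B of cash bidding at a 50% probability slightly outweighs $1.5B of stock offered at a 50% probability.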

A more accurate form of the equation would include a set of terms to model the possibility of margin buying and short selling,

(10) Supply_Cash * Probability_Buy(Price) + Supply_Borrowable_Cash * Probability_Borrow_To_Buy(Price) = Number_Shares * Price * Probability_Sell(Price) + Number_Borrowable_Shares * Price * Probability_Borrow_To_Sell(Price)

But the introduction of these terms makes the equation unnecessarily complicated.  The extra terms are not needed to illustrate the underlying concepts, which is all that we’re trying to do.

A Growing Cash Supply Chases A Narrowing Stock Supply: What Happens?

It is commonly believed that the stock market–the aggregate universe of common stocks–rises over time because earnings rise over time.  Investors are sensitive to value. They estimate the future earnings of stocks, and decide on a fair multiple to pay for those earnings. When the stock market is priced below that multiple, they buy.  When the stock market is priced above that multiple, they sell.  In this way, they keep the price of the stock market in a range–a range that rises with earnings over time.

In a set of pieces from last year (#1, #2), I proposed a competing explanation.  On this explanation, the stock market rises over time because we operate in an inflationary financial system, a system in which the quantity of money and credit are always growing. Given its aversion to dilution, the corporate sector does not issue enough new shares to keep up with this growth.  Consequently, a rising quantity of money and credit is left to chase after a limited quantity of shares, pushing the prices of shares up through a supply effect.  Conveniently, as prices rise, the supply of stock rises, bringing the supply back into line with the supply of money and credit.

The truth, of course, is that both of these factors play a role in driving the stock market higher.  Which factor dominates depends on the degree of price sensitivity–or, in this case, the degree of value sensitivity–of the buyers and sellers.  In a world where buyers and sellers are highly sensitive to the price-earnings ratio, the supply effect will not exert a significant effect on prices. Prices will track with earnings and earnings alone.  In a world where buyers and sellers are not highly sensitive to the price-earnings ratio, or to other price-based measurements of value, the supply effect will become more significant and more powerful.

We can illustrate this phenomenon by running the model computationally, with random offsets and deviations inserted to help simulate what happens in a real market.  Assume that there are 1,000,000 shares of stock in the market, and $2B of cash.  Assume, further, that each share of stock earns $100 per year in profit.  Finally, assume that the buy-sell probability functions for buyers and sellers are symmetric cumulative distribution functions (CDFs) of Gaussian distributions with very small standard deviations.  These functions take not only price as input, but also earnings.  They compute the PE ratio at a given price and output a probability of buying or selling based on it.

The functions look like this:


We’ve centered the functions around a PE ratio of 15, which we’ll assume is the “normal” PE, the PE that market participants are trained and accustomed to view as “fair.”  Per the above construction of the function, at a PE of 15, there is a 50% chance per day that a given dollar in the system will be submitted to the market by a buyer to purchase stock, and a 50% chance per day that a given dollar’s worth of stock in the system will be submitted to the market by a seller to purchase cash (what selling is, inversely).  As the PE rises above 15, the buying probability falls sharply, and the selling probability rises sharply.  As the PE falls below 15, the buying probability rises sharply, and the selling probability falls sharply. Evidently, the buyers and sellers are extremely price and valuation sensitive.  15 plus or minus a point or two is the range of PE they are willing to tolerate; whenever that range is breached in the unattractive direction, they quickly step away.
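A minimal sketch of probability functions with this shape, built from the Gaussian CDF via Python’s math.erf.  The 0.75-point standard deviation is an assumption, chosen only to make the slopes sharp; the original simulation’s exact parameters aren’t specified.

```python
import math

NORMAL_PE = 15.0
SIGMA = 0.75    # assumed: a small standard deviation => sharply sloping functions

def normal_cdf(x, mu=NORMAL_PE, sigma=SIGMA):
    """Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def p_buy(pe):
    """Daily probability that a given dollar of cash is submitted to buy stock."""
    return 1.0 - normal_cdf(pe)

def p_sell(pe):
    """Daily probability that a given dollar's worth of stock is submitted for sale."""
    return normal_cdf(pe)

print(p_buy(15.0), p_sell(15.0))   # both 0.5 at the "normal" PE
print(p_buy(17.0), p_sell(17.0))   # buying all but vanishes, selling surges
```

By construction the two probabilities always sum to 1, and both equal 50% exactly at the normal PE, matching the description above.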

Now, if we wanted to make the function more accurate and realistic, we would make it a function not only of price and earnings, but also of interest rates, demographics, growth outlook, culture, past experience, and so on–all of the “variables” that conceivably influence the valuations at which valuation-sensitive buyers and sellers are likely to buy and sell.  We’re ignoring these factors to keep the problem simple.

In the first instance, let’s assume that the supply of cash stays constant and the earnings stay constant.  Starting with a price of 2,000 for the index, holding the number of shares constant, and iterating through to an equilibrium, we get a chart that shows the trajectory of price over time, from now, the year 2014, to the year 2028.


The result is as expected.  If the buyers are highly value sensitive, and if the earnings aren’t growing, then the price should settle tightly on the price range that corresponds to a “normal” PE ratio–in this case, a range around 1500, 15 times earnings, which is what we see.

Now, let’s run the simulation on the assumption that the supply of cash stays constant and the earnings grow at 10% per year.


The result is again as expected.  The index price, the blue line, initially falls from 2000 to 1500 to get from a PE ratio of 20 to the normal PE ratio of 15.  It then proceeds to grow by 10% per year, commensurately with the earnings.  The cash supply stays constant, but this doesn’t appreciably hold back the price growth, because the buyers are value sensitive. They are going to push the price up to ensure that the PE ratio stays around 15, no matter the supply.

If you look closely, you will notice that the green line, the PE ratio, drifts slightly below 15 as time passes.  This drift is driven by the stunted supply effect.  The quantity of cash is not growing, which holds back the price growth by a minuscule amount relative to what it would be on the assumption of a perfectly constant 15 PE ratio.  The supply effect in the scenario is tiny, but it’s not exactly zero.

Now, let’s run the simulation on the assumption that the supply of cash rises at 10%, but the earnings stay constant.


The result is again as expected.  The index price stays constant, on par with the earnings, which are not growing.  The cash supply explodes, but this doesn’t exert an appreciable effect on the price, because the buyers are extremely value sensitive.

If you again look closely, you will notice that the green line, the PE ratio, drifts slightly above 15 as time passes.  This drift is again driven by the stunted supply effect.  The quantity of cash is growing rapidly, and this pushes up the price growth by a minuscule amount relative to what it would be on the assumption of a perfectly constant 15 PE ratio.

Now, let’s introduce a buy-sell probability function that is minimally sensitive to valuation, and see how the system responds to supply changes.  Instead of using CDFs of Gaussian distributions with very small standard deviations, we will now use CDFs of Gaussian distributions with very large standard deviations.  In the actual simulations, we will also insert larger random deviations and offsets to help further model the price insensitivity.


Evidently, under these new functions, the buying and selling probabilities remain essentially stuck around 50%, regardless of the PE ratio.  The functions are only minimally negatively-sloping and positively-sloping.  What this means qualitatively is that buyers and sellers don’t care much about the PE ratio, or any other factor related to price.  Price is not a critical consideration in their investment decision-making process.  They will accept whatever price they can get in order to take on or avoid the desired or unwanted exposure.

Now, let’s run the simulation on the assumption that the cash supply grows at 10%, while the earnings stay constant.


Here, the outcome changes significantly.  The index price, shown in blue, separates from the earnings, and instead tracks with the growing cash supply, shown in red.  Instead of holding at 15, the PE ratio, shown in green, steadily expands, from 20 in 2014 to roughly 65 in 2028.  All of the market’s “growth” ends up being the result of multiple expansion driven by the growth in the cash supply–growth in the amount of cash “chasing” the limited amount of shares.  Now, there is still some valuation sensitivity, which is why the index price fails to fully keep up with the rising cash supply.  The valuation sensitivity acts as a slight headwind.

Now, let’s run the simulation on the assumption that the earnings grow at 10%, but the cash supply shrinks by 10%.


Once again, the price tracks with the contracting supply of cash, not with the growing earnings.  Consequently, the PE ratio falls dramatically–from 20 down to 1.25.
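For readers who want to experiment, here is a simplified, deterministic version of the simulation: no random offsets, a simple proportional price-adjustment rule, and assumed parameters throughout.  It reproduces the qualitative results above: with a small standard deviation the price tracks earnings and the PE stays near 15, while with a large standard deviation the price tracks the cash supply and the multiple expands.  The exact final PE depends entirely on the assumed parameters, so don’t read anything into the specific numbers.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def run(years, cash_growth, eps_growth, sigma,
        cash=2e9, shares=1e6, eps=100.0, price=2000.0, steps_per_year=250):
    """Deterministic sketch of the simulation described in the text.

    sigma controls valuation sensitivity: a small sigma means sharply
    sloping buy-sell probabilities, a large sigma means nearly flat ones.
    Returns the final price and the final PE ratio."""
    g_cash = (1.0 + cash_growth) ** (1.0 / steps_per_year)
    g_eps = (1.0 + eps_growth) ** (1.0 / steps_per_year)
    for _ in range(years * steps_per_year):
        pe = price / eps
        p_buy = 1.0 - normal_cdf(pe, 15.0, sigma)   # buying cools as PE rises
        p_sell = normal_cdf(pe, 15.0, sigma)        # selling heats up as PE rises
        buy_flow = cash * p_buy                     # dollars bidding for stock
        sell_flow = shares * price * p_sell         # dollar value of stock offered
        # Nudge the price toward whichever side is in excess (5% max step).
        price *= 1.0 + 0.05 * (buy_flow - sell_flow) / (buy_flow + sell_flow)
        cash *= g_cash
        eps *= g_eps
    return price, price / eps

# Value-sensitive market: earnings grow 10%/yr, cash flat -> price tracks earnings.
print(run(14, cash_growth=0.0, eps_growth=0.10, sigma=0.75))

# Value-insensitive market: cash grows 10%/yr, earnings flat -> multiple expands.
print(run(14, cash_growth=0.10, eps_growth=0.0, sigma=40.0))
```

In the first run, the PE even drifts slightly below 15 as earnings compound against a fixed cash supply, echoing the stunted supply effect noted above; in the second, the multiple expands steadily as cash growth chases a fixed share count.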

Supply Manipulations in a Live Experiment

Everything that we’ve presented so far is theoretical.  We don’t have a buy-sell probability function for real buyers and sellers that we could use to determine the prices that their behaviors will produce in a market with a growing supply of cash and fluctuating earnings. Even if we could come up with such a function, it would not be useful for making actual price predictions, as it would contain far too many fuzzy and hard-to-measure variables, and would always be changing in unpredictable ways.

At the same time, the modeling that we’re doing here is useful in that it allows us to think more clearly about the way that supply factors interact with buying and selling probability factors to determine price.  When confronted with questions about the impact of supply factors in specific market circumstances, the best approach to evaluating these questions is to explore the kinds of buying and selling probabilities that those circumstances will lend themselves to–that is, the kind of buy-sell probability functions the circumstances will tend to produce.

If the circumstances will tend to produce significant price and value sensitivity–that is, sharply negatively-sloping buying probabilities and sharply positively-sloping selling probabilities, as a function of price–then supply will not turn out to be a very important or powerful factor in determining price.  As supply differences lead to price changes, the number of people that want to buy and sell at the given price will quickly adjust, arresting the price changes and stabilizing the price.

But if the circumstances will tend to lend themselves to price and valuation insensitivity–that is, flatly-sloping buying and selling probabilities, or worse, reflexive buying and selling probabilities, buying probabilities that rise with rising prices, and selling probabilities that rise with falling prices–then supply as a factor will prove to be very important and very powerful.  As supply differences emerge and cause price changes, the number of people that want to buy and sell at the given price will not adjust as needed, causing the price to continue to move, the momentum to continue to carry.

With this in mind, let’s qualitatively examine a famous genre of experiments that economists have performed to test the impact of supply on price.  In these experiments, a large closed group of market participants are endowed with a portfolio of cash or stock, and are then left to trade the cash and stock with each other.


The shares of stock pay out a set quantity of dividends on a fixed schedule throughout the scenario, or at the end, and then they expire worthless.  Each dividend payment equals some constant value, plus a small offset that is randomly computed in each payment period.

At any time, it’s easy to calculate what the intrinsic value of a share is.  It’s the sum of the expected future dividend payments up to maturity, which is just the number of dividend payments that are still left to be paid, times the expected value of each payment.  The offset to the payments is random and acts in both directions, so it effectively drops out of the analysis.  Granted, the offsets insert an “uncertainty” into the value of the shares, the undesirability of which investors might choose to discount.  But the uncertainty is small, and the participants aren’t that sophisticated.
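In code, the intrinsic-value calculation is trivial.  The $0.24 expected dividend and 15 remaining periods below are illustrative numbers, not taken from any particular experiment.

```python
def intrinsic_value(periods_remaining, expected_dividend):
    """Fair value of a share that pays an expected dividend each remaining
    period and then expires worthless; the random offsets average to zero,
    so they drop out of the expectation."""
    return periods_remaining * expected_dividend

# Illustrative numbers: 15 remaining periods at an expected $0.24 per period.
print(round(intrinsic_value(15, 0.24), 2))
```

Note that the fair value declines linearly toward zero as the payments are made, which is exactly the feature that participants in these experiments tend to overlook.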

Before the experiment begins, the experimenters teach the participants how to calculate the intrinsic value of a share.  The experimenters then open the market, and allow the participants to trade the assets with each other (through a computer).  Crucially, whatever amount of money the participants end up with at the end of the experiment, they get to keep.  So there is a financial incentive to trade and invest intelligently, not be stupid.

The experiment has been run over and over again by independent experimenters, incorporating a number of different individual “tweaks.”  It’s been run on large groups, small groups, financially-trained individuals, non-financially-trained individuals, over short time periods, long time periods, with margin-buying, without margin-buying, with short-selling, without short-selling, and so on.

The experiments consistently produce results that defy fundamentals, results in which prices deviate sharply from fair value, when in theory they shouldn’t.  Shown below is a particularly egregious example of the deviation, taken from an experiment run on 304 economics students at Indiana University, consisting of a 15-round trading period that lasted 8 weeks:


As you can see, the price deviates sharply from intrinsic value.  In the early phases, the buyers lack courage to step up and buy, so the price opens below fair value.  As the price rises, the buyers gain confidence, and more and more try to jump on board.  This process doesn’t stop when the limits of fair value are reached; it keeps going.  Buyers throw caution to the wind, and push the market into a bubble.  The bubble then bursts.  As the maturity nears, the price gravitates back towards intrinsic value.

If we think about the experiment, it’s understandable that this outcome would occur, at least in certain circumstances. “As long as the music is playing, you have to get up and dance.” Right?  Valuation is important only to the extent that it impacts price on the time horizons that investors are focused on.  In the beginning of the experiment, the investors are not thinking about what will happen at the end of the experiment, which is many months away.  They are thinking about what price they will be able to sell the security for in the near term.  They want to make money in the near-term, do what the other successful people in the game seem to be doing.  As they watch the price travel upward, above fair value, they start to doubt whether valuation is something that they should be focusing on. They conclude that valuation doesn’t “work”, that it’s a red herring, that focusing on it isn’t the way you’re supposed to play the game.  So they set it aside, and focus on trying to profit from the continued momentum instead.  In this way, they contribute to the growing excesses, and help create the eventual bubble.

As the security gets closer to its maturity, more and more participants start worrying about valuation.  It can’t be ignored forever, after all, for the bill’s eventually going to come due. And so as the experiment draws to a close, the price falls back to fair value.

Now, the question that we want to ask is, if we change the aggregate supply of cash in this experiment relative to the supply of shares, what will happen?  Of course, we already know the answer.  The valuation excesses will grow, multiply, inflate.  The buyers, after all, have demonstrated that they are not value sensitive–if they were, they wouldn’t let the price leave the fair value range.  As the price rises in response to the supply imbalances, the buyers aren’t going to pull back, and the sellers aren’t going to come forward–therefore, the imbalances aren’t going to get relieved.  The price will keep rising until something happens to shift the psychology.

Interestingly, one practical finding from the experiment is that the most effective way to arrest the excess is to reduce the supply of cash relative to the supply of shares. When you reduce the supply of cash, the bubbles have a much more difficult time forming and gaining traction.  Sometimes, they don’t form at all.  Central Banks of the world, take note!

Now, some have objected to the results of the experiments, arguing that the participants often don’t understand how the maturity process works–that they often don’t recognize, until late in the game, that the security is going to expire worthless.  Put differently, the participants wrongly envision the dividends as investment returns on a perpetual security, rather than as returns of capital on a decaying security.  For our purposes, this potential flaw in the experiment doesn’t really matter, for even if the value of the security is misunderstood, that alone shouldn’t cause supply changes to appreciably impact prices. Supply should only appreciably impact prices if investors are not paying attention to value. Evidently, they aren’t.

A potentially more robust version of the experiment is one where there are no interim dividends, but only a single final payment, a single return of capital, paid to whoever owns the shares at the end.  In this version of the experiment, it’s painfully obvious what the security is worth, there is no room for confusion.  The security is worth the expected value of the final payment.

Professor Gunduz Caginalp of the University of Pittsburgh ran the experiment under this configuration, allowing groups of participants to trade cash and shares that pay an expected value of $3.60 at maturity (the actual value has a 25% chance of being $2.60, a 25% chance of being $4.60, and a 50% chance of being $3.60).  In one version, he kept the supply of cash roughly equal to the supply of shares; in another version, he roughly doubled the supply of cash.  He then ran each version of the experiment multiple times on different groups of participants to see whether the different versions of the experiment produced different prices.  The following chart shows the average price evolution for each version:
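As a quick check of the stated payoff structure, the expected terminal payment does indeed work out to $3.60:

```python
# Payoff distribution for the terminal payment, as described above:
# 25% chance of $2.60, 50% chance of $3.60, 25% chance of $4.60.
outcomes = {2.60: 0.25, 3.60: 0.50, 4.60: 0.25}
expected = sum(payoff * prob for payoff, prob in outcomes.items())
print(round(expected, 2))   # expected value of the final payment
```

The symmetric spread around $3.60 means the uncertainty adds variance but not bias, so risk-neutral fair value equals the middle payoff.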


As you can see, the version in which the supply of cash is twice the supply of shares (blue line) produces prices that are persistently higher than the version in which the supply of cash equals the supply of shares.  This is especially true in the early trading rounds of the experiment–as the experiment draws to an end, valuation sensitivity increases, and the average prices of the two versions converge.

Interestingly, in the later rounds, the market in the high cash scenario seems to have an easier time moving the price to fair value than in the low cash scenario.  In the low cash scenario, a meaningful discount to fair value remains right up until the last few rounds, a discount that defies fundamental justification (why should the price be roughly $2.75 in round 12 when there is a 75% chance of the price being substantially higher, and essentially a 0% chance of the price being lower, at maturity?).  This peculiarity illustrates the previous point that even when valuation is the dominant consideration for market participants, even when the market in aggregate is trying to move the price to fair value, supply still matters–it can nudge the market in the right or wrong direction.

It turns out that the only consistently reliable way to prevent an outcome in which individuals push prices in the experiment out of the range of fair value is to run the experiment on the same subjects multiple times–then, the investors learn their lessons. They start paying attention to valuation.

Evidently, the perceived connection between valuation and investment returns–the connection that leads investors to care about value, and to use it in their investment processes–is learned through experience, at least partially.  To reliably respect valuation, investors often need to go through the experience of not respecting it, buying too high, and then getting burned.  They need to lose money.  Then, valuation will become important, something to worry about.  Either that, or investors need to go through the experience of buying at attractive prices and doing well, making money, being rewarded.  In response to the supportive feedback, investors will grow hungry for more value, more rewards.

As with all rules that investors end up following, when it comes to the rule “thou shalt respect value”, the reinforcement of punishment and reward, in actual lived or observed experience, cements the rule in the mind, and conditions investors to obey it.


Why is the Shiller CAPE So High?

Why is the Shiller CAPE so high?  In the last several weeks, a number of prominent academics and financial market commentators have attempted to answer this question, to include the inventor of the valuation measure himself, Nobel Laureate Robert Shiller.  In this piece, I’m going to attempt to give a clear answer.

The piece has five parts:

  • In the first part, I’m going to explain why valuations in general are higher than they have been historically.  It’s not just the CAPE that’s historically elevated; the simple TTM P/E ratio is also historically elevated, by a reasonably large amount.
  • In the second part, I’m going to highlight the main reason that the Shiller CAPE has risen relative to the simple TTM P/E over the last two decades: high real EPS growth. I’m going to introduce a schematic that intuitively illustrates why high real EPS growth produces a high Shiller CAPE.
  • In the third part, I’m going to explain how reductions in the dividend payout ratio have contributed to high real EPS growth.  In discussing the dividend payout ratio, I’m going to present a different, potentially more accurate formulation of the Shiller CAPE, a formulation that conducts the calculation based on total return instead of price.  On this formulation, the Shiller CAPE falls by around 10%, from 26.0 to 23.5.
  • In the fourth part, I’m going to explain how a secular uptrend in profit margins has contributed to high real EPS growth over the last two decades.  This effect is the most powerful of all, and is the main reason why the Shiller CAPE and the TTM P/E have diverged in their valuation signals.
  • In the fifth part, I’m going to outline a set of possible future return scenarios that investors at current valuations can reasonably expect.  I’m then going to identify the future return scenario that I find most credible.

Higher P/E Valuations Generally

It’s important to note at the outset that the Shiller CAPE isn’t the only price-to-earnings (P/E) metric that is currently elevated. The good-old-fashioned trailing twelve month (TTM) P/E ratio is also elevated. With the index at 2000 and 2Q TTM reported earnings per share (EPS) at 103.5, the current TTM P/E is 19.3 (the number doesn’t change much if we use TTM operating earnings, since the economy is in expansion and writedowns are no longer a big factor). The historical average for the TTM P/E is 14.6. So, on a simple TTM P/E basis, the market is already 33% above its historical average.

Note that I did not say that the market is 33% “overvalued”–to call the market “overvalued” would be to suggest that it shouldn’t be at the valuation that it’s at.  This is too strong.  Not only is it possible that the market should be at its current valuation, it’s also possible that the market should be at a still higher valuation, and that it’s headed to such a valuation.

Now, to the crucial point that market moralists consistently miss.  The market’s valuation does not arise out of the application of any external standard for what “should” be the case. Rather, the market’s valuation arises as an inadvertent byproduct of the equilibration of supply and demand: the process through which the quantity of equity being supplied by sellers achieves an equilibrium with the quantity of equity being demanded by buyers.  In a liquid market, the demand for equity must equal the supply on offer.  “Price” is the factor that changes so as to bring the two into equality.  In a normal, well-anchored market, higher prices lead to reduced demand and increased supply on offer, and lower prices lead to increased demand and reduced supply on offer.  If, at a given market price, the demand for equity exceeds the supply on offer, the market price will rise, which will lower the demand and increase the supply on offer, pulling the two back into equilibrium. Similarly, if, at a given market price, the demand for equity falls short of the supply on offer, the market price will fall, which will increase the demand and reduce the supply on offer, again pulling the two back into equilibrium.

Right now, the price necessary to bring the demand for equity into equilibrium with the supply on offer happens to be higher, relative to earnings, than the price that successfully achieved the same equilibrium in the past.  In a prior piece, I laid out a number of possible reasons for this shift.  The most important reason has to do with expectations about future interest rates. Right now, the market’s expectation is that future interest rates will be low–less than 2%, on average–for the next several decades, and maybe for the rest of time.

The interesting thing about markets is that investors in aggregate have to hold every asset in existence, including what is undesirable–in this case, low-return cash and fixed income. Obviously, investors are not going to want to hold low-return cash and fixed income in lieu of equities unless they expect that: (1) equities at current prices will also offer low future returns on the relevant long-term horizons, or (2) catalysts will emerge that will lead other investors to focus on the short-term and sell, leaving behind painful mark-to-market losses that those who are stuck in the market will have to endure, and, conversely, affording exciting “buying opportunities” that those who are out of the market will get to capitalize on.

We are at a point in the economic cycle where the fear of (2) on the part of those invested, and the hope for (2) on the part of those on the sidelines, are fading.  As the economy strengthens in the presence of highly supportive Fed policy–policy that everyone knows will remain supportive for as far as the eye can see–those that are invested in the market are becoming less and less afraid of corrections, and those on the sidelines are growing more and more frustrated waiting in vain for them to happen.  Crucially, those on the sidelines sense the growing confidence levels of their fellow investors, and are increasingly resigning themselves to the fact that the kinds of catalysts that might break that confidence, and produce meaningfully lower prices, are unlikely to emerge in the near term.  Consequently, the market is slowly and painfully being pushed upward into the first condition, a condition where equity valuations rise until investors become sufficiently disenchanted with them that they willingly settle for holding low return cash and fixed income instead–not briefly, in anticipation of a correction that is about to happen, but for the long haul.

Some would say that market prices have gone too far, and that equities are now offering no excess return relative to cash and fixed income–or even worse, a negative excess return.   But those that reach this conclusion are estimating long-term equity returns using a method that makes aggressive assumptions about the trajectory of future profit margins, assumptions that will probably prove to be incorrect, if recent experience is any indication of what’s coming.

Real EPS Growth: Impact on the Shiller CAPE

Returning to the Shiller CAPE, its current value is 26.0.  Its long-term historical average (geometric) is 15.3.  On a Shiller CAPE basis, the market is 70% above its long-term historical average.  It follows that almost half of the Shiller CAPE’s current elevation, 33% out of the overall 70%, can be attributed to the elevation of the simple TTM P/E measure.
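The arithmetic behind those two numbers is simple enough to verify directly, using the figures quoted above (a quick illustrative check, nothing more):

```python
# Figures quoted in the text: S&P 500 at 2000, TTM reported EPS of $103.50.
price = 2000.0
ttm_eps = 103.5

ttm_pe = price / ttm_eps     # ~19.3
ttm_pe_avg = 14.6            # historical average TTM P/E
cape = 26.0                  # current Shiller CAPE
cape_avg = 15.3              # long-term geometric average CAPE

ttm_elevation = ttm_pe / ttm_pe_avg - 1    # ~0.33 ("33% above average")
cape_elevation = cape / cape_avg - 1       # ~0.70 ("70% above average")
```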

This fact usually gets missed in discussions about the CAPE because market participants tend to analyze the market’s valuation in terms of forward earnings estimates.  On the most recent estimates for year-end 2015, the market’s P/E is 15.1, a number almost perfectly in-line with the historical average.  But this number is pure fantasy.


For the number to actually be achieved, the S&P will need to generate $132.30 in reported earnings for 2015–growth of almost 30% over the next 16 months, off of earnings and profit margins that are already starting at extreme highs.  How exactly will this supergrowth be achieved?  Will S&P 500 revenues–and the overall U.S. GDP which they track–see 30% nominal growth over the next year and a half?  Are profit margins going to rise by 30%, from 10% to 13%?  Macroeconomically, the estimate makes no sense.

Now, let’s compare the valuation signal of the Shiller CAPE to the valuation signal of the simple TTM P/E across history.  The following chart shows the percent difference between the CAPE valuation signal (the ratio of the CAPE to its historical average) and the TTM P/E valuation signal (the ratio of the TTM P/E to its historical average) from 1881 to 2014:


When the blue line is positive, the CAPE is calling the market more expensive than the TTM P/E.  When the blue line is negative, the CAPE is calling the market cheaper than the TTM P/E.  Right now, the CAPE is calling the market more expensive than the TTM P/E, but not by an extreme amount–the difference between the two metrics is in-line with the difference seen during other periods of history.

With the exception of the large writedown-driven gyrations of the last two recessions, you can see that over the last two decades, the CAPE has consistently called the market more expensive than the TTM P/E.  But that hasn’t always been the case.  For much of the 1980s and early 1990s, the tables were turned; the CAPE depicted the market as being cheaper than the TTM P/E.

Now, why does the CAPE sometimes depict the market as more expensive than the TTM P/E, and sometimes cheaper?  The main reason has to do with the rate of real EPS growth over the trailing ten year period.  Recall that the Shiller CAPE is calculated by dividing the current real price of the index by the average of each month’s real TTM EPS going back 10 years (or 120 months).  When the real TTM EPS has grown significantly over the trailing 10 year period, this average tends to deviate by a larger amount from the most recent value, the value that is used to calculate the TTM P/E.

The point can be confusing, so I’ve attempted to concretely illustrate it with the following schematic:


Consider the high real growth scenario on the left.  Real EPS grows from $100 to $200 over a ten year period.  The average of real EPS comes out to $150, relative to the most recent real TTM EPS number of $200.  The difference between the two, which drives the difference between the valuation signals of the CAPE and the TTM P/E, is high, around 33%.

Now, consider the low real growth scenario on the right.  Real EPS grows from $100 to $110 over a ten year period.  The average of real EPS comes out to $105, relative to the most recent real TTM EPS number of $110.  The difference between the two, which drives the difference between the valuation signals of the CAPE and the TTM P/E, is low, around 5%.

As you can see, on a Shiller CAPE basis, the market ends up looking much cheaper in the low real growth scenario than in the high real growth scenario, even though the valuation is the same on a TTM basis.  This result is not in itself a mistake–the purpose of the CAPE is to discount abnormal EPS growth that is at risk of being unwound going forward.
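The schematic’s arithmetic can be reproduced in a few lines.  Note the built-in assumption, which belongs to the illustration rather than to the actual data: real EPS is taken to grow linearly across the 120-month window.

```python
def cape_vs_ttm_gap(start_eps, end_eps, months=120):
    """Percent by which the latest real TTM EPS exceeds its trailing
    average, assuming linear EPS growth over the window."""
    step = (end_eps - start_eps) / (months - 1)
    eps_path = [start_eps + step * i for i in range(months)]
    trailing_avg = sum(eps_path) / months
    return end_eps / trailing_avg - 1

high_growth = cape_vs_ttm_gap(100, 200)   # ~0.33: CAPE looks much richer
low_growth = cape_vs_ttm_gap(100, 110)    # ~0.05: the two metrics nearly agree
```

The gap scales directly with the trailing growth rate, which is why the blue line in the chart below tracks real EPS growth so closely.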

To further confirm the relationship, consider the following chart, which shows the percent difference between the valuation signals of the CAPE and TTM P/E (blue) alongside the real EPS growth rate of the prior 10 years (red):


As expected, the two lines track very well.  In periods of high real EPS growth, the market ends up looking more expensive on the CAPE than on the TTM P/E.  In periods of negative real EPS growth, the market ends up looking less expensive on the CAPE than on the TTM P/E.

Over the last two decades, the S&P 500 has seen very high real EPS growth–6% annualized from 1992 until today.  For perspective, the average annual real EPS growth over the prior century, from 1871 to 1992, was only 1%.  This rapid growth, along with changes to goodwill accounting standards that severely depressed reported earnings during and after the last two recessions (the first of which is now out of the trailing ten year average, and no longer affecting the CAPE), explains why the CAPE has been high relative to the TTM P/E.

But why has real EPS growth been so high over the last two decades?  Before we explore the reasons, let’s appraise the situation with a simple chart of real TTM reported EPS for the S&P 500 from 1962 to present, with the period circa 1992 circled in red:


Surprisingly, from 1962 to 1992, real TTM EPS growth was zero.  For literally 30 years, the S&P produced no real fundamental return, outside of the dividends that it paid out. But since then, real EPS growth has boomed.  From 1992 until 2014, S&P earnings quadrupled in real terms.  Why has real EPS growth picked up so much in the last two decades?  There are two main reasons, which we will now address.
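As a quick consistency check on those figures, a quadrupling over the 22 years from 1992 to 2014 implies an annualized real growth rate of roughly 6.5%, in line with the ~6% quoted above:

```python
# Annualized growth rate implied by a quadrupling of real EPS, 1992-2014.
years = 2014 - 1992
implied_cagr = 4.0 ** (1.0 / years) - 1   # ~0.065
```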

Changes in the Dividend Payout Ratio

The first reason, which is less impactful, has to do with changes in the dividend payout ratio.  Recall from a prior piece that dividends and growth are fungible.  If the corporate sector lowers its dividend payout ratio to fund increased internal reinvestment (capex, M&A, buybacks), real EPS growth will rise.  If it lowers its internal reinvestment (capex, M&A, buybacks) to fund an increase in dividends, real EPS growth will fall.  Assuming that the market is priced at fair value, and that the return on equity stays constant over time, the effects of the change will cancel, so that shareholders end up with the same return.

The chart below, from a prior piece, illustrates the phenomenon.  Over the long-term, the real return contribution from dividends (green) can rise or fall, but it doesn’t matter–the return contribution from real EPS growth (gold) shifts to offset the change, and keep the overall shareholder return constant (historically around 6%, assuming prices start out at fair value).
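The offset can be illustrated with a toy model of my own construction (not the author’s formal argument).  Assume the market trades at a fixed earnings yield, and that retained earnings are reinvested at that same yield–a crude stand-in for the “fair value, constant return on equity” conditions described above:

```python
def shareholder_return(earnings_yield, payout_ratio):
    """Toy model: total return as dividend yield plus growth, where growth
    comes from retained earnings reinvested at the market's earnings yield."""
    dividend_yield = payout_ratio * earnings_yield
    real_growth = (1 - payout_ratio) * earnings_yield
    return dividend_yield + real_growth

# The payout ratio moves the mix between the two pieces, but their
# sum stays pinned at the earnings yield:
returns = [shareholder_return(0.06, p) for p in (0.2, 0.5, 0.8)]
```

Cutting the payout ratio shifts return from the green (dividend) component to the gold (growth) component, leaving the total unchanged.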


Now, we know that the dividend payout ratio for US equities has fallen steadily since the late 19th century, and therefore we should expect real EPS growth now to be higher than in the past.  The following chart shows the trailing 10 year average dividend payout ratio for the S&P 500, from 1881 to 2014:


But how much of a difference does the change in the dividend payout ratio make, as far as real EPS growth and the Shiller CAPE are concerned?  The question is hard to answer. One thing we can do to get an idea of the size of the difference is to build a CAPE using a total return index instead of a price index.  Using a total return index instead of a price index puts all dividend payout ratios on the same footing.

The following chart shows the Shiller CAPE constructed using a total return index (blue) instead of a price index (red), from 1891 to 2014:


[Details: The Total Return Shiller CAPE is constructed as follows.  Start with 1 share of the S&P 500 at the beginning of the data set.  Reinvest the dividends earned by that share, and each subsequent share, as they are paid out.  The result will be an index of share count that grows over time.  To calculate the Total Return Shiller CAPE, take the current real price times the current number of shares, and divide that product by the average of the real TTM EPS times the number of shares that were owned in each month, going back 10 years or 120 months.  Then normalize the result for apples-to-apples numeric comparison with the original Shiller CAPE.]

[Note: The flaw in this measure is that it quietly rewards markets that are overvalued and quietly punishes markets that are undervalued.  The dividend reinvestment in overvalued markets gets conducted at less accretive prices than the dividend reinvestment in undervalued markets, causing the metric to shift slightly downward for overvalued markets, and slightly upward for undervalued markets.  To address this problem, we could hypothetically conduct the dividend reinvestments at “fair value” instead of at the prevailing market price–but we don’t yet have an agreed-upon way of measuring fair value!  We’re trying to build such a measure–a measure that appropriately reflects the impact of dividend payout ratio changes.]
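The bracketed procedure can be sketched as follows–a simplified rendering with hypothetical monthly input series (real price, real TTM EPS, and real dividends per share), with the final normalization step omitted:

```python
def total_return_cape(real_price, real_eps, real_div, window=120):
    """Sketch of the Total Return Shiller CAPE described above.  Dividends
    are reinvested to build a growing share count; the ratio is then taken
    on an owned-shares basis rather than a per-share basis."""
    shares = [1.0]
    for t in range(1, len(real_price)):
        # Reinvest the dividends paid during the month at the current price.
        shares.append(shares[-1] * (1 + real_div[t] / real_price[t]))
    numerator = real_price[-1] * shares[-1]
    owned_eps = [e * s for e, s in zip(real_eps[-window:], shares[-window:])]
    return numerator / (sum(owned_eps) / len(owned_eps))
```

With zero dividends the share count stays at 1.0 throughout, and the measure collapses back to the ordinary Shiller CAPE.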

With the S&P at its current level of 2000, the Total Return Shiller CAPE comes in at around 23.5, 10% below the original Shiller CAPE, which is currently at 26.0.  A 10% difference isn’t huge, but it still matters.

Changes in the Profit Margin

The bigger factor underlying the strong growth in real EPS over the last two decades, and the associated upward shift in the Shiller CAPE relative to the TTM P/E, has been the trend of increasing profit margins, a trend that began in 1992, and that continues intact to this day.  To understand the powerful effect that changes in profit margins can have on real EPS growth, let’s take a moment to consider the drivers of aggregate corporate EPS growth in general.

There are three ways that the corporate sector can grow its EPS in aggregate:

  • Inflation: The corporate sector can continue to make and sell the same quantity of things, but sell them at higher prices.  If profit margins remain constant, then the growth will translate entirely into inflation.  There will not be any real income growth of any kind–no real EPS growth, no real sales growth, no real wage growth–because the price index will have shifted by the same nominal amount as each type of income.
  • Real Sales Growth: The corporate sector can make and sell a larger quantity of things at the same price.  If profit margins remain constant, the result will be real growth in each type of income: real EPS growth, real sales growth, and real wage growth.  Each type of income will rise proportionately amid a constant price index, allowing the lot of every sector of the economy to improve in a real, sustainable manner.
  • Profit Margin Shift: The corporate sector can make and sell the same quantity of things at the same price, but then claim a larger share of the income earned from the sale. The shift will show up entirely as real EPS growth, but with no real sales growth, and negative real wage growth–”zero-sum” growth for the larger economy.
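The three channels can be captured in a stylized identity–aggregate nominal EPS as the product of the price level, real sales, and the profit margin (share-count changes are set aside here, and the numbers are hypothetical):

```python
def nominal_eps(price_level, real_sales, margin):
    """Stylized identity: nominal EPS = price level x real sales x margin."""
    return price_level * real_sales * margin

base = nominal_eps(1.00, 1000.0, 0.06)                # starting point: 60.0
inflation_only = nominal_eps(1.10, 1000.0, 0.06)      # +10% nominal, 0% real
real_sales_growth = nominal_eps(1.00, 1100.0, 0.06)   # +10% real, all incomes rise
margin_shift = nominal_eps(1.00, 1000.0, 0.066)       # +10% real EPS, zero-sum
```

All three moves lift nominal EPS by the same 10%, but only the second leaves every sector of the economy better off.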

[Note: the corporate sector can also grow its nominal EPS by shrinking its outstanding share count through M&A and share buybacks.  But this "float shrink" needs to be funded.  If it is funded with money that would otherwise have gone to dividends, then we're back to the fungibility point discussed earlier--on net, shareholders will not benefit.  If it is funded from money that would otherwise go to capex, then the effects of the reduction in share count will be offset by lower real earnings growth, and shareholders again will be left no better off.  If it is funded with an increased accumulation of debt--a "levering up" of corporate balance sheets--the assumption is that there will be a commensurate payback when the credit cycle turns, a payback in which dilutions, unfavorable financing agreements, and defaults undo the accretive effects of the prior share count reduction.  This story is precisely the one that unfolded from 2004 to 2008, and then from 2008 to 2010--a levered M&A and buyback boom significantly reduced the S&P share count, and then the dilutions of the ensuing recession brought the share count back to roughly where it began.]

In reality, aggregate corporate EPS tends to evolve based on a combination of all three processes occurring at the same time.  Some inflation, some real sales (output) growth, and some shift in the profit margin (cyclical or secular–either can occur, since profit margins are not a reliably mean-reverting series).  The important point to recognize, however, is this: real sales growth for the aggregate corporate sector (real increases in the actual quantity of wanted stuff that corporations make and sell, as opposed to inflationary growth driven by price increases) is hard to produce in large amounts, particularly on a per share, after-dilution basis.  For this reason, absent a profit margin change, it’s difficult for real EPS to grow rapidly over time.  Wherever rapid real EPS growth does occur, a profit margin increase is almost always the cause.

Not surprisingly, the real EPS quadrupling that began in 1992, and that has caused the Shiller CAPE to substantially increase in value relative to the TTM P/E, has primarily been driven by the profit margin upshift that started in that year and that continues to this day.  In much the same way, the zero real EPS growth that investors suffered from 1962 to 1992, and that caused the market of the 1980s and early 1990s to look cheaper on a Shiller CAPE basis than on a TTM P/E basis, was driven primarily by the profit margin downshift that took place during the period.

The following chart shows the net profit margin of the S&P 500 on GAAP reported earnings from 1962 to 2014, with the period circa 1992 circled in red:


The following chart superimposes real EPS (green) onto the profit margin (blue):


As you can see, profit margins began the period in 1962 at almost 7%, and bottomed in 1992 at less than 4%, leaving investors with zero real EPS growth over a period of roughly thirty years.  From 1992 until today, profit margins rose from 4% to 10%, leaving investors with annualized real EPS growth of 6%, more than three times the long-term historical average (1871-2014), 1.8%.

Valuation bears have been warning about “peak profit margins” for four years now (and warned about them in the last cycle as well).  But profit margins keep rising.  In this most recent quarter, they reached a new record high, on top of the record high of the previous quarter, on top of the record high of the quarter before that.  What’s going on?  When is this going to stop, and why?

Nobody knows the answer for sure–certainly not the valuation bears who have continually gotten the call wrong.  But even the valuation bulls will have to acknowledge that the profit margin uptrend seen over the last two decades can’t go on forever.  It will have to eventually peter out–probably sooner rather than later.  If and when that happens, real EPS growth will be limited to the contributions of real sales growth from reinvestment and float shrink from M&A and share buybacks.  Neither phenomenon is capable of producing the kind of rapid real EPS growth that the S&P has seen over the last two decades (especially not the M&A and buybacks, which are occurring at lofty prices), and therefore the rate of real EPS growth should moderate, and the divergence between the Shiller CAPE and the TTM P/E should narrow.

Valuation: A Contingent Approach

In a prior piece, I argued that profit margins are the epicenter of the valuation debate.  All of the non-cyclical valuation metrics that purport to show that the market is egregiously overvalued right now rely on aggressive assumptions about the future trajectory of profit margins, assumptions that probably aren’t going to come true.  You can add the Shiller CAPE to that list, since its abnormal elevation relative to the TTM P/E is tied to the increase in profit margins that has occurred since the early-to-mid 1990s.

When investors discuss valuation, they often approach the question as if there were an objective, determinate answer.  But there isn’t.  At best, valuation is a contingent judgement–a matter of probabilities and conditionalities: “if A, then B, then C, then the market is attractively valued”, “if X, then Y, then Z, then the market is unattractively valued.”  There are credible scenarios where the current market could end up producing low returns (and therefore be deemed “expensive” in hindsight), and credible scenarios where it could end up producing normal returns (and therefore be deemed “cheap” in hindsight, particularly relative to the alternatives).  It all depends on how the concrete facts of the future play out, particularly with respect to earnings growth and the market multiple.  That’s why it’s often best for investors to just go with the flow, and not fight trends based on tenuous fundamental analysis that will just as often prove to be wrong as prove to be right.

With respect to the market’s current valuation and likely future return, let’s dispassionately examine some of the possibilities:

Possibility #1: Moderately Bullish Scenario

The increase in profit margins that we’ve seen from the mid 1990s until now is retained going forward.  The increase doesn’t continue, but it also doesn’t reverse.  On this scenario, the market’s return will be determined by the fate of the P/E multiple.

At 19.3 times reported TTM earnings, and 17.9 times operating TTM earnings, the market’s P/E multiple is clearly elevated on a historical basis. But it doesn’t immediately follow that the market will produce poor returns going forward, because the multiple might stay elevated.

The most likely scenario in which profit margins hold up is one where the corporate sector continues to recycle its capital into M&A, share buybacks, and dividends, while shunning expansive investment.  Generally, expansive investment brings about increased inter-firm competition and increased strain on the labor supply, both of which exert downward pressure on profit margins.  In contrast, capital recycling that successfully displaces expansive investment tends to bring about reduced inter-firm competition and reduced strain on the labor supply, both of which exert upward pressure on profit margins. The latter point is especially true of M&A, which has the exact opposite effect on competition as expansive investment.

In a low-growth, low-investment, high-profit-margin world, where incoming capital is preferentially recycled into competition-killing M&A and float-shrinking share repurchases, rather than deployed into the real economy, interest rates will probably stay low.  The frustrated “reach for yield” will remain intact, keeping the market’s P/E elevated (or even causing it to increase further).  If the market’s P/E stays elevated, there is no reason why the market can’t produce something close to a normal real return from current levels–a return on par with the 6% real (8% to 10% nominal) that the market has produced, on average, across its history.  Relative to the opportunities on offer in the cash and fixed income spaces, such a return would be extremely attractive.

Now, even if the current market–at a TTM P/E of 19.3 times reported earnings and 17.9 times operating earnings–is set to experience multiple contraction and lower-than-normal future returns, it doesn’t follow that the market’s current valuation is wrong.  The market should be priced to offer historically low returns, given the historically low returns that cash and fixed income assets are set to offer over the next several decades.  Indeed, if the market were not currently priced for historically low returns, then something would be wrong.  Investors would not be acting rationally, given what they (should) know about the future trajectory of monetary policy.

Possibility #2: Moderately Bearish Scenario

The increase in profit margins is not going to fully hold.  Some, but not all, of the profit margin gain will be given back.  On this assumption, it becomes harder to defend the market’s current valuation.

Importantly, sustained reductions in the profit margin–as opposed to temporary drops associated with recessions–tend to occur alongside rising sales growth.  In terms of the effect on EPS, rising sales growth will help to make up for some of the profit that will be lost.  However, almost half of all sales growth ends up being inflation–the result of price increases rather than real output increases.  With inflation comes lower returns in real terms (the only terms that matter), and also, crucially, a tighter Fed.  If the Fed gets tighter, a TTM P/E of 19.3 will be much harder to sustain.  The market will therefore have to fight two headwinds at the same time–slow EPS growth due to profit margin contraction and a return drag driven by multiple contraction.  Returns on such a scenario will likely be weak, at least in real terms.

But they need not be disastrously weak.  In a prior piece, I argued that returns might end up being 5% or 6% nominal, or 3% or 4% real.  Of course, that piece assumed a starting price for the S&P 500 of 1775.  Nine months later, the index is already at 2000.  The estimated returns have downshifted to 3% or 4% nominal, and 1% or 2% real.  Such returns offer almost no premium over the returns on offer in the much-safer fixed income world, and therefore, if any kind of profit margin contraction is coming, then the current market is probably pushing the boundaries of defensible valuation.

Possibility #3: Aggressively Bearish Scenario

Profit margins are going to fully revert to the pre-1990s average.  On this assumption, the market is outrageously expensive.  If, at a profit margin of 9% to 10%, EPS comes in at $103.50, and if profit margins are headed to the pre-1990s average of 5% or 6%, then the implication is that EPS is headed to around $55 (a number that will be adjusted upward in the presence of sales growth and inflation–but only as time passes).  Instead of a historically elevated TTM P/E of 19, the market would be sitting at a true, normalized TTM P/E of around 36.
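The arithmetic behind these numbers, using illustrative points from the quoted ranges (the specific margin choices are mine, picked so that the outputs land near the text’s ~$55 and ~36):

```python
price = 2000.0
eps_now = 103.5
margin_now = 0.095   # inside the "9% to 10%" range quoted above
margin_old = 0.050   # inside the pre-1990s "5% or 6%" range

# Scale current EPS down to what it would be at the old margin,
# holding sales constant.
normalized_eps = eps_now * margin_old / margin_now   # ~54.5
normalized_pe = price / normalized_eps               # ~36.7
```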

Obviously, if margins and earnings were to suddenly come apart, such that the S&P at 2000 shifts from being valued at 19 times earnings to being valued at 36 times earnings, as opposed to the “15 times forward” that investors think they are buying into, prices would suffer a huge adjustment.  If the shift were to happen quickly, over a short number of months or quarters, the market would almost certainly crash.

But even if the shift were to happen very slowly, such that EPS simply stagnates in place, without falling precipitously, real returns over the next decade, and maybe even the next two or three decades, would still end up being very low–zero or even negative.  The profit margin contraction would eat away at real EPS growth, as it did from the 1960s until the 1990s.  Even nominal returns over various relevant horizons might end up being zero or negative.

Possibility #4: Aggressively Bullish Scenario

Profit margins are going to continue to increase.  Now, before you viscerally object, ask yourself: why can’t that happen?  Why can’t profit margins rise to 12% or 14% or even higher from here?  The thought might sound crazy, but how crazy would it have sounded if someone were to have predicted, in 1992, with profit margins at less than 4%, that twenty years later profit margins would be holding steady north of 10%, more than 200 basis points above the previous record high?

If profit margins are set to continue increasing, then the market might actually be cheap up here, and produce above average returns going forward.  The same is true if P/E multiples are set to continue their rise–a possibility that should not be immediately dismissed.  As always, the price of equity will be decided by the dynamics of supply and demand.  So long as we continue to live in a slow growth world aggressively backstopped by ultra-dovish Fed policy, a world where investors want and need a decent return, but can only get one in equities, there’s no reason why the market’s P/E multiple can’t get pushed higher, to numbers above 20, or even 25.  It certainly wouldn’t be the first time.

Going forward, all that is necessary for such an outcome to be achieved is for investors to experience a re-anchoring of their perceptions of what is “appropriate”–become more tolerant, less viscerally afraid, of those kinds of valuation levels.  If the present environment holds safely for a long enough period of time, such a re-anchoring will occur naturally, on its own. Indeed, it’s occurring right now, as we speak.  Three years ago, nobody would have been comfortable with the market at 2000, 19 times trailing earnings. People were acclimatized to 12, 13, or 14 as a “reasonable” multiple, and were even seriously debating whether multiples below 10 were going to become the post-crisis “new normal.”  The psychology has obviously shifted since then, and could easily continue to shift.

As for me, I tend to lean towards option #2: a moderately bearish outcome.  I’m expecting weak long-term returns, with some profit margin contraction as labor supply tightens, and some multiple contraction as Fed policy gets more normal–but not a return to the historical averages.  Importantly, I don’t foresee a realization of the moderately bearish outcome any time soon.  It’s a ways away.

I expect the market to eventually get slammed, and pay back its valuation excesses, as happens in every business cycle.  If this occurs, it will occur in the next recession, which is when valuation excesses generally get paid back–not during expansionary periods, but during contractions.  The next recession is at least a few years away, maybe longer, and therefore it’s too early to get bearish.  Before a sizeable recession becomes a significant risk, the current expansion will need to progress further, so that more real economic imbalances are built up (more misallocations in the deployment of the economy’s real labor and capital resources)–excesses that provoke rising inflation, and that come under pressure from the monetary policy tightening that occurs in response.  In the meantime, I expect the market to continue its frustrating and painful grind higher, albeit at a slower pace, offering only small pullbacks in response to temporary scares.  Those who are holding out for something “bigger” are unlikely to be rewarded any time soon.

Given the headwinds, I think the long-term total return–through the end of the current business cycle–will be around 1% to 2% real, 3% to 4% nominal.  Poor, but still better than the other options on the investment menu.  An investor’s best bet, in my view, would be to underweight U.S. equity markets in favor of more attractively priced alternatives in Europe, Japan, and the Emerging Markets.

(h/t to the must-follow Patrick O’Shaughnessy @millennial_inv of OSAM for his valuable help on this piece)


Global Stock Market Valuation and Historical Real Returns Image Gallery

In this piece, I’m going to analyze the historical local currency real total returns of different stock markets around the world: 46 different large cap indices, 12 different small and mid (SMID) cap indices, and, for the U.S., 4 different style indices–growth, momentum, quality, and value.  For each index, I’m going to generate a chart that visually captures key trends and data points, shown below with the Hong Kong stock market as the example:


Theoretical background, and instructions for how to read and interpret the charts, are provided in the paragraphs below.  The charts are presented at the end.

Dividend Decomposition

We can decompose–separate, conceptually split apart–equity total returns into two components:

  • Dividend Return: the return due to dividends paid out and reinvested.
  • Price Return: the return due to changes in the market price.

We can further decompose price return into two components:

  • Growth: the return due to growth in some chosen fundamental.
  • Valuation: the return due to changes in market valuation measured in terms of that fundamental.

To measure fundamental growth, we can choose any “fundamental” that we want, as long as the metric that we use to measure the change in valuation employs that same fundamental.  So, for example, we can decompose the price return into: the return due to earnings growth and the return due to the change in the price to earnings multiple.  Alternatively, we can decompose the price return into: the return due to book value growth and the return due to the change in the price to book multiple.  And so on.  We cannot, however, decompose the price return into: the return due to earnings growth and the return due to the change in the price to book multiple.  In such a decomposition, we would be mixing incompatible bases.

To conduct the decompositions, we’re going to use dividends as the fundamental.  We’re therefore going to decompose the real returns of each market into three components: the real return due to dividends paid out and reinvested at market prices, the real return due to growth of dividends, and the real return due to the change in the price to dividend multiple–which is just the inverse of the dividend yield.  Notice the aforementioned consistency between the fundamental and the valuation measure: we’re measuring fundamental growth in terms of dividend growth and changes in valuation in terms of changes in the price to dividend ratio.
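To make the arithmetic concrete, here is a minimal sketch of the decomposition in Python, using made-up numbers purely for illustration (this is my own sketch, not the code used to build the charts):

```python
def decompose(p0, p1, d0, d1, tr_factor, years):
    """Split an annualized real total return into its three components:
    reinvested dividends, dividend growth, and change in the price to
    dividend multiple.  p0/p1 are start/end real prices, d0/d1 are
    start/end real dividends, tr_factor is the cumulative real total
    return factor over the period."""
    ann = lambda f: f ** (1.0 / years) - 1.0    # annualize a cumulative factor
    total     = ann(tr_factor)                  # annualized real total return
    price     = ann(p1 / p0)                    # annualized real price return
    dividend  = total - price                   # contribution of reinvested dividends
    growth    = ann(d1 / d0)                    # contribution of dividend growth
    valuation = ann((p1 / d1) / (p0 / d0))      # contribution of the multiple change
    return total, dividend, growth, valuation

# Hypothetical market: price doubles over 10 years, dividend goes from 4 to 10,
# and the cumulative real total return factor is 3.0
total, dividend, growth, valuation = decompose(100, 200, 4, 10, 3.0, 10)
```

Note that the growth and valuation components compound multiplicatively into the price return: (1 + growth) × (1 + valuation) = (1 + price return), which is why the fundamental and the valuation metric must share the same basis.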

We need to conduct the analysis in real terms–with each country’s returns adjusted for inflation–because inflationary growth is worthless to investors, yet it’s a significant driver of nominal equity returns–often, the most significant driver of all.  If we don’t adjust the returns for inflation, then the performance of the stock markets of high-inflation countries such as Hungary will appear vastly superior to the performance of the stock markets of low-inflation countries such as Germany, even though the performance is not any better in real terms.

There are three reasons why we’re going to use dividends as the fundamental, rather than earnings.  First, dividends are more stable across the business cycle than earnings.  Second, the accounting practices used to measure earnings are not the same across different countries, different periods of history, or even different phases of the business cycle.  But dividends are dividends–unambiguous, concrete, indisputable.  Third, for international markets, historical dividend data is more readily available than historical earnings data.  MSCI provides total return and price return indices for all countries that have investible stock markets (available here).  In providing those indices, MSCI provides the materials necessary to back-calculate dividends across history.  We’re going to use the MSCI indices, which generally go back to 1971, along with international CPI data (available here), to conduct the decompositions.

Note that the dividends that we back-calculate in this way will be somewhat different from what you might see in an ETF or from an official indexing source–such as S&P or FTSE. As expected, the back-calculated dividends tend to closely match the dividend data on MSCI’s fact sheets, but even that data is not perfectly consistent with other sources.  Take any discrepancies lightly–it’s the big picture that we’re focused on here.
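For the curious, the back-calculation works roughly like this: in any month, the excess of the total return index’s growth over the price return index’s growth is attributable to the dividend.  A minimal sketch (my own illustration of the idea, with made-up index levels):

```python
def implied_dividends(tr, pr):
    """Back out monthly dividends (in price-index points) from a total
    return index (tr) and a price return index (pr) of equal length."""
    divs = []
    for t in range(1, len(tr)):
        # the month's dividend yield is the excess of total return over
        # price return; scale by the prior price level to get index points
        divs.append(pr[t - 1] * (tr[t] / tr[t - 1] - pr[t] / pr[t - 1]))
    return divs

# e.g., total return of 5% vs. price return of 2% implies a dividend of
# 3 index points paid off a prior price level of 100
monthly = implied_dividends([100, 105], [100, 102])
```

A trailing twelve month dividend is then just the sum of the last twelve monthly values.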

Now, the obvious problem with decomposing returns using dividends as the fundamental is that anytime the dividend payout ratio–the share of earnings that corporations pay out to their shareholders in the form of dividends–changes in a lasting, secular manner, we’re going to get distorted results.  Unfortunately, there’s no way to avoid this problem–we just have to deal with it.

Fortunately, most countries have maintained relatively consistent dividend payout ratios across history.  The U.S. is the obvious exception–but even with the U.S., the ensuing distortion isn’t too large, because in the historical period that we’re going to focus on–the early 1970s until the present day–dividend payout ratios haven’t changed by all that much.  The big downshift in payout ratios happened earlier, as you can see in the chart below, which shows smoothed dividends divided by smoothed earnings for the S&P 500:


Trailing Twelve Month Dividends and Peak Dividends

Like earnings, dividends are cyclical, though to a much lesser degree.  To smooth out their cyclicality, we’re going to conduct the price return decomposition using two different formulations of the dividend: first, the simple trailing twelve month (ttm) dividend, second, the peak dividend–the highest dividend paid out in any twelve month period at any time in the past.

The ttm dividend decomposition will separate the price return into ttm dividend growth and changes in the price to ttm dividend ratio. The peak dividend decomposition will separate the price return into peak dividend growth and changes in the price to peak dividend ratio.  Both decompositions will be presented in the charts so that the reader can compare them.

The best cases of hidden value are those where the ttm dividend yield is very low, but the peak dividend yield is very high.  A sharp divergence between the two suggests that the market is only taking into consideration the current dividend, which may be temporarily depressed, and is ignoring the past dividends that the corporate sector paid out–dividends that it may end up being able to pay out in the future, when the temporarily depressed conditions improve.
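Computing the peak dividend is just a running maximum over the ttm dividend series.  A sketch (my own illustration, with hypothetical numbers):

```python
def ttm_and_peak_yields(prices, ttm_divs):
    """Return (ttm yield, peak yield) series.  The peak dividend at each
    point is the highest ttm dividend seen up to and including that point."""
    peak, ttm_y, peak_y = float('-inf'), [], []
    for p, d in zip(prices, ttm_divs):
        peak = max(peak, d)           # running maximum of the ttm dividend
        ttm_y.append(d / p)           # yield on the current dividend
        peak_y.append(peak / p)       # yield if the dividend recovered to its peak
    return ttm_y, peak_y
```

For example, with prices of [100, 100, 40] and ttm dividends of [5, 9, 1], the final ttm yield is a paltry 1%, but the final peak yield is 9/40 = 22.5%–a Greece-like divergence that flags potential hidden value.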


The chart to the left ranks the stock markets of the world in terms of peak dividend yield and ttm dividend yield as of June 30, 2014.  Looking specifically at Greece, on a simple ttm basis, the dividend yield at current prices is a paltry 2.41%–hardly a signal of value.  But if you look at the peak dividend yield, which uses the highest dividend in Greece’s history–the dividend paid in the ’06-’07 period–the dividend yield at current prices comes in at a whopping 23.35%.

Now, it may be the case that the performance that the Greek corporate sector exhibited in ’06-’07 does not accurately represent the performance it is likely to exhibit in the future, and that the long-term yield that Greece will offer at current prices will be significantly lower, even as conditions in Greece improve.  But even if we assume that the long-term yield at current prices will be dramatically lower–say, 75% lower–that’s still roughly a 6% yield, which is an excellent yield.

Familiar readers will note that the countries that show up as cheap and expensive in terms of peak dividend yields also show up as cheap and expensive in Mebane Faber’s global CAPE analysis.  The two metrics are reasonably consistent in their signals, which is to be expected, given that they are reading the same valuation reality.  As a valuation metric, CAPE is admittedly superior to the peak dividend yield, but it’s also more costly–you have to use ten years of data to calculate it, which means that you lose ten years from the analysis.  For many of these countries–those that only have 20 or 30 years of history–ten years is a lot to lose.

The Goal of the Decomposition

So why are we doing this decomposition?  What’s the point?  We do the decomposition because it gives us a rough picture of how much of a given country’s stock market performance has been driven by changes in valuation, and how much has been driven by fundamentals.  Changes in valuation are a fickle driver of long-term returns.  If a country has outperformed because it has gone from cheap to expensive, or if it has underperformed because it has gone from expensive to cheap, then as investors we should want to go the other way–invest in the cheap country, and not invest in the expensive country.

But, crucially, if the country has underperformed because its corporate sector has exhibited consistently poor fundamental performance, then we don’t necessarily want to go the other way.  Warren Buffett reminds us that sound fundamental investing is not just about buying companies on the cheap, but about buying good companies on the cheap.  Bad companies on the cheap–value traps–are return killers.  This fact is just as true on the country level as it is on the individual stock level.

When we look at the data, we’re going to quickly notice that the corporate sectors of some countries have performed much better–produced substantially more fundamental growth for each unit of reinvestment that they’ve engaged in–than the corporate sectors of other countries.  This outperformance may have been the result of coincidental tailwinds that will not obtain going forward, and therefore extrapolating the outperformance into the future may prove to be a mistake.  But the outperformance may also be a sign of the inherent superiority of the corporate sectors of some countries relative to the corporate sectors of other countries.  If it is, then we should preferentially seek out the superior countries, and be willing to pay more to invest in them, and avoid the inferior countries, demanding a larger discount to invest in them.  Valuation matters to returns, but it’s not the only thing that matters.

Countries with corporate sectors that invest inefficiently and excessively, and that don’t respect the rights and interests of their shareholders, don’t tend to produce strong returns, even when they are bought on the cheap.  Take the example of Russia–a country notorious for its corruption, corporate waste, poor governance, and disrespect for property rights.  Since 1996, the Russian stock market has averaged an atrocious -5% annual real total return.  This disastrous performance was not the result of a high starting valuation–the dividend yield in 1996 was above 2%, higher than for the U.S.  Rather, the underperformance is a result of the fact that dividends haven’t grown by a single ruble in real terms since 1996, even though Russia hasn’t paid out very much in dividends along the way.  If the Russian corporate sector was earning profit and reinvesting that profit the entire time–a full 18 years–where did the profit go?  It certainly didn’t go to shareholders, as they have nothing to show for it.

On the other extreme, since 1996, the U.S. stock market has averaged a healthy 6% real total return.  This 6% has not been driven by any kind of irresponsible valuation expansion–in dividend terms, valuations were roughly the same in 1996 as they are today. Rather, it was the result of solid corporate performance.  The U.S. corporate sector produced a 2% annual real return from dividends paid out to shareholders, and an impressive 3.25% in annual real dividend growth.  Changes in the U.S. market’s valuation added an additional 0.75% annually to the real return, producing a real total return of 6%.

The Payout Ratio: Inevitable Messiness

Now, the picture we will get from the decomposition will be admittedly messy, for a number of reasons.  First, we aren’t incorporating the average dividend payout ratios of each country into the analysis, and therefore we can’t assess the price to dividend ratio as a valuation signal.  Is a price to dividend ratio of 50–i.e., a dividend yield of 2%–high, and therefore expensive?  It may be for Austria, but not necessarily for Japan.  Not knowing the dividend payout ratio, all we can do is compare the country’s current valuation to its own history, on the assumption that dividend payout ratios haven’t changed in a long-term, secular manner (and, to be fair, they may have).

Moreover, because we don’t know the dividend payout ratio, we don’t know how much growth we should expect out of each country. Countries that are reinvesting a large share of their cash flows in lieu of paying out dividends should be producing large amounts of growth–those that are doing the opposite should not be.   Using the information currently available, we can’t necessarily tell the difference.

Most importantly, we don’t know what the factors are that have caused the corporate sectors of some countries to perform substantially better than others.  The factors could be cyclical, macroeconomic, demographic, era-specific, driven by differences in industry concentrations, investment efficiency, corporate governance, shareholder friendliness, unsustainable booms in productivity, and so on–we don’t know.  Therefore we can’t necessarily be confident in projecting the past superior performance out into the future.

Still, the picture we get will give us a minimally sufficient look at what’s going on.  It will tell us where things have been going well for shareholders in terms of the growth and dividends that have been produced for them, and where things have not been going well. That knowledge should be enough to get us started on the important task of figuring out where the corporate overachievers and the value traps might lie.

How to Interpret the Charts

Consider the following chart, which decomposes the performance of Ireland from January 1989 to June of 2014:


The bright yellow in the left column (0.17%) is the actual annual real total return for the period.  The green (-0.68%) and blue (5.90%) below that is the annual real return contribution from growth plus dividends–which, notice, is what the real return would have been if there had been no change in valuation (the third component of returns removed).  The green (-0.68%) uses a ttm dividend basis, and the blue (5.90%) uses a peak dividend basis.

The pink below that (2.36%) is the annual contribution to real returns from reinvested dividends over the period.  It is calculated by taking the difference between the annual real total return for the period and the annual real price return for the period.  Below that is the annual contribution of dividend growth (-3.02%, 3.75%) and change in valuation (0.85%, -5.73%), measured under each basis (green = ttm dividend basis, blue = peak dividend basis).  Below that is an internal checksum of sorts, in which the contribution of the valuation change is calculated by a wholly different method, to make sure that the analysis is roughly correct.  You can ignore it.

In terms of the graphs, the upper left graph is the real total return (green) and the real price return (gray).  The upper middle graph is ttm real dividends per share (dark blue) and historical peak real dividends per share (light blue).  The upper right graph is the historical yield using the ttm dividend (purple, with the yellow box above showing the ttm dividend yield as of June 2014, which is 1.67%) and the peak dividend (hot pink, with the yellow box above showing the peak dividend yield as of June 2014, which is 9.26%).  To review, the peak dividend yield is what the yield would be at a given time if the dividend went back to its highest level up to that time.  For Ireland, it’s very high, because the dividends that Ireland paid out in the last cycle were very high relative to Ireland’s current price, which is depressed–you can interpret this as a sign of cheapness for the Irish market.  The lower left graph is the price to ttm dividend ratio (orange) and the price to peak dividend ratio (red).  The lower middle graph is the 5 year (light green) and 10 year (bright blue) growth rates of the real ttm dividend.

The lower right graph is a running estimate of future 5, 7, and 10 year Shillerized real returns using the “real reversion” method laid out in a prior piece.  We calculate this estimate by discounting the market’s average real return to reflect a reversion from the present valuation to the average valuation.

Now, let me be fully up front with the reader.  Like all backward-looking return estimates that purport to fit with actual forward-looking results across history, this estimate involves cheating.  We are using information about the average return for the entire data set, past and future, and the average valuation for the entire data set, past and future, to estimate the future return from past moments in time.  From the perspective of those moments, the average of the entire data set includes information about the future that was not known then–therefore, in projecting a reversion to the average, our estimates are taking a peek at the future.  That is, they are cheating.

You’ll see that the method often nails it, with extremely high correlations with actual returns, well above 90%.  But don’t take that to mean anything special.  As I’ve emphasized elsewhere, these types of fits are very easy to build in hindsight, because you can effectively peek ahead at the actual results, and utilize now-determined information about the future that was not determined–or knowable in any way–at the time that the prediction would have needed to have been made.

I introduce the real reversion estimates into the chart not to make any sort of confident prediction about what the forward real return of any given market will actually be–such a prediction would require a look that goes much deeper than a backwards-looking curve fit that exploits cheating–but simply to give the reader an idea of when, in the market’s history, valuations were cheap and expensive relative to their averages for the period, and where they are now, relative to their historical averages.  The averages for the period in question may not end up being the averages of the future, and therefore they may not be relevant to the future.
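Based on the description above, the “real reversion” estimate can be sketched as follows.  This is my reconstruction from the verbal description, not the exact code behind the charts:

```python
def real_reversion(avg_real_return, current_pd, avg_pd, years):
    """Estimate the annualized forward real return over `years` by taking
    the full-period average real return and layering on a reversion from
    the current price-to-dividend multiple to the full-period average."""
    reversion = (avg_pd / current_pd) ** (1.0 / years)  # annualized multiple change
    return (1 + avg_real_return) * reversion - 1

# A market at twice its average multiple gets a below-average estimate;
# a market at half its average multiple gets an above-average estimate.
rich  = real_reversion(0.06, 50.0, 25.0, 10)
cheap = real_reversion(0.06, 12.5, 25.0, 10)
```

Note the cheating the text describes: both `avg_real_return` and `avg_pd` are computed over the entire data set, including periods that lie in the “future” relative to the point being estimated.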

The Charts With Associated Tables

Finally, to the charts.  What follows is a user-controlled slideshow of charts for different countries across different time periods of history.  If you click on any image, it will put you into that country’s part of the slideshow.

Before each chart, there’s a table, sorted by country name, that presents all of the results.  To view the table, simply click on it.  On the right side of each table, there’s a section on “non-equity growth”, which includes potentially relevant information on population growth, real GDP growth per capita, and real GDP growth over the period (source here).

Jan 1971 to Jun 2014: Developed Market Returns




Jan 1989 to Jun 2014: Developed and Emerging Market Returns




Jan 1989 to Jun 2014: US Growth, Momentum, Value, Quality





Jan 1994 to Jun 2014: Developed, Emerging and Frontier Market Returns




Jan 1996 to Jun 2014: Developed and Emerging Market Small and Mid Caps




1996 to 2014: Czech Republic, Egypt, Hungary, Russia





After examining the data, here are my conclusions:

  • The European markets, in particular the PIIGS, are very cheap relative to their own valuation histories and relative to the valuations of other countries.  On a ttm dividend basis, the returns due to growth have been poor, but that’s because dividends have crashed to almost nothing in response to the crisis.  If dividends eventually return to where they were prior to 2008–a big if, but one worth considering–the returns of the European periphery countries, and of European countries in general, should be very attractive.
  • Japan’s underperformance since 1989 has been due primarily to an egregiously high starting valuation.  But this excess has been entirely worked off, and Japan now offers a higher dividend yield than the U.S.  Japan’s corporate performance, however, remains something of a mystery.  It’s hard to quantify Japan’s dividend payout ratio, and to therefore get an estimate of how much dividend growth should have occurred from 1989 to today, because Japanese earnings are significantly understated due to excessive depreciation.  If the Japanese dividend payout ratio relative to true earnings has been low, which it probably has been, then Japanese ROEs have been poor.  The country owes its shareholders more growth for the amount that it has allegedly “reinvested.”  If growth by capital investment is not possible in Japan given the aging, shrinking population demographics, and weak consumer demand, then earnings should be deployed into dividends and share buybacks.  If Abenomics manages to stimulate this outcome, and promote an associated improvement in shareholder yield, then Japanese equities should produce strong returns going forward.
  • Australia and New Zealand are reasonably valued, with healthy dividend yields.  But the commodity-centric earnings and dividends that they generated in the last cycle may not be sustained going forward.
  • The US is expensive relative to its own history and relative to other countries.  It’s among the most expensive stock markets in the world.
  • Emerging Markets are generally cheap, but, macroeconomically, it’s difficult to project their performance over the last 20 to 30 years out into the future.  Interesting countries to look at on valuation include Brazil, Singapore, and Taiwan.  Korea and Turkey, in contrast, are hardly cheap on a dividend basis, and have not performed well in terms of the amount of dividend growth that they have generated.
  • Russia and China are, at a minimum, weird.  Both have exhibited extremely volatile markets over the last 18 years, with dividend fluctuations tracking the price fluctuations.  Russia, in particular, has generated no real dividend growth since 1996, and also no real dividend growth since 2003, after the default.  Both markets are presently very cheap, but they won’t make for good investments unless they cease to be the value traps that they’ve proven themselves to be in the past.
  • For styles, counterintuitively, the “value” sector of the U.S. market is expensive, and the “quality” sector is cheap.  So if you’re looking to invest in the U.S., your best bet is probably to buy high quality multinationals with strong competitive moats–the kind that Jeremy Grantham of GMO frequently touts.
  • For small and mid caps, the US and the UK, though unquestionably expensive, may not be as egregiously expensive as some seem to think.  Their dividends are basically at the historical average since 1996.  Singaporean and Canadian small and mid caps, in contrast, appear very attractively valued.
  • The Nordic countries–Sweden, Norway, Finland, and Denmark–have produced fantastic dividend growth for their shareholders, on pretty much all measured horizons.  With the exception of Denmark, these countries are all reasonably valued at present–but, of course, we can’t be sure if the stellar dividend growth will continue.
  • It’s difficult to find a connection between macroeconomic aggregates–population growth, GDP per capita growth, and GDP growth–and returns.  If anything, we can probably say that healthy, moderate, positive population growth, with moderate GDP per capita and GDP growth, are better for returns than the opposite.

How Money and Banking Work On a Gold Standard

Most financial professionals–to include those that work in the banking industry–do not have a clear understanding of how money and banking work on a gold standard.  This is hardly something to be ashamed of–most mathematicians don’t have a clear understanding of how an abacus works, and yet no one would consider that a negative mark.  There’s no responsibility to understand the inner workings of the antiquated, obsolete technologies of one’s field.

With that said, there’s a lot of value to be gained from learning how money and banking work on a gold standard–both the “free banking” and the “central banking” varieties. There’s also value in learning how the U.S. monetary system got from where it was in the 17th century to where it is today.  The field of money and banking is filled with concepts that are difficult to intuitively grasp–concepts like reserves, deposits, base money, money multiplication, and so on.  In a study of the gold standard and its history, each of these concepts is made concrete–you can readily point to the piece of paper, or the block of metal, that each concept refers to.  Ironically, the intricacies of the modern monetary system are easier to understand once one has learned how the equivalent concepts work on a gold standard.

In this piece, I’m going to carefully and rigorously explain how different types of gold standards work.  I’m going to begin with a discussion of how bartering gives rise to precious metals as a privileged asset class.  I’m then going to discuss money supply expansion on a gold standard–what the actual mechanism is.  After that, I’m going to discuss David Hume’s famous price-specie flow mechanism, which maintains a balance of payments between regions and nations that use a gold standard.  I’m then going to discuss the underlying mechanics of fractional-reserve free banking, to include a discussion of how it evolved.  After that, I’m going to explain how the market settles on an interest rate in a fractional-reserve free banking system.  I’m then going to explain how fractional-reserve central banking works on a gold standard, to include a discussion of the use of reserve requirements and base money supply expansion and contraction as a means of controlling bank funding costs and aggregate bank lending.  Finally, I’m going to refute two misconceptions about the Gold Standard–first, that it caused the Great Depression (it categorically did not), and second, that its reign in the U.S. ended in 1971 (not true–its reign ended in the Spring of 1933).

Bartering, Precious Metals, and Mints

We begin with a simple barter economy in which individuals exchange goods and services directly, without using money.  In a barter economy, certain commodities will come to be sought after not only because they satisfy the wants and needs of their owners, but also because they are durable and easy to exchange.  Such commodities will provide their owners with a means through which to store wealth for consumption at a later date, by trading.  On this measure, metals–specifically, precious metals–will score very high, and will be conferred with a trading value that substantially exceeds their direct and immediate usefulness in everyday life.  The Father of Economics himself explains,

“In all countries, however, men seem at last to have been determined by irresistible reasons to give the preference, for this employment, to metals above every other commodity. Metals can not only be kept with as little loss as any other commodity, scarce any thing being less perishable than they are, but they can likewise, without any loss, be divided into any number of parts, as by fusion those parts can easily be reunited again; a quality which no other equally durable commodities possess, and which more than any other quality renders them fit to be the instruments of commerce and circulation.  Different metals have been made use of by different nations for this purpose.  Iron was the common instrument of commerce among the antient Spartans; copper among the antient Romans; and gold and silver among all rich and commercial nations.” (Adam Smith, The Wealth of Nations, 1776–Book I, Chapter IV, Section 4 – 5)

Crucially, the trading value of precious metals will end up being grounded in a self-fulfilling belief and confidence in that value, learned culturally and through a process of behavioral reinforcement. Individuals will come to expect that others will accept precious metals in exchange for real goods and services in the future, therefore they will accept precious metals in exchange for real goods and services now, holding the practice in place and validating the prior belief and confidence in it. Every form of money gains its power in this way–through the self-fulfilling belief and confidence that it will be accepted as such.

Now, in economic systems where precious metals are the predominant form of money, two practical problems emerge: measurement and fraud.  It is inconvenient for individuals to have to measure the precise amount of precious metal they are trading every time they trade. Furthermore, what is presented as a precious metal may not be fully so–impurities may have been inserted to create the illusion that more is there than actually is.

The inevitable solution to these problems comes in the form of “Mints.”  Mints are credible entities that use stamping and engraving to vouch for the weight and purity of units of precious metal.  The Father of Economics again,

“People must always have been liable to the grossest frauds and impositions, and instead of a pound weight of pure silver, or pure copper, might receive in exchange for their goods, an adulterated composition of the coarsest and cheapest materials, which had, however, in their outward appearance, been made to resemble those metals. To prevent such abuses, to facilitate exchanges, and thereby to encourage all sorts of industry and commerce, it has been found necessary, in all countries that have made any considerable advances towards improvement, to affix a publick stamp upon certain quantities of such particular metals, as were in those countries commonly made use of to purchase goods.  Hence the origin of coined money, and of those publick offices called mints.” (Adam Smith, The Wealth of Nations, 1776–Book I, Chapter IV, Section 7, emphasis added)

Mining and Money Supply Growth

In a healthy, progressing economy, where learning, technological innovation and population growth drive continual increases in output capacity–increases in the amount of wanted “stuff” that the economy is able to produce each year–the supply of money also needs to increase.  If it doesn’t increase, the result will either be deflation or economic stagnation (for a clear explanation of the reasons why, click here).  Both of these options are undesirable.

Fortunately, in a metal-based monetary system, there is a natural mechanism through which the money supply can expand: mining.  Miners extract metals from the ground.  They take the metals to mints to have them forged into coins.  They then spend the coins into the economy, increasing the money supply.

The problem, of course, is that there is no assurance that the output of the mining industry, which sets the growth of the money supply, will proceed on a course commensurate with growth in the output capacity of the real economy–its ability to produce the real things that people want and need.  If the mining industry produces more new money than can be absorbed by growth in the economy’s output capacity, the result will be inflation, an increase in the price of everything relative to money.  This is precisely what happened in Europe in the years after the Spanish and Portuguese discovered and mined The New World.  They brought its ample supply of precious metal back home to coin and spend–but the economy’s output capacity was no different than before, and could not meet the demands of the increased spending.  In contrast, if the mining industry does not produce enough new money to keep up with growth in the economy’s output capacity, the result will be deflation–what Europe frequently saw in the periods before the discovery of The New World.

In a metal-based monetary system, there is a natural feedback that helps keep the mining industry from producing too much or too little new money.  If the industry produces too much new money, the ensuing inflation of prices and wages will make mining less profitable in real terms, and discourage further investments in mining.  If the mining industry does not produce enough new money, the deflation of prices and wages will make mining more profitable in real terms, and encourage further investments in mining.  To the extent that a metallic monetary system is closed to external flows, this feedback is the only feedback present to stabilize business cycles. Obviously, it can’t act quickly enough or with enough power to keep prices stable, which is why large cycles of inflation and deflation frequently occurred prior to the development and refinement of modern central banking.

If it seems crazy to think that humanity could have survived under such a primitive and constricted monetary arrangement–an arrangement where a limited, unsupervised, unmanaged supply of a physical object forms the basis of all major commerce–remember that the economies of the past were not as specialized and dependent upon money and trade as they are today.  Trading in money would have been something that only the wealthy and royal classes would ever have to worry about.  The rest would meet the basic needs of life–food, water, shelter–by producing those things themselves, or by working for those with means and receiving them directly in compensation, as a serf in a feudal kingdom might do.

The Price-Specie Flow Mechanism

What is unique about a metal-based monetary system is that money from any one country or geographic region can easily be used in any other, without a need for conversion.  All that is necessary is that individuals trust that the money consists of the materials that it claims to consist of, as signified in its stamp or engraving.  Then, it can be traded just as its underlying materials would be traded.  After all, it is those materials; its being those materials is the basis for its being worth something.

In early British America, Spanish silver dollars, obtained from trade with the West Indies, were a popular form of money, owing to the tight supply of British currency in the colonies.  To use the Spanish dollars in commerce, there was no need to convert them into anything else; they were already 387 grains of pure silver, their content confirmed as such by the mark of the Spanish empire.


The prospect of simple, undistorted international flows under a metal-based monetary system gives rise to an important feedback that enforces a balance of payments between different regions and nations and that acts to stabilize business cycles.  This feedback is called the “price-specie flow mechanism”, introduced by the philosopher David Hume, who explained it in the following passages:

“Suppose four-fifths of all the money in Great Britain to be annihilated in one night, and the nation reduced to the same condition, with regard to specie, as in the reigns of the Harrys and Edwards.  What would be the consequence?  Must not the price of all labour and commodities sink in proportion, and every thing be sold as cheap as they were in those ages?  What nation could then dispute with us in any foreign market, or pretend to navigate or to sell manufactures at the same price, which to us would afford sufficient profit? In how little time, therefore, must this bring back the money which we had lost, and raise us to the level of all the neighbouring nations? Where, after we have arrived, we immediately lose the advantage of the cheapness of labour and commodities; and the farther flowing in of money is stopped by our fulness and repletion.”  (David Hume, Political Discourses, 1752–Part II, Essay V, Section 9)

“Again, suppose, that all the money of Great Britain were multiplied fivefold in a night, must not the contrary effect follow? Must not all labour and commodities rise to such an exorbitant height, that no neighbouring nations could afford to buy from us; while their commodities, on the other hand, became comparatively so cheap, that, in spite of all the laws which could be formed, they would be run in upon us, and our money flow out; till we fall to a level with foreigners, and lose that great superiority of riches, which had laid us under such disadvantages?” (David Hume, Political Discourses, 1752–Part II, Essay V, Section 10)

For a relevant example of the price-specie flow mechanism in action, suppose that Europe is on a primitive gold standard, and that Germans make lots of stuff that Greeks end up purchasing, but Greeks don’t make any stuff that Germans end up purchasing.  Money–in this case, gold–will flow from Greece to Germany.  The Greeks will literally be spending down their money supply, removing liquidity and purchasing power from their own economy.  The liquidity and purchasing power will be sent to Germany, where it will circulate as income and fuel a German economic boom. The ensuing deflation of prices and wages in Greece, and the ensuing inflation of prices and wages in Germany, will prevent Greeks from purchasing goods and services from Germany, and will make it more attractive for Germans to purchase goods and services from Greece (or to invest in Greece).  Money–again, gold–will therefore be pulled back in the other direction, from Germany back to Greece, moving the system towards a balanced equilibrium.
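Hume’s feedback loop lends itself to a quick numerical sketch.  In the toy model below (my own illustration, not something from the historical texts), each country’s price level is assumed to be proportional to its gold stock, and gold flows each period from the high-price country to the low-price one in proportion to the price gap:

```python
# Toy simulation of the price-specie flow mechanism. Assumptions (mine,
# for illustration): prices are proportional to each country's gold
# stock, and each period's trade settlement moves gold from the
# high-price country to the low-price country at rate `flow_rate`.

def simulate_specie_flow(gold_a, gold_b, flow_rate=0.25, steps=50):
    """Return the two gold stocks after `steps` rounds of settlement."""
    for _ in range(steps):
        # Price levels, assumed proportional to each money supply.
        price_a, price_b = gold_a, gold_b
        # Gold flows toward the cheaper country, shrinking the gap.
        flow = flow_rate * (price_a - price_b) / 2
        gold_a -= flow
        gold_b += flow
    return gold_a, gold_b

# Start with a large imbalance: most of the gold has migrated from
# Greece to Germany. The mechanism pulls both back toward parity.
germany, greece = simulate_specie_flow(180.0, 20.0)
print(round(germany), round(greece))  # both converge toward 100
```

The total gold stock is conserved throughout; only its distribution changes, which is the essence of Hume’s point that no country can hold on to an “excess” of specie indefinitely.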

It is only with fiat money, money that can be created by a government at will, that this mechanism can be circumvented.  The Chinese monetary authority, for example, can issue new Renminbi and use them to purchase U.S. dollars, exerting artificial downward pressure on the Renminbi relative to the U.S. dollar, and preserving a large trade imbalance between the two nations.  Metal, in contrast, cannot be created at will, and so there is no way to circumvent the mechanism under a strict metallic monetary system.

Paradigm Shift: The Development of Fractional-Reserve Free Banking

Up to now, all we have for money are precious metals–coins and bars of gold and silver. There are promises, there is borrowing, there is debt–but these are not redeemable on demand for any defined amount.  Anyone who accepts them as payment must accept illiquidity or the risk of mark-to-market losses if the holder chooses to trade them.

The paradigm shift that formally connected borrowing and debt with securities redeemable on demand for a defined amount occurred with the development of free banking. Historically, savers sought to keep their supplies of gold and silver–in both coin and bar form–in safe deposits maintained by goldsmiths.  The goldsmiths would charge a fee for the deposit, and would issue a document–a banknote–redeemable for a certain amount of gold and silver on the holder’s request.  Given that the goldsmiths generally had reputations as honest dealers, the banknotes would trade in the market as if they were the very gold and silver that they could be redeemed for.

Eventually, the goldsmiths realized that not everyone came to redeem their gold and silver deposits at the same time.  The gold and silver deposits coming in (i.e., the banknotes being created) would generally balance out with the gold and silver deposits leaving (i.e., the banknotes being redeemed). This balancing out of incoming and outgoing deposit flows allowed the goldsmiths to issue more banknotes than they were storing in actual gold and silver.  They could print and loan out banknotes in excess of the gold and silver deposits that they actually had on hand, and receive interest in compensation.  Thus was born the phenomenon of fractional-reserve banking.

Initially, the banking was “free” banking, meaning that there was no government involvement other than to enforce contracts.  The banknotes of each bank were accepted as payment based on the reputation and credibility of the bank.  Each bank could issue whatever quantity of banknotes, over and above its actual holdings of gold and silver, that it felt comfortable issuing.  But if the demand for redemption in gold and silver exceeded the supply on hand, that was the problem of the banks and the depositors–not the problem of the government or the taxpayer.

The U.S. operated under a free banking system from the initial Coinage Act of 1792 all the way until the Civil War.  The currency was defined in terms of gold, silver and copper as follows:


Citizens would send the requisite amount of precious metal to the U.S. mint and have it coined for a small fee.  They would then store the coins–and whatever other form of precious metal they owned–in banks, and receive banknotes in exchange.  Individual banks issued different individual banknotes, with different designs.



In lieu of banknotes, bank customers also accepted simple deposits, against which they could write cheques.  The difference between a cheque and a banknote is that a banknote represents a promise to pay to the holder, on demand, while a cheque represents an order to a bank to pay a specific person, whether or not that person is currently holding the cheque.  So, for example, if I have a deposit account with Pittsfield bank in Massachusetts, and I write a cheque to someone, that person has to deposit the cheque in order to use it as currency.  He can’t trade it with others directly as money, because it was written to him from me.  He has to take the cheque to his bank–say, Windham bank–to cash it (or deposit it).  In that case, coins (gold and silver) will be transferred from Pittsfield to Windham.  In contrast, if I give the person a banknote from Pittsfield as payment, he can use it directly in the market–provided, of course, that Pittsfield has a sound reputation as a bank.

The issuance of banknotes, and their widespread acceptance as a working substitute for the actual metallic money that they were redeemable for, created a mechanism through which the money supply–the supply of legal tender that had to be accepted to pay debts public and private–could expand in accordance with the economy’s needs.  Granted, prior to the advent of fractional-reserve banking, it was possible to trade debt securities and debt contracts in lieu of actual gold and silver–but these securities and contracts were not redeemable on demand.  The recipient had to accept a loss of liquidity and optionality in order to take them as payment.  A banknote, in contrast, is redeemable on demand by anyone who holds it, and is therefore operationally equivalent to the legal money–the coined precious metal–that backs it.

A true gold standard is a gold standard built on fractional-reserve free banking.  The government defines the value of the currency in terms of precious metals, and then leaves banks in the private sector to do as they please–to issue whatever quantity of banknotes they want to issue, and to pay the price in bankruptcy if they behave in ways that create redemption demand in excess of what they can actually redeem.  There is no government intervention, no regulatory imposition, no reserve requirement, no capital ratio–just the supervision of market participants themselves, who have to do their homework.

The Unstable Mechanics of Fractional-Reserve Free Banking

The following chart shows how gold-based fractional-reserve free banking functions in practice.


We begin before there has been any lending.  Banks #1 and #2 have each received $100 in gold and have each issued $100 in banknotes to customers in exchange for it.  Bank #3 has received $100 in gold and has issued checkable deposit accounts to customers with $100 recorded in them.  We can define the M2 money supply to be the sum of banknotes, checking accounts, and gold and silver held privately, outside of banks. The M2 money supply for the system is then $300.  We can define the base money supply to be the total supply of gold in the system.  The base money supply is then $300.  The base money supply equals the M2 money supply because there hasn’t yet been any lending. Lending is what will cause the M2 money supply to grow in excess of the base money supply.

Let’s assume that a customer of Bank #3 writes a check for $50 to a person who deposits the check at Bank #1.  At the end of the day, when the banks settle their payments, Bank #3 will send $50 worth of gold to Bank #1, and will reduce the customer’s deposit account by $50.


Here, we’ve broken the banking system out into assets and liabilities.  The assets of the banking system are $300, all in the form of gold.  The liabilities are also $300, in the form of banknotes and deposit accounts, both of which can be redeemed for gold on demand. The assets and the liabilities are equal because the banks aren’t carrying any of their own capital (and that’s fine–they don’t need to, there’s no regulator to impose a capital requirement in a free-banking system).

Now, let’s assume that Bank #3 issues new loans to customers worth $18,000.


We’ll assume that half of the loans are issued to the borrowers in the form of banknotes, and half are issued in the form of deposits, held at Bank #3.  Crucially, Bank #3 has printed this new money out of thin air.  That’s what banks do when they lend to the non-financial sector or purchase assets from the non-financial sector–they print new money.  All banks do this, not just central banks.  The money can be used like any other money in the system, provided that people trust the bank.

Taking a closer look at Bank #3, it now has $18,050 of assets, and $18,050 of liabilities. The assets are composed of $50 worth of base money (gold), and $18,000 worth of loans, which are obligations on the part of the customers to repay what was borrowed, with interest.  The liabilities are $9,000 worth of banknotes, and $9,050 worth of deposit accounts.
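The balance-sheet arithmetic above can be verified with a few lines of bookkeeping.  The dollar amounts are taken from the example in the text; the helper functions and dictionary layout are my own illustration:

```python
# Verifying the worked example: M2 vs. base money as Bank #3 lends
# $18,000 into existence. Amounts come from the text; the bookkeeping
# helpers are illustrative.

def m2(banks, private_gold=0):
    """M2 = banknotes + deposits + gold held outside the banks."""
    return private_gold + sum(b["banknotes"] + b["deposits"] for b in banks.values())

def base_money(banks, private_gold=0):
    """Base money = all the gold in the system, wherever it sits."""
    return private_gold + sum(b["gold"] for b in banks.values())

# State after the $50 check from Bank #3 clears at Bank #1.
banks = {
    "bank1": {"gold": 150, "banknotes": 100, "deposits": 50, "loans": 0},
    "bank2": {"gold": 100, "banknotes": 100, "deposits": 0,  "loans": 0},
    "bank3": {"gold": 50,  "banknotes": 0,   "deposits": 50, "loans": 0},
}
assert m2(banks) == base_money(banks) == 300  # no lending yet

# Bank #3 lends $18,000 out of thin air: half as banknotes, half as deposits.
b3 = banks["bank3"]
b3["loans"] += 18_000
b3["banknotes"] += 9_000
b3["deposits"] += 9_000

print(m2(banks))          # 18300 -- M2 has grown by the amount lent
print(base_money(banks))  # 300   -- but the gold stock is unchanged
```

Note that Bank #3’s own balance sheet balances at $18,050 on each side, exactly as described above, while the system’s gold never moves during the act of lending.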

Now, we can define the term “reserve” to mean any money–in this case, gold, because we’re on a gold standard–that the banking system is holding that can be used to meet redemption requests.  Right now, the total quantity of reserves in the system equals the total quantity of base money, because all of the base money–all of the gold–is being held inside the banking system.  All of the gold is in the hands of the banks and available to fund redemptions.  Nobody has taken any gold out; there are no private holders.

If a customer writes a check out of his account in Bank #3, and that check gets deposited in Bank #1, Bank #3 will transfer reserves–in this case, gold–to Bank #1.  Similarly, when a customer redeems one of Bank #3’s banknotes, reserves–again, gold–will be moved from Bank #3 into the hands of the customer.

So let’s assume that a customer of Bank #3 writes a $75 cheque that gets cashed at Bank #1.  Alternatively, let’s assume that a customer tries to redeem $75 worth of Bank #3’s banknotes for gold.  What will happen?  Notice that Bank #3 doesn’t have $75 worth of reserves–gold on hand–to transfer to Bank #1, or to give to the customer who wants to take the gold out.  So it will have to default on its liabilities.  It promised to redeem the banknote in gold on demand, and it can’t.

We’ve arrived at the obvious problem with free banking on a gold standard.  There’s no reserve requirement, no requirement for a capital cushion, and no lender of last resort. The system is therefore highly unstable, and becomes all the more unstable as the quantity of lending (which is determined by the investment and spending appetite of borrowers, and by the risk appetite of lenders) grows relative to the quantity of gold (which is determined by the business activities of gold miners, independently of what the economy is doing elsewhere).

If even a small fear of illiquidity or insolvency at a bank develops, it can snowball into a full-on bank run–of which there were far too many in the free-banking era.  Granted, to help each other meet redemption requests, banks can issue short-term overnight gold loans to each other.  But this isn’t enough to create stability; in times of trouble, when such lending is most needed, it will tend to disappear.

In addition to being unstable, free banking systems are also undesirably pro-cyclical.  To understand the pro-cyclicality, we have to step back and examine how interest rates work in a free banking system.

Interest Rates in a Free Banking System

In a modern system, the central bank controls short-term low-risk interest rates–the rates at which banks borrow from each other, and at which they borrow from their customers (who hold deposits with them).  That interest rate is the funding cost of the banking system.  Its expected future trajectory plays a crucial role in determining interest rates across all other parts of the yield curve.

But we don’t have a central bank right now.  We just have gold, and the market.  How, then, will the system settle on an interest rate?  The equilibrium value of any market interest rate will be a function of (1) the supply of funds that lenders wish to lend out and (2) the demand on the part of borrowers to borrow funds.  If there is a high demand for borrowing, and a low supply of funds that lenders want to lend out, the market will tend to equilibrate at a high interest rate.  If there is a low demand for borrowing, and a large supply of funds that lenders want to lend out, the market will tend to equilibrate at a low interest rate.

In good economic times, banks are going to feel confident, comfortable–willing to lend excess funds to each other.  Customers will feel similarly–willing to trust that the gold that backs their deposit accounts and banknotes is safe and sound inside bank vaults.  They will not demand much in the form of interest to store their money, provided that they will be able to retain access to its use (and they will–there is no loss of liquidity when gold is deposited in a checking account or exchanged for a banknote–the money can still be spent now).  The interest rate at which banks borrow from each other and from their customers will therefore be low.  But we don’t want it to be low.  We want it to be high, because environments of confidence and comfort are the kinds of environments that produce excessive, imprudent, unproductive lending and eventual inflation.

The only reason that banks would pay a high rate to borrow funds (gold) from each other, or from their customers, would be if they were facing redemption requests, or if they were uncomfortable with the amount of gold that they had on hand to meet redemption requests.  Again, recall that banks don’t lend out their reserves–the actual base money, the gold.  Those reserves are simply there to meet customer requests to redeem the banknotes and deposits that they create out of thin air.

But if times are good, banks aren’t going to be afraid of redemption requests.  Their lack of fear will be justified, as there aren’t going to be very many panicky customers trying to redeem.  This setup will make it even easier for them to lend.  In theory, as long as no one tries to redeem, they can offer an infinite supply of loans to the market, with each loan representing incremental profit.  That’s obviously not what we want here.  In good times, we want tighter monetary conditions, a tighter supply of loans to be taken out, in order to discourage excessive, imprudent, unproductive lending, and to mitigate an eventual inflation.

In bad times, the reverse will prove true.  Banks won’t lend to each other, even when they have good collateral to post, and customers won’t be comfortable holding their savings in banks.  They will want to take their savings out–which means taking out gold, and pushing the system towards default.  Without a lender of last resort, the system will be at grave risk of seizing up, especially as rumors and stories of failed redemptions spread.  Lest there be any confusion, this happened many times in the 19th century.  During periods of economic contraction, the system was an absolute disaster, which is the reason why the country moved away from free banking, and towards a system of central banking.

Now, to be fair, free banking does offer a natural antidote to inflation.  If excessive lending brings about inflation, market participants will redeem their gold and invest and spend it abroad (purchase cheap imports), in accordance with the price-specie flow mechanism.  This will remove gold from the banking system, and raise the cost of funding for banks–assuming, of course, that banks feel a need to maintain a healthy supply of reserves.  But again, in good times, there is nothing to say that banks will feel such a need.  Inflation, and the ensuing migration of gold out of the system, is not likely to stop excessive lending in time to prevent actual problems.  To the contrary, the migration of gold out of the system is likely to be a factor that only forces a reaction after it’s too late, after the economy is already in recession, when further risk-aversion and monetary tightness on the part of banks will be counterproductive.

Central Banking on a Gold Standard

With the passage of the Federal Reserve Act in 1913, the U.S. financial system finalized its transition from a free banking system to a central banking system, using a gold standard. The following chart, which begins where the previous chart left off, gives a rough illustration of the way the system worked:


The system worked as follows. Private citizens would deposit their gold with private banks and receive credited deposit accounts in exchange.  The private banks would then deposit the gold with the central bank.  In exchange for the gold, the private banks would receive banknotes–in this case, Federal Reserve banknotes, “greenbacks.”  As before, in lieu of receiving and holding actual paper banknotes from the central bank, the banks could receive credits on their deposit accounts with the central bank.  To keep things simple and intuitive from here forward, I’m going to treat these deposit accounts as if they were simple paper banknotes held by the banks in vaults–they just as easily could be.

Instead of depositing gold with private banks, citizens could also deposit their gold directly with the central bank, and receive banknotes directly from the central bank in exchange. But the banknotes would eventually end up on deposit at private banks, where people would store money.  So the system would arrive at the same outcome.

Notice that in this model, the central bank is fulfilling the same role that private banks fulfilled on the free banking model.  It is issuing banknotes that can be redeemed on demand in gold, against a reserve supply of gold to meet potential redemption requests.  Crucially, it has the power to issue an amount in banknotes that is greater than the amount that it is actually carrying in gold.  It therefore has the power to act as a genuine fractional-reserve bank.  That power is what allows it to expand the monetary base, control short-term interest rates, and function as a lender of last resort.  As long as customers do not seek to redeem their banknotes for gold in an amount that exceeds the amount of gold that the central bank actually has on hand, then the central bank can issue, out of thin air, as many banknotes as it wants.

This is actually a common misconception–that the central bank on a gold standard is necessarily constricted by the supply of physical gold.  Not true.  What constricts the central bank on a gold standard is (1) the amount of confidence that the public has in the central bank and (2) the severity of trade imbalances.  If the public does not panic and try to redeem gold, and if the price-specie flow mechanism does not force a gold outflow in response to a substantial trade imbalance, then a gold standard will impose no constraint on the central bank at all.

Now, what is critically different from the free banking model is that the central bank imposes a reserve requirement on the private banks.  They have to hold a certain quantity of banknotes as reserves in their vaults equal to a percentage of their total deposit liabilities.

You might think that the purpose of this requirement is to ensure that the banks maintain sufficient liquidity to meet possible redemptions–in our Americanized example, customers going to the bank and asking to redeem their deposits in greenbacks, or writing cheques on their deposit accounts which then get cashed at other banks, forcing a transfer of greenbacks from the bank in question to those other banks.  But the system now has a lender of last resort–the Fed.  That lender is there to print and loan to banks any greenbacks that are needed to meet redemption requests.  As long as the Fed is willing to conduct this lending, there is no need for the banks to hold any reserves at all.

The real reason why the reserve requirement exists is to allow the central bank to control the proliferation of bank lending, and to therefore maintain price stability.  Notice that if there were no reserve requirement, it would be up to the banks to decide how much “funding” they needed.  As long as their incoming deposits consistently offset their outgoing deposits (those redeemed for greenbacks or cashed via cheque in other banks), they could theoretically loan out an infinite supply of new deposits (that is, print an infinite supply of new money), against a very small supply of banknotes on reserve in vault, or even against no banknote reserves at all.

The central bank controls the proliferation of bank lending by controlling its cost–by making it cheap or expensive.  It controls the cost of bank lending by setting a reserve requirement, and then using open market operations–asset purchases and sales–to control the total amount of banknotes in the banking system.  If we assume that all but a few banknotes will end up deposited in banks, then the total amount of banknotes in the banking system just is the total quantity of funds available for banks to use to meet their reserve requirements.

When the central bank makes purchases from the private sector, it takes assets out of the system, and puts banknotes in.  Prior to the purchases, the assets were being held directly by the individuals that owned them–with no involvement from banks (unless banks were the owners, which we’ll assume they weren’t).  But the individuals have now sold the assets to the central bank, and have received banknotes instead.  They’re going to take those banknotes to banks and deposit them.  The banknotes will therefore become bank funds, stored in a vault, which can be used to meet reserve requirements.  You can see, then, how asset purchases end up putting funds into the banking system.  Asset sales take them out, through a process that is exactly the reverse.

If you add up all of the bank loans in the economy, you will get some number–in our earlier example, that number was $18,000.  If there is a 10 to 1 reserve ratio requirement, then you can know that banks, collectively, will need to hold $1,800–$18,000 divided by 10–in banknote reserves in their vaults to be compliant with the reserve requirement.

Suppose that the central bank purchases assets so that the total quantity of banknotes in the system ends up being something like $3,600.  In aggregate, banks will end up with significantly more banknotes than they need to meet the $1,800 reserve requirement.  Of course, some banks, like Bank #3 in our earlier example, may end up being right up against the reserve requirement, or even in violation of it.  But if that’s true, then other banks will necessarily have an excess supply of banknotes on reserve that can be lent out.  In aggregate, the demand for banknotes–reserves, monetary base, all the same–will be well-quenched, and the rate at which banks lend to each other will be very low–in this case, close to zero.

Now, suppose instead that the central bank sells assets so that the total quantity of banknotes in the system ends up being something like $1,802.  In aggregate, the banking system will have $2 worth of excess funds that won’t need to be held on reserve in vault, and that can be lent out to other banks.  It goes without saying that the cost of borrowing that $2 will be very high, and therefore the probability that the banking system will add another $20 to its aggregate supply of loans (what $2 of extra reserves allows on a 10 to 1 reserve ratio) will be very low.  By shrinking the supply of banknotes down to $1,802, just above what is necessary for the aggregate banking system, with its current quantity of outstanding loans, to be compliant with the reserve requirement, the central bank has successfully discouraged further bank lending.  If the central bank wants, it can even force banks to reduce their outstanding loans.  Just sell assets so that the quantity of reserves falls below $1,800–then, to be compliant with the reserve requirement, banks in aggregate will need to call loans in or sell them to the non-financial sector.
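The reserve-requirement arithmetic in the two scenarios above reduces to a single calculation.  The numbers come from the text; the function itself is my own illustrative sketch:

```python
# Reserve-requirement arithmetic under a 10-to-1 ratio. Required
# reserves are total loans / reserve_ratio; banknotes beyond that are
# excess reserves, and each excess dollar can support `reserve_ratio`
# dollars of additional lending.

def lending_headroom(total_loans, banknotes_in_system, reserve_ratio=10):
    """Return (excess reserves, maximum additional loan expansion)."""
    required = total_loans / reserve_ratio
    excess = banknotes_in_system - required
    return excess, excess * reserve_ratio

# Easy money: the central bank buys assets, pushing banknotes to $3,600.
print(lending_headroom(18_000, 3_600))  # (1800.0, 18000.0) -- ample room

# Tight money: the central bank sells assets, draining banknotes to $1,802.
print(lending_headroom(18_000, 1_802))  # (2.0, 20.0) -- lending choked off
```

Pushing banknotes below the $1,800 required level makes the headroom negative, which corresponds to the case in the text where banks must call in or sell loans to regain compliance.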

Two Misconceptions About the Gold Standard

Before concluding, I want to clear up two misconceptions about the gold standard.  The first misconception relates to the idea that the gold standard somehow caused or exacerbated the Great Depression.  This simply is not true.  What caused and exacerbated the Great Depression, from the Panic of 1930 until FDR’s banking holiday in the spring of 1933, was the unwillingness on the part of the Federal Reserve to lend to solvent but illiquid banks.  The Fed of that era had come to embrace a perverse, moralistic belief that the underlying economy was somehow broken, damaged by promiscuous malinvestment associated with the prior expansion, and that it needed to be allowed to take the painful medicine that it had coming–even if this entailed massive bankruptcies and bank failures.  The “cleansing” would be good in the long run, or so they thought.

The Fed’s refusal to lend to banks facing runs had nothing to do with any constraint associated with the gold standard.  Indeed, the Fed at the time was flush with gold–it held a gold reserve quantity equal to a near-record 80% of the outstanding base money that it had created.  For context, in 1896, the Treasury (which was then in control, prior to the creation of the Fed) let its gold reserves fall to 13% of its outstanding supply of base money.

Not only did the Fed have adequate reserves against which to lend to banks, it potentially could have conducted a large quantitative easing program–while on a gold standard.  The risk, of course, would have been that an economically illiterate public might have tried to protect itself by redeeming gold en masse–then, the Fed would have had to stop, to avoid a contagious process of redemption and a potential default.  This risk–that an economically illiterate public might panic and seek to redeem gold in numbers that exceed what the central bank has on hand–is the only risk that a central bank ever really faces on a gold standard.  Either way, it wouldn’t have made too much of a difference, as the efficacy of QE is highly exaggerated.  An economy in the doldrums can recover without it.  But no economy can recover as long as its banking system is in an acute liquidity crisis.  The Fed had ample power to resolve the liquidity crisis that the financial system was facing at the time, and a clear mandate in its charter to resolve it–but it chose not to, for reasons that had nothing to do with gold.

The second misconception pertains to the idea that the US financial system was somehow on a gold standard after 1933.  It was not.  The gold standard ended in the spring of 1933, when FDR issued Executive Order 6102.  This order made it illegal for individuals within the continental United States to own gold.  If gold can’t be legally owned, then it can’t be legally redeemed.  If it can’t be legally redeemed, then it can’t constrain the central bank.


The gold standard that was in place from the mid 1930s until 1971 was figurative and ceremonial in nature.  The Fed’s gold, which “backed” the dollar, could not be redeemed by the public, therefore the backing had no bite.  It did not effectively constrain the Fed or the money supply.  That much should be obvious–if a gold standard had existed in the 1940s, and had constrained the Fed’s actions, the country would not have been able to finance the massive, record-breaking government deficits of World War 2.  Those deficits were financed almost entirely by Fed money creation.

Now, to be clear, on a fiat monetary system, the market retains the ability to put the central bank “in check.”  Instead of redeeming money directly from the central bank in gold, market participants can “redeem” money by refusing to hold it, choosing instead to hold assets that they think will retain value–land, durables, precious metals, foreign currencies, foreign securities, foreign real estate, etc.  If this happens en masse, and if there is a concomitant monetary expansion taking place alongside it, the result will be an uncontrolled inflation.  The probability that such a rejection will occur is obviously much lower on a fiat system, where the option of gold redemption isn’t there to tempt a run.  But the theoretical power to reject the money as money, which is what the idea of gold redemption formalizes, is still there.
Contrary to the usual assumptions, the fiat monetary system that we currently use is not that different from the gold-based system in use in the early 20th century.  All one has to do to get from such a system to our current system is (1) make everything electronic, and (2) delete the gold.  Just get rid of it, and let the central bank create as much base money as it wants–against nothing, or against gold that, by law, cannot be redeemed (the setup from 1933 to 1971).

The reason not to use monetary systems based on gold is that they are obsolete and unnecessary, with no real benefits over fiat systems, but with many inconveniences and disadvantages. In a fiat system, the central bank can create base money in whatever amount would be economically appropriate to create.  But on a gold-based system, the central bank is forced to create whatever amount of base money the mining industry can mine, and to destroy whatever amount of base money a panicky public wants destroyed.  There’s no reason to accept a system that imposes those constraints, even if they aren’t much of a threat in the majority of economic environments.  If the goal is to constrain the central bank, then constrain it directly, with laws.  Put a legal limit on how much money it can issue, or on what it can purchase.  Alternatively, if you are a developing country that does not enjoy the confidence of the market, peg your currency to the currency of a country that does enjoy that confidence.  There is no need for gold.
