Diversification, Adaptation, and Stock Market Valuation

Looking back at asset class performance over the course of market history, we notice a hierarchy of excess returns.  Small caps generated excess returns over broad equities, which generated excess returns over corporate bonds, which generated excess returns over treasury bonds, which generated excess returns over treasury bills (cash), and so on.  This hierarchy is illustrated in the chart and table below, which show cumulative returns and performance metrics for the above asset classes from January 1926 to March 2017 (source: Ibbotson, CRSP).

(Note: To ensure a fair and accurate comparison between equities and fixed income asset classes, we express returns and drawdowns in real, inflation-adjusted terms.  We calculate volatilities and Sharpe Ratios using real absolute monthly returns, rather than nominal monthly returns over treasury bills.)



The observed hierarchy represents a puzzle for the efficient market hypothesis.  If markets are efficient, why do some asset classes end up being priced to deliver such large excess returns over others?  An efficient market is not supposed to allow investors to generate outsized returns by doing easy things.  Yet, historically, the market allowed investors to earn an extra 4% simply by choosing equities over long-term bonds, and an extra 2% simply by choosing small caps inside the equity space.  What was the rationale for that?

The usual answer given is risk.  Different types of assets expose investors to different levels of risk.  Risk requires compensation, which is paid in the form of a higher return.  The additional 4% that equity investors earned over bond investors did not come free, but represented payment for the increased risk that equity investing entails.  Likewise, the 2% bonus that small cap investors earned over the broad market was compensation for the greater risk associated with small companies.

A better answer, in my view, is that investors didn’t know the future.  They didn’t know that equity earnings and dividends were going to grow at the pace that they did.  They didn’t know that small cap earnings and dividends were going to grow at an even faster pace.  They didn’t know that inflation was going to have the detrimental long-term effects on real bond returns that it had.  And so on.  Amid this lack of future knowledge, they ended up pricing equities to outperform bonds by 4%, and small caps to outperform the broad market by 2%.  Will we see a similar outcome going forward?  Maybe.  But probably not.

Let’s put aside the question of whether differences in “risk”, whatever that term is used to mean, can actually justify the differences in excess returns seen in the above table.  In what follows, I’m going to argue that if they can, then as markets develop and adapt over time, those excess returns should fall.  Risk assets should become more expensive, and the cost of capital paid by risk issuers should come down.

The argument is admittedly trivial.  I’m effectively saying that improvements in the way a market functions should lead to reductions in the costs that those who use it–those who seek capital–have to pay.  Who would disagree?  Sustainable reduction in issuer cost is precisely what “progress” in a market is taken to mean.  Unfortunately, when we flip the point around, and say that the universe of risk assets should grow more expensive in response to improvements, people get concerned, even though the exact same thing is being said.

To be clear, the argument is normative, not descriptive.  It’s an argument about what should happen, given a certain assumption about the justification for excess returns.  It’s not an argument about what actually has happened, or about what actually will happen.  As a factual matter, on average, the universe of risk assets has become more expensive over time, and implied future returns have come down.  The considerations to be discussed in this piece may or may not be responsible for that change.

We tend to use the word “risk” loosely.  It needs a precise definition.  In the current context, let “risk” refer to any exposure to an unattractive or unwanted possibility.  To the extent that such an exposure can be avoided, it warrants compensation.  Rational investors will demand compensation for it.  That compensation will typically come in the form of a return–specifically, an excess return over alternatives that successfully avoid it, i.e., “risk-free” alternatives.

We can arbitrarily separate asset risk into three different types: price risk, inflation risk, and fundamental risk.

Price Risk and Inflation Risk

Suppose that there are two types of assets in the asset universe.

(1) Zero Coupon 10 Yr Government Bond, Par Value $100.

(2) Cash Deposited at an Insured Bank — expected long-term return, 2%.

The question: What is fair value for the government bond?

The proper way to answer the question is to identify all of the differences between the government bond and the cash, and to then settle on a rate of return (and therefore a price) that fairly compensates for them, in total.

The primary difference between the government bond and the cash is that the cash is liquid.  You can use it to buy things, or to take advantage of better investment opportunities that might emerge.  Of course, you can do the same with the government bond, but you can’t do it directly.  You have to sell the bond to someone else.  What will its price in the market be?  How will its price behave over time?  You don’t know.  When you go to actually sell it, the price could end up being lower than the price you paid for it, in which case accessing your money will require you to accept a loss.  We call exposure to that possibility price risk.  The bond contains it, cash does not.  To compensate, the bond should offer an excess return over cash, which is the “price-risk-free” alternative.

To fully dismiss the price risk in a government bond investment, you would have to assume total illiquidity in it.  Total illiquidity is an extreme cost that dramatically increases the excess return necessary to draw an investor in.  That said, price risk is a threat to more than just your liquidity.  It’s a threat to your peace of mind, to your measured performance as an investor or manager, and to your ability to remain in leveraged trades.  And so even if you have no reason to want liquid access to your money, no reason to care about illiquidity, the risk that the price of an investment might fall will still warrant some compensation.

A second category of risk is inflation risk.  Inflation risk is exposure to the possibility that the rate of inflation might unexpectedly increase, reducing the real value of a security’s future payouts.  The cash is offering payouts tied to the short-term rate, which (typically) gets adjusted in response to changes in inflation.  It therefore carries a measure of protection from that risk.  The bond, in contrast, is offering a fixed payout 10 years from now, and is fully exposed to the risk.  To compensate for the difference, the bond should offer an excess return over cash.

Returning to the scenario, let’s assume that you assess all of the differences between the bond and cash, to include the bond’s price risk and inflation risk, and conclude that a 2% excess return in the bond is warranted.  Your estimate of fair value, then, will be $67.55, which equates to a 4% yield-to-maturity (YTM).
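The arithmetic behind that estimate is just zero-coupon discounting; a quick sketch:

```python
# A zero-coupon bond's fair price is its par value discounted at the
# required yield-to-maturity (here, 2% cash return + 2% excess return = 4%).

def zero_coupon_price(par, ytm, years):
    """Discount a single payment at maturity back to the present."""
    return par / (1 + ytm) ** years

price = zero_coupon_price(par=100, ytm=0.04, years=10)
print(round(price, 2))  # -> 67.56 (the $67.55 quoted above)
```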

Fundamental Risk: An Introduction to Lotto Shares

A security is a stream of cash flows and payouts.  Fundamental risk is risk to those cash flows and payouts–the possibility that they might not pay out.  We can illustrate its impact with an example.

In the previous scenario, you estimated fair value for the government bond to be $67.55, 4% YTM.  Let’s assume that you’re now forced to invest your entire net worth into either that bond at that price, or into a new type of security that’s been introduced into the market, a “Lotto Share.”

To reiterate, your choice:

(1) Zero Coupon 10 Yr Government Bonds, 4% YTM, Price $67.55, Par Value $100.

(2) Zero Coupon 10 Yr Lotto Shares, Class “A”.

Lotto Share: A government bond with a random payout.  Lotto Shares are issued in separate share classes.  At the maturity date of each share class, the government flips a fair coin.  If the coin ends up heads (50% chance), the government exchanges each outstanding share in the share class for a payment of $200.  If the coin ends up tails (50% chance), the government makes no exchange, and each outstanding share in the share class expires worthless.

Before you make your choice, note that the Lotto Shares being offered all come from the same class, Class “A.”  All of their payouts will therefore be decided by the same single coin flip, to take place at maturity 10 years from now.

The question:  What is fair value for a Lotto Share?

To answer the question, try to imagine that you’re actually in the scenario, forced to choose between the two options.  What price would Lotto Shares have to sell at in order for you to choose to invest  in them?  Would $67.55 be appropriate?  How about $50?  $25?  $10?  $5?  $1?  One penny?  Is there any price that would interest you?

It goes without saying that your answer will depend on whether you can diversify between the two options.  Having the entirety of your portfolio, or even a sizeable portion thereof, invested in a security that has a 50% chance of becoming worthless represents an enormous risk.  You would need the prospect of an enormous potential reward in order to take it–if you were willing to take it at all.  But if you have the option to invest a much smaller portion of your portfolio in the security–if only for the “fun” of doing so–the potential reward won’t need to be as large.

Assume that you do have the ability to diversify between the two options.  The question will then take on a second dimension: allocation.  At each potential price for Lotto Shares, ranging from zero to infinity, how much of your portfolio would you choose to allocate to them?

Let’s assume that Lotto Shares are selling for the same price as normal government bonds, $67.55.  How much of your portfolio would you choose to put into them?  If you’re like most investors, your answer will be 0%, i.e., nothing.  To understand why, notice that Lotto Shares have the same expected (average) payout as normal government bonds, $100 ($200 * 50% + $0 * 50% = $100).  The difference is that they pay that amount with double-or-nothing risk–at maturity, you’re either going to receive $200 or $0.  That risk requires compensation–an excess return–over the risk-free alternative.  Lotto Shares priced identically to normal government bonds (the risk-free alternative) do not offer such compensation, therefore you’re not going to want to allocate anything to them.  You’ll put everything in the normal government bond.

Now, in theory, we can envision specific situations where you might actually want double-or-nothing risk.  For example, you might need lifesaving medical treatment, and only have half the money needed to cover the cost.  In that case, you’ll be willing to make the bet even without compensation–just flip the damn coin.  If it comes back heads, you’ll survive, if it comes back tails… who cares, you would have died anyways.  Alternatively, you might be managing other people’s money under a perverse “heads-you-win, tails-they-lose” incentive arrangement.  In that case, you might be perfectly comfortable submitting the outcome to a coin flip, without receiving any extra compensation for the risk–it’s not a risk to you.  But in any normal, healthy investment situation, that’s not going to be the case.  Risk will be unwelcome, and you won’t willingly take it on unless you get paid to do so.

Note that the same point holds for price risk and inflation risk.  Prices can go up in addition to down, and inflation can go down in addition to up.  You can get lucky and end up benefitting from having taken those risks.  But you’re not a gambler.  You’re not going to take them unless you get compensated.

The price and allocation question, then, comes down to a question of compensation: at each level of potential portfolio exposure, what expected (or average) excess return over the risk-free alternative (i.e., normal government bonds) is necessary to compensate for the double-or-nothing risk inherent in Lotto Shares?  The following table lists the expected 10 year annualized excess returns for Lotto Shares at different prices.  Note that these are expected returns.  They’re only going to hold on average–in actual practice, you’re going to get double-or-nothing, because the outcome is going to be submitted to only one flip.
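The table’s logic can be reproduced directly.  The expected payout of a Lotto Share is $100 ($200 * 50% + $0 * 50%), so the expected annualized return at a given price is (100 / price)^(1/10) – 1, and the excess return is measured over the normal government bond’s 4% YTM:

```python
# Expected 10-year annualized excess return for a Lotto Share bought at
# a given price, measured over the normal government bond's 4% YTM.

def expected_excess_return(price, expected_payout=100, years=10, bond_ytm=0.04):
    expected_return = (expected_payout / price) ** (1 / years) - 1
    return expected_return - bond_ytm

for price in [67.55, 50, 25, 10, 5]:
    # e.g., a $25 price corresponds to a +10.87% expected excess return
    print(f"${price}: {expected_excess_return(price):+.2%}")
```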


We can pose the price and allocation question in two different directions:

(1) (Allocation –> Price): Starting with an assumed allocation–say, 40%–we could ask: what price and excess return for Lotto Shares would be needed to get you to allocate that amount, i.e., risk that amount in a coin flip?

(2) (Price –> Allocation): Starting with an assumed price–say, $25, an annual excess return of 10.87%–we could ask: how much of your portfolio would you choose to allocate to Lotto Shares, if offered that price?

Up to now, we’ve focused only on fundamental risk, i.e., risk to a security’s cash payouts.  In a real world situation, we’ll need to consider price risk.  As discussed earlier, price risk requires compensation in the form of an excess return over the “price-risk-free” alternative, cash.  But notice that in our scenario, we don’t have the option of holding cash.  Our options are to invest in Lotto Shares or to invest in normal government bonds.  The factor that requires compensation, then, is the difference in price risk between these two options.

Because Lotto Shares carry fundamental risk, their price risk will be greater than the price risk of normal government bonds.  As a general rule, fundamental risk creates its own price risk, because it forces investors to grapple with the murky question of how that risk should be priced, along with the even murkier question of how others in the market will think it should be priced (in the Keynesian beauty contest sense).  Additionally, as normal government bonds approach maturity, their prices will become more stable, converging on the final payment amount, $100.  As Lotto Shares approach maturity, the opposite will happen–their prices will become more volatile, as more and more investors vacillate on whether to stay in or get out in advance of the do-or-die coin flip.

That said, price risk is not the primary focus here.  To make it go away as a consideration, let’s assume that once we make our initial purchases in the scenario, the market will close permanently, leaving us without any liquidity in either investment.  We’ll have to hold until maturity.  That would obviously be a disadvantage relative to a situation where we had liquidity and could sell, but the disadvantage applies equally to both options, and therefore cancels out of the pricing analysis.

Returning to the question of Lotto Share pricing, for any potential investor in the market, we could build a mapping between each possible price for a Lotto Share and the investor’s preferred allocation at that price.  Presumably, at all prices greater than $67.55 (the price of the normal government bond), the investor’s preferred allocation will be 0%.  As the price drops below that level, the preferred allocation will increase, until it hits a ceiling representing the maximum percentage of the portfolio that the investor would be willing to risk in a coin flip, no matter how high the potential payout.  The mappings will obviously differ across investors, determined by their psychological makeups and the specific financial and life circumstances they are in.

I sat down and worked out my own price-allocation mapping, and came up with the table shown below.  The first column is the Lotto Share price.  The second column is my preferred allocation at that price.  The third and fourth columns show the absolute dollar amounts of the excess gains (on heads) and excess losses (on tails) that would be received or incurred if a hypothetical $1,000,000 portfolio were allocated at that percentage:


Working through the table, if I were managing my own $1,000,000 portfolio, and I were offered a Lotto Share price of $65, I would be willing to invest 1%, which would entail risking $14,800 in a coin flip to make $25,969 on heads.  If I were offered a price of $40, I would be willing to invest 5%, which would entail risking $74,000 in a coin flip to make $226,000 on heads.  If I were offered $15, I would be willing to invest 20%, which would entail risking $296,000 in a coin flip to make $2,750,667 on heads.  And so on.

Interestingly, I found myself unwilling to go past 20%.  To put any larger amount at risk, I would need the win-lose odds to be skewed in my favor.  In Lotto Shares, they aren’t–they’re even 50/50.  What’s skewed in my favor is the payout if I happen to win–that’s very different.

The example illustrates the extreme impact that risk-aversion has on asset valuation and asset allocation.  To use myself as an example, you could offer me a bargain basement price of $5 for a Lotto Share, corresponding to a whopping 35% expected annual return over 10 years, and yet if that expected return came with double-or-nothing risk attached, I wouldn’t be willing to allocate anything more than a fifth of my assets to it.
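As an aside, one standard way to formalize a price-allocation mapping like this is the Kelly criterion–a log-utility benchmark, offered here purely for comparison, not as the method behind my table.  Allocate a fraction f to Lotto Shares at a given price; on heads that slice grows by G = 200 / price, on tails it goes to zero, and the remainder grows by the bond factor R = 1.04^10.  Maximizing expected log wealth gives the closed form f* = (G – 2R) / (2(G – R)):

```python
# Kelly (log-utility) allocation to a Lotto Share, with the normal
# government bond as the alternative.  Offered as a benchmark only.

def kelly_fraction(price, payout=200.0, bond_factor=1.04 ** 10):
    G = payout / price  # growth factor of the allocated slice on heads
    R = bond_factor     # growth factor of the bond alternative
    return max(0.0, (G - 2 * R) / (2 * (G - R)))  # never bet without an edge

for price in [65, 40, 15, 5]:
    print(f"${price}: Kelly allocation = {kelly_fraction(price):.1%}")
```

At a price of $15 the Kelly answer is roughly 44%–more than double the 20% ceiling in my table–which is just another way of saying that my mapping embeds more risk aversion than log utility.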

Interestingly, when risk is extremely high, as it is with Lotto Shares, the level of interest rates essentially becomes irrelevant.  Suppose that you wanted to get me to allocate more than 20% of my portfolio to Lotto Shares.  To push me to invest more, you could drop the interest rate on the government bond to 2%, 0%, -2%, -4%, -6%, and so on–i.e., try to “squeeze” me into the Lotto Share by making the alternative look shitty.  But if I’m grappling with the possibility of a 50% loss on a large portion of my portfolio, your tiny interest rate reductions will make no difference at all to me.  They’re an afterthought.  That’s why aggressive monetary policy is typically ineffective at stimulating investment during downturns.  To the extent that investors perceive investments to be highly risky, they will require huge potential rewards to get involved.  Relative to those huge rewards, paltry shifts in the cost of borrowing or in the interest rate paid for doing nothing will barely move the needle.

I would encourage you to look at the table and try to figure out how much you would be willing to risk at each of the different prices.  If you’re like me, as you grapple with the choice, you will find yourself struggling to find a way to get a better edge on the flip, or to somehow diversify the bet.  Unfortunately, given the constraints of the scenario, there’s no way to do either.

Interestingly, if the price-allocation mapping of all other investors in the market looked exactly like mine, Class “A” Lotto Shares would never be able to exceed 20% of the total capitalization of the market.  No matter how much it lowered the price, the government would not be able to issue any more of them beyond that capitalization, because investors wouldn’t have any room in their portfolios for the additional risk.

Adding New Lotto Share Classes to the Market

Let’s examine what happens to our estimate of the fair value of Lotto Shares when we add new share classes to the market.

Assume that three new share classes are added, so that we now have four–“A”, “B”, “C”, “D”.  Each share class matures in 10 years, and pays out $200 or $0 based on the result of a single coin flip.  However, and this is crucial, each share class pays out based on its own separate coin flip.  The fundamental risk in each share class is therefore idiosyncratic–independent of the risks in the other share classes.

To summarize, then, you have to invest your net worth across the following options:

(1) Zero Coupon 10 Yr Government Bonds, 4% YTM, Price $67.55, Par Value $100.

(2) Zero Coupon 10 Yr Lotto Shares, Class “A”.

(3) Zero Coupon 10 Yr Lotto Shares, Class “B”.

(4) Zero Coupon 10 Yr Lotto Shares, Class “C”.

(5) Zero Coupon 10 Yr Lotto Shares, Class “D”.

The question: What is fair value for a Lotto Share in this scenario?

Whatever our fair value estimate happens to be, it should be the same for all Lotto Shares in the market, given that those shares are identical in all relevant respects.  Granted, if the market supplies of the different share classes end up being different, then they might end up trading at different prices, similar to the way different share classes of preferred stocks sometimes trade at different prices.  But, as individual securities, they’ll still be worth the same, fundamentally.

Obviously, if you choose to allocate to Lotto Shares in this new scenario, you’re going to want to diversify your exposure equally across the different share classes.  That will make the payout profile of the investment more attractive.  Before, you only had one share class to invest in–Class “A”.  The payout profile of that investment was a 50% chance of $200 (heads) and a 50% chance of $0 (tails).  If you add a new share class to the mix, so that you have an equal quantity of two in the portfolio, your payout will be determined by two coin flips instead of one–a coin flip that decides your “A” shares and a coin flip that decides your “B” shares.  On a per share basis, the payout profile will then be a 25% chance of receiving $200 (heads for “A”, heads for “B”), a 50% chance of receiving $100 (heads for “A”, tails for “B”, or tails for “A”, heads for “B”), and a 25% chance of receiving $0 (tails for “A”, tails for “B”).  If you add two more share classes to the mix, so that you have an equal quantity of four in the portfolio, the payout profile will improve even further, as shown in the table below.

(Note: The profile follows a binomial distribution.)


In the previous scenario, the question was, what excess return over normal government bonds would Lotto Shares need to offer in order to get you to invest in them, given that the investment has a 50% chance of paying out $200 and a 50% chance of paying out $0?  With four share classes in the mix, the question is the same, except that the investment, on a per share basis, now has a 6.25% chance of paying out $0, a 25% chance of paying out $50, a 37.5% chance of paying out $100, a 25% chance of paying out $150, and a 6.25% chance of paying out $200.  As before, the expected payout is $100 per share.  The difference is that this expected payout comes with substantially reduced risk.  Your risk of losing everything, for example, is no longer 50%.  It’s 6.25%, a far more tolerable number.
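These payout profiles follow directly from the binomial distribution.  A minimal sketch that reproduces the two-class and four-class cases: spread the investment equally across n independent share classes, and if k of the n coin flips land heads, the payout per share is 200 * k / n, with probability C(n, k) / 2^n:

```python
from math import comb

# Per-share payout profile of a Lotto Share investment spread equally
# across n independent share classes (a binomial distribution).

def payout_profile(n):
    return {200 * k / n: comb(n, k) / 2 ** n for k in range(n + 1)}

print(payout_profile(2))  # {0.0: 0.25, 100.0: 0.5, 200.0: 0.25}
print(payout_profile(4))  # $0 -> 6.25%, $50 -> 25%, $100 -> 37.5%, $150 -> 25%, $200 -> 6.25%
```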

Obviously, given the significant reduction in the risk, you’re going to be willing to accept a much lower excess return in the shares to invest in them, and therefore you’ll be willing to pay a much higher price.  In a way, this is a very surprising conclusion.  It suggests that the estimated fair value of a security in a market can increase simply by the addition of other, independent securities into the market.  If you have an efficient mechanism through which to diversify across those securities, you won’t need to take on the same risk in owning each individual one.  But that risk was precisely the basis for there being a price discount and an excess return in the shares–as it goes away, the discount and excess return can go away.

In the charts below, we show the payout profiles for Lotto Share investments spread equally across 100, 1,000, and 10,000 different Lotto Share Classes.  As you can see, the distribution converges ever more tightly around the expected (average) $100 payout per share.




As you can see from looking at this last chart, if you can invest across 10,000 independent Lotto Shares, you can effectively turn your Lotto Share investment into a normal government bond investment–a risk-free payout.  In terms of the probabilities, the cumulative total payout of all the shares (which will be determined by the number of successful “heads” that come up in 10,000 flips), divided by the total number of shares, will almost always end up within a dollar or two of $100.  In an extreme case, the aggregate payout might stray to $98 per share or $102 per share–but the probability of it landing anywhere outside, say, $95 to $105 per share is effectively zero.  And so there won’t be any reason for Lotto Shares to trade at any discount relative to normal government bonds.  The excess returns that had to be priced into them in earlier scenarios, where their risks couldn’t be pooled together, will be able to disappear.
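We can put exact numbers on that convergence.  With n = 10,000 independent share classes, k heads out of n flips gives an average payout of 200 * k / n per share, so the binomial probability that the average lands within a given dollar band of $100 is computable directly:

```python
from math import comb

# Exact binomial probability that the average payout across n
# independent Lotto Share classes lands within +/- band of $100.

def prob_within(band, n=10_000):
    center = n // 2
    spread = int(n * band / 200)  # |k - n/2| <= spread keeps the payout in band
    total = sum(comb(n, k) for k in range(center - spread, center + spread + 1))
    return total / 2 ** n

print(f"within $2 of $100: {prob_within(2):.3f}")   # roughly 0.96
print(f"within $5 of $100: {prob_within(5):.7f}")   # effectively 1
```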

Equities as Lotto Shares

Dr. Hendrik Bessembinder of Arizona State University recently published a fascinating study in which he examined the return profiles of individual equity securities across market history.  He found that the performance is highly positively skewed.  Most individual stocks perform poorly, while a small number perform exceptionally well.  The skew is vividly illustrated in the chart below, which shows the returns of 54,015 non-overlapping samples of 10 year holding periods for individual stocks:


The majority of stocks in the sample underperformed cash.  Almost half suffered negative returns.  A surprisingly large percentage went all the way down to zero.  The only reason the market as a whole performed well was because a small number of “superstocks” generated outsized returns.  Without the contributions of those stocks, average returns would have been poor, well below the returns on fixed income of a similar duration.  To say that individual stocks are “risky”, then, is an understatement.  They’re enormously risky.

As you can probably tell, our purpose in introducing the Lotto Shares is to use them to approximate the large risk seen in individual equity securities.  The not-so-new insight is that by combining large numbers of them together into a single equity investment, we can greatly reduce the aggregate risk of that investment, and therefore greatly reduce the excess return needed to compensate for it.

This is effectively what we’re doing when we go back into the data and build indices in hindsight.  We’re taking the chaotic payout streams of individual securities in the market (the majority of which underperformed cash) and merging them together to form payout streams that are much smoother and better-behaved.  In doing so, we’re creating aggregate structures that carry much lower risk than the actual individual securities that the actual investors at the time were trading.  The fact that it may have been reasonable for those investors to demand high excess returns over risk-free alternatives when they were trading the securities does not mean that it would be similarly reasonable for an investor today, who has the luxury of dramatically improved market infrastructure through which to diversify, to demand those same excess returns.

When we say that stocks should be priced to deliver large excess returns over long-term bonds because they entail much larger risks, we need to be careful not to equivocate on that term, “risk”.  The payouts of any individual stock may carry large risks, but the payouts of the aggregate universe of stocks do not.  As the chart below shows, the aggregate equity payout is a stream of smooth, reasonably well-behaved cash flows, especially when the calamity of the Great Depression (a likely one-off historical event) is bracketed out.

(Note: We express the dividend stream on a real total return basis, assuming each dividend is reinvested back into the equity at market).


In terms of stability and reliability, that stream is capable of faring quite well in a head-to-head comparison with the historical real payout stream of long-term bonds.  Why then, should it be discounted relative to bonds at such a high annual rate, 4%?

A similar point applies to the so-called “small cap” risk premium.  As Bessembinder’s research confirms, individual small company performance is especially skewed.  The strict odds of any individual small company underperforming, or going all the way to zero, are very high–much higher than for large companies.  Considered as isolated individual investments, then, small companies merit a substantial price discount, a substantial excess return, over large companies.  But when their risks are pooled together, the total risk of the aggregate goes down.  To the extent that investors have the ability to efficiently invest in that aggregate, the required excess return should come down as well.

The following chart shows the historical dividend stream (real total return basis) of the smallest 30% of companies in the market alongside that of the S&P 500 from January 1928 to March 2017:


Obviously, pooling the risks of individual small caps together doesn’t fully eliminate the risk in their payouts–they share a common cyclical risk, reflected in the volatility of the aggregate stream.  If we focus specifically on the enormous gash that took place around the Great Depression, we might conclude that a 2% discount relative to large caps is appropriate.  But when we bracket that event out, 2% starts to look excessive.


Progress in Diversification: Implications for the Cost of Capital

In the earlier scenarios, I told you up front that each class of Lotto Shares has a 50% chance of paying out $200.  In an actual market, you’re not going to get that information so easily.  You’re going to have to acquire it yourself, by doing due diligence on the individual risk asset you’re buying.  That work will translate into time and money, which will subtract from your return.

To illustrate, suppose that there are 10,000 Lotto Share Classes in the market: “A”, “B”, “C”, “D”, “E”, etc.  Each share class pays out P(A), P(B), P(C), P(D), P(E), etc., with independent probabilities Pr(A), Pr(B), Pr(C), Pr(D), Pr(E), etc.  Your ability to profitably make an investment that diversifies among the different share classes is going to be constrained by your ability to efficiently determine what all of those numbers are.  If you don’t know what they are, you won’t have a way to know what price to pay for the shares–individually, or in a package.

Assume that it costs 1% of your portfolio to determine each P and Pr for an individual share class.  Your effort to put together a well-diversified investment in Lotto Shares, an investment whose payout mimics the stability of the normal government bond’s payout, will end up carrying a large expense.  You will either have to pay that expense, or accept a poorly diversified portfolio, with the increased risk.  Both disadvantages can be fully avoided in a government bond, and therefore to be willing to invest in the Lotto Share, you’re going to need to be compensated.  As always, the compensation will have to come in the form of a lower Lotto Share price, and a higher return.
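The trade-off can be sketched under the scenario’s stated assumption (1% of the portfolio per share class researched): researching n classes shrinks the standard deviation of the average payout per share to $100 / √n, but the research bill grows linearly with n:

```python
from math import sqrt

# Diversification vs. research cost, assuming due diligence costs 1%
# of the portfolio per Lotto Share class (the scenario's assumption).

def research_tradeoff(n, cost_per_class=0.01):
    payout_std = 100 / sqrt(n)          # std. dev. of average payout per share
    research_cost = n * cost_per_class  # fraction of the portfolio spent
    return payout_std, research_cost

for n in [1, 4, 25, 100]:
    std, cost = research_tradeoff(n)
    print(f"{n:>3} classes: payout std ${std:6.1f}, research cost {cost:.0%}")
```

By 100 classes the research bill has consumed the entire portfolio–which is precisely why mechanisms for pooling the expense matter.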

Now, suppose that the market develops mechanisms that allow you to pool the costs of building a diversified Lotto Share portfolio together with other investors.  The cost to you of making a well-diversified investment will come down.  You’ll therefore be willing to invest in Lotto Shares at higher prices.

Even better, suppose that the investment community discovers that it can use passive indexing strategies to free-load on the fundamental Lotto Share work that a small number of active investors in the market are doing.  To determine the right price to pay, people come to realize that they can drop all of the fretting over P, Pr, and so on, and just invest across the whole space, paying whatever the market is asking for each share–and that they won’t “miss” anything in terms of returns.  The cost of diversification will come down even further, providing a basis for Lotto Share prices to go even higher, potentially all the way up to the price of a normal government bond, a price corresponding to an excess return of 0%.

The takeaway, then, is that as the market builds and popularizes increasingly cost-effective mechanisms and methodologies for diversifying away the idiosyncratic risks in risky investments, the price discounts and excess returns that those investments need to offer, in order to compensate for the costs and risks, come down.  Very few would dispute this point in other economic contexts.  Most would agree, for example, that the development of efficient methods of securitizing mortgage lending reduces the cost to lenders of diversifying, and therefore provides a basis for reduced borrowing costs for homeowners–that’s its purpose.  But when one tries to make the same argument in the context of stocks–that the development of efficient methods to “securitize” them provides a basis for their valuations to increase–people object.

In the year 1950, the average front load on a mutual fund was 8%, with another 1% annual advisory fee added in.  Today, given the option of easy indexing, investors can get convenient, well-diversified exposure to many more stocks than would have been in a mutual fund in 1950, at a cost near 0%.  This significant reduction in the cost of diversification warrants a reduction in the excess return that stocks are priced to deliver, particularly over safe assets like government securities that don’t need to be diversified.  Let’s suppose that, with all factors included, the elimination of historical diversification costs ends up being worth 2% per year in annual return.  Parity would then suggest that stocks should offer a 2% excess return over government bonds, not the historical 4%.  Their valuations would have a basis to rise accordingly.

Now, to clarify.  My argument here is that the ability to broadly diversify equity exposure in a cost-effective manner reduces the excess return that equities need to offer in order to be competitive with safer asset classes.  In markets where such diversification is a ready option–for example, through low-cost indexing–valuations deserve to go higher. But that doesn’t mean that they actually will go higher.  Whether they actually will go higher is not determined by what “deserves” to happen, but by what buyers and sellers actually choose to do, what prices they agree to transact at.  They can agree to transact at whatever prices they want.

The question of whether the increased availability and popularity of equity securitization has caused equity valuations to go higher is an interesting question.  In my view, it clearly has.  I would offer the following chart as circumstantial evidence.


Notice the large, sustained valuation jump that took place in the middle of the 1990s. Right alongside it, there was a large, sustained jump in the percentage of the equity market invested through mutual funds and ETFs.  Correlation is not causation, but there are compelling reasons to expect a relationship in this case.  The increased availability and popularity of vehicles that allow for cheap, convenient, well-diversified market exposure increase the pool of money inclined to bid on equities as an asset class–not only during the good times, but also when buying opportunities arise.  It’s reasonable to expect that the result would be upward pressure on average valuations across the cycle, which is exactly what we’ve seen.

History: The Impact of Learning and Adaptation

One problem with using Lotto Shares as an analogy to risk assets, equities in particular, is that Lotto Shares have a definite payout P and a definite probability Pr that can be known and modeled.  Risk assets don’t have that–the probabilities around their payouts are themselves uncertain, subject to unknown possibility.  That uncertainty is risk–in the case of equities, it’s a substantial risk.

If we’re starting out from scratch in an economy, and looking out into the future, how can we possibly know what’s likely to happen to any individual company, or to the corporate sector as a whole?  How can we even guess what those probabilities are?

But as time passes, a more reliable recorded history will develop, a set of known experiences to consult.  As investors, we’ll be able to use that history and those experiences to better assess what the probabilities are, looking out into the future.  The uncertainty will come down–and with it the excess return needed to justify the risks that we’re taking on.

We say that stocks should be expensive because interest rates are low and are probably going to stay low forever.  The rejoinder is: “Well, they were low in the 1940s and 1950s, yet stocks weren’t expensive.”  OK, but so what?  Why does that matter?  All it means is that, in hindsight, investors in the 1940s and 1950s got valuations wrong.  Should we be surprised?

Put yourself in the shoes of an investor in that period, trying to determine what the future for equities might look like.  You have the option of buying a certain type of security, a “stock”, that pays out company profits.  In the aggregate, do you have a way to know what the likely growth rates of those profits will be over time?  No.  You don’t have data.  You don’t have a convenient history to look at.  Consequently, you’re not going to be able to think about the equity universe in that way.  You’re going to have to stay grounded at the individual security level, where the future picture is going to be even murkier.

In terms of price risk, this is what the history of prices will look like from your vantage point:


Judging from the chart, can you reliably assess the risk of a large upcoming drop?  Can you say, with any confidence, that if a drop like the one that happened 20-odd years ago happens again, it will be recovered in due course?  Sure, you might be able to take solace in the fact that the dividend yield, at 5.5%, is high.  But high according to whom? High relative to what?  The yield isn’t high relative to what it was just a few years ago, or to what it was after the bubble burst.  One can easily envision cautious investors pointing that out to you.  Something like this, taken right out of that era:


Now, fast forward to the present day.  In terms of estimating future growth rates and returns on investment, you have the chart below, a stream of payouts that, on a reinvested basis, has grown at a 6% average real rate over time, through the challenges of numerous economic cycles, each of which was different in its own way.  Typical investors may not know the precise number, but they’re aware of the broader historical insight, which is that equities offer the strongest long-term growth potential of any asset class, that they’re where investors should want to be over the long haul: “Stocks For The Long Run.”  That insight has become ingrained in the financial culture.  One can say that its prevalence is just another symptom of the “bubble” that we’re currently in, but one has to admit that there’s at least some basis for it.


Now, I’ll be the first to acknowledge that the 6% number is likely to be lower going forward. In fact, that’s the whole point–equity returns need to be lower, to get in line with the rest of the asset universe.  The mechanism for the lower returns, in my view, is not going to be some kind of sustained mean-reversion to old-school valuations, as the more bearishly inclined would predict.  Rather, it’s going to come directly from the market’s expensiveness itself, from the fact that dividend reinvestments, buybacks and acquisitions will all be taking place at much higher prices than they did in the past.  On the assumption that current valuations hold, I estimate that long-term future returns will be no more than 4% real.  To get that number, I recalculate the market’s historical prices based on what they would have been if the market had always traded at its current valuation–a CAPE range of 25 to 30.  With the dividends reinvested at those higher prices, I then calculate what the historical returns would have been.  The answer: 4% real, reflecting the impact of the more expensive reinvestment, which leads to fewer new shares purchased, less compounding, and a lower long-term return.  Given the prices current investors are paying, they have little historical basis for expecting to earn any more than that.  If anything, they should expect less.  The return-depressing effect of the market’s present expensiveness is likely to be amplified by the fact that there’s more capital recycling taking place today–more buybacks, acquisitions, etc., all at expensive prices–and less growth-producing real investment.  So 4% represents a likely ceiling on returns, not a floor.
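The reinvestment mechanics behind that estimate can be sketched with a toy model (my own construction; the growth rate, payout ratio, multiples, and horizon are illustrative assumptions, not the article’s actual CAPE recalculation): hold the valuation multiple constant, grow earnings at a fixed real rate, and reinvest each year’s dividend at the prevailing price. The higher the multiple, the fewer new shares each reinvested dividend buys, and the lower the compound return.

```python
def annualized_real_return(years, growth, multiple, payout_ratio):
    """Toy model: constant valuation multiple, constant real earnings growth,
    dividends reinvested each year at the prevailing (multiple-set) price.
    Returns the annualized real total return."""
    eps, shares = 1.0, 1.0
    for _ in range(years):
        eps *= 1 + growth
        price = multiple * eps                  # valuation multiple held fixed
        dividend = payout_ratio * eps * shares
        shares += dividend / price              # richer price -> fewer new shares
    start_value = multiple * 1.0                # price per share at the start
    final_value = shares * multiple * eps
    return (final_value / start_value) ** (1 / years) - 1

cheap = annualized_real_return(50, 0.02, 15, 0.6)   # lower multiple, ~6.1% real
rich = annualized_real_return(50, 0.02, 27, 0.6)    # higher multiple, ~4.3% real
```

In this setup the return collapses to roughly earnings growth plus the dividend yield (payout ratio divided by the multiple), so nearly doubling the multiple strips about two points off the long-term return–the same direction of effect as the article’s 6%-to-4% estimate.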

Regardless of the specific return estimate that we settle on, the point is, today, the facts can be known, and therefore things like this can be realistically modeled–not with anything close to certainty, but still in a way that’s useful to constrain the possibilities.  Investors can look at a long history of US equity performance, and now also at the history of performance in other countries, and develop a picture of what’s likely to happen going forward.  In the distant past, investors did not have that option.  They had to fly blind, roll the dice on this thing called the “market.”

In terms of price risk, this is what your rear view mirror looks like today:


Sure, you might get caught in a panic and lose a lot of money.  But history suggests that if you stick to the process, you’ll get it back in due course.  That’s a basis for confidence. Importantly, other investors are aware of the same history that you’re aware of, they’ve been exposed to the same lessons–“think long-term”, “don’t sell in a panic”, “stocks for the long run.”  They therefore have the same basis for confidence that you have.  The result is a network of confidence that further bolsters the price.  Panics are less likely to be seen as reasons to panic, and more likely to be seen as opportunities to be taken advantage of. Obviously, panics will still occur, as they must, but there’s a basis for them to be less chaotic, less extreme, less destructive than they were in market antiquity.

Most of the historical risk observed in equities is concentrated around a single event–the Great Depression.  In the throes of that event, policymakers faced their own uncertainties–they didn’t have a history or any experience that they could consult in trying to figure out how to deal with the growing economic crisis.  But now they do, which makes it extremely unlikely that another Great Depression will ever be seen.  We saw the improved resilience of the system in the 2008 recession, an event that had all of the necessary ingredients to turn itself into a new Great Depression.  It didn’t–the final damage wasn’t even close to being comparable.  Here we are today, doing fine.

An additional (controversial) factor that reduces price risk relative to the past is the increased willingness of policymakers to intervene on behalf of markets.  Given the lessons of history, policymakers now have a greater appreciation for the impact that market dislocations can have on an economy.  Consequently, they’re more willing to actively step in to prevent dislocations from happening, or at least craft their policy decisions and their communications so as to avoid causing dislocations.  That was not the case in prior eras.  The attitude towards intervention was moralistic rather than pragmatic.  The mentality was that even if intervention might help, it shouldn’t happen–it’s unfair, immoral, a violation of the rules of the game, an insult to the country’s capitalist ethos. Let the system fail, let it clear, let the speculators face their punishments, economic consequences be damned.

To summarize: over time, markets have developed an improved understanding of the nature of long-term equity returns.  They’ve evolved increasingly efficient mechanisms and methodologies through which to manage the inherent risks in equities.  These improvements provide a basis for average equity valuations to increase, which is something that has clearly been happening.


A Value Opportunity in Preferred Stocks


The current market environment is made difficult by the fact that investors have nowhere that they can go to confidently earn a decent return.  There are no good deals to be found anywhere, in any area of the investment universe.  Some see that as a failure of markets, but I see it as an achievement.  A market that is functioning properly should not offer investors good deals.  When adjusted for risk, every deal that’s on the table should be just as good as every other.  If any deal shows itself to be any better than any other, market participants should notice it and quickly take it off the table.

We live in a world in which there is a large demand for savings, but a small relative supply of profitable new investment opportunities to deploy those savings into.  We can debate the potential causes of this imbalance–aging demographics, falling population growth, stagnation in innovation, zero-sum substitution of technology for labor, globalization, rising wealth inequality, excessive debt accumulation, and so on.  But the effect is clear: central banks have to set interest rates at low levels in order to stimulate investment, encourage consumption, and maintain sufficient inflationary pressure in the economy.  The tool they use may not work very well, but it’s the only tool they have.

Low interest rates, of course, mean low returns for whoever decides to hold the economy’s short-term money.  In a properly functioning market, those low returns should not stay contained to themselves.  They should propagate out and infect the rest of the investment universe.  And that’s exactly what we’ve seen them do.  As it’s become clear that historically low interest rates are likely to persist long out into the future–and quite possibly forever–every item on the investment menu has become historically expensive.

Thinking concretely, what types of things can a value-conscious investor do to cope with the current environment?  Personally, I can only think of two things: (1) Figure out a way to time the market, or (2) Try to find places inside the market where value still exists. With respect to the first, market timing, I already shared my best idea, which is to go to cash when both the price trend and the trend in economic fundamentals are negative, and to be long equities in all other circumstances–regardless of valuation.  That approach continues to work–it’s still long the market, and hasn’t fallen prey to any of the usual fake-outs (fears of recession, concerns about valuation, etc.).  With respect to the second, finding value inside the market, I think I know of a good place.  That’s what this piece is going to be about.
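For concreteness, the timing rule just described–go to cash only when both trends are negative, stay long in every other case–reduces to a pair of boolean checks. This is my own minimal sketch: the moving-average comparison and the choice of fundamental series (e.g., a payrolls or production index) are stand-in assumptions, not the author’s exact specification.

```python
def timing_position(price, price_moving_avg, fundamental, fundamental_prior):
    """'cash' only when BOTH the price trend and the fundamental trend are
    negative; 'long' in every other circumstance, regardless of valuation."""
    price_trend_negative = price < price_moving_avg
    fundamental_trend_negative = fundamental < fundamental_prior
    if price_trend_negative and fundamental_trend_negative:
        return "cash"
    return "long"
```

Note the asymmetry: a falling price alone, or deteriorating fundamentals alone, is not enough to exit–which is what lets the rule sidestep the usual fake-outs the paragraph mentions.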

The specific part of the market that I’m going to look at is the space of preferred stocks, a space riddled with inefficiencies.  There are two individual securities in that space that I consider to be attractive values: two large bank convertible preferred issues.  At current prices, they both yield around 6.15%.  They carry very little credit risk, they can’t be called in, and their dividends are tax-advantaged.  The fact that they could be priced so attractively in a market filled with so much mediocrity is proof that markets are not always efficient.

I should say at the outset that I don’t have a strong view on the near-term direction of long-term interest rates.  My bias would be to bet against the consensus that they’re set to rise appreciably from here, but I can’t make that bet with any confidence.  If they do rise appreciably, the securities that I’m going to mention will perform poorly, along with pretty much everything else in the long-term fixed income space.  So if that’s your base case, don’t interpret my sharing them as any kind of recommendation to buy.  Treat them instead as ideas to put on a fixed income shopping list, to consult when the time is right.

The piece has five sections (click on the hyperlinks below to fast-forward to any of them):

  • In the first section, I explain how preferred stocks work. (Highlight: A helpful “simultaneous trade” analogy that investors can use in thinking about and evaluating the impact of callability.)
  • In the second section, I analyze the valuation of preferred stocks as a group, comparing their present and historical yields to the yields on high yield corporate, investment grade corporate, emerging market USD debt, and treasury debt.  I also quantify the value of the embedded tax-advantage they offer. (Highlight: Tables and charts comparing yields and spreads on different fixed income sectors.  Periods examined include 1997 to 2017 and 1910 to 1964.)
  • In the third section, I discuss the unique advantages that financial preferred stocks offer in the current environment. (Highlight: A chart of the Tangible Common Equity Ratios of the big four US banks, showing just how strong their balance sheets are at present.)
  • In the fourth section, I introduce the two preferred stocks and examine the finer details of their structures. (Highlight: A price chart and a table that simplifies all of the relevant information)
  • In the fifth section, I make the case for why the two preferred stocks are attractive values.  I also offer possible reasons why the market has failed to value them correctly, looking specifically at issues associated with duration, supply, and index exclusion. (Highlight: I look at one of the most expensive fixed income securities in the entire US market–a 1962 preferred issue of a major railroad company that still trades to this day.  I discuss how supply-related distortions have helped push it to its currently absurd valuation.)

Preferred Stocks: A Primer

Recall that a common stock is a claim on the excess profits of a corporation, which are ultimately paid out as dividends over time.  A common stock is also a claim of control over the company’s activities, expressed through voting rights.  A preferred stock, in contrast, is a claim to receive fixed periodic dividend payments on the initial amount of money delivered to the company in the preferred investment–the “par” value of each preferred share.  Such a claim typically comes without any voting rights, but voting rights can sometimes be triggered if the promised payments aren’t made.  In a liquidation, preferred stock is senior to common stock, but subordinate to all forms of debt.

Importantly, a preferred stock’s claim to dividends is contingent upon the company actually being able to make the promised payments.  If the company can’t make those payments, it won’t go into default like it would for a missed bond payment.  Rather, it will simply be prohibited from paying out dividends to its common shareholders, and also from repurchasing any of its common shares.  This constraint is what makes preferred shares worth something as pieces of paper.  If a company fails to fulfill its obligations to its preferred shareholders, its common shareholders will have no prospect of earning cash flows on their investments, and therefore their shares–their pieces of paper–won’t carry value.

A preferred share can be cumulative or non-cumulative.  When a preferred share is cumulative, any past missed dividend payments, going all the way back to the share’s date of issuance, have to be paid in full before any common dividends can be paid or any common shares bought back.  When a preferred share is non-cumulative, this restraint is narrowed to a given period of time, usually a calendar quarter.  The company cannot pay dividends in a given calendar quarter or buy back shares in that quarter unless all preferred dividends owed for that quarter have been paid.

Preferred shares usually come with a call feature that allows the company to buy them back at par after some specified date.  The best way to conceptualize the impact of this feature is to think of a callable preferred share as representing two separate investment positions.  First, the preferred share itself, a perpetual security that pays out some fixed yield.  Second, a call option that is simultaneously sold on those shares.  When you buy a callable preferred, you’re effectively putting yourself into both types of trades–you’re purchasing a perpetual fixed income security, and you’re simultaneously selling a call option against it at a strike price of par, exercisable after some specified date.

The existence of a call option on a preferred share significantly complicates its valuation.  For an illustration, let’s compare the case of a non-callable share with the case of a callable one.  In the first case, suppose that a company issues a non-callable 4% preferred share to an investor at a par value of $25.  Shortly after issuance, yields on similar securities fall from 4% to 3%.  The share has to compete with those securities, and so its price should rise to whatever price offers a 3% yield, matching theirs.  In the current case, that price would be $33 (logic: $1 / $33 = 3%).  But now suppose that the share comes with a call option that allows the company to redeem it at par, $25, in five years.  With the impact of the call option added in, a price of $33 will no longer make sense.  If an investor were to buy at that price, and the security were to eventually be called in at par, $25, she would lose $8 per share on the call ($33 – $25 = $8).  Instead of being 3%, her total return would end up being negative.

For any assumed purchase price, then, the investor has to incorporate the call–both its impact on the total return if exercised, and its likelihood of being exercised–into the estimate of the total return.  In the above scenario, if we assume that the call option becomes exercisable 5 years from now, and that it will, in fact, be exercised, then the right price for the shares, the price that implies a 3% yield competitive with the rest of the market, is not $33, but rather $26.16.  At that purchase price, the $5 of dividends that will be collected over the 5 years until the call date, minus the $1.16 that will be lost from the purchase price when the shares are called in at $25, will produce a final total return that annualizes to 3%, equal to the prevailing market rate.
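The $26.16 figure can be reproduced by discounting the dividends and the par redemption back from the call date. A quick sketch (my own; it assumes quarterly dividend payments discounted at a quarterly-compounded 3% market rate, which is what the quoted figure appears to imply):

```python
def callable_price(annual_div, par, years_to_call, market_rate, freq=4):
    """Price at which a callable preferred's yield-to-call equals market_rate,
    assuming `freq` payments per year and a call at par on the call date."""
    per_div = annual_div / freq
    per_rate = market_rate / freq
    n = years_to_call * freq
    pv_divs = sum(per_div / (1 + per_rate) ** t for t in range(1, n + 1))
    pv_call = par / (1 + per_rate) ** n
    return pv_divs + pv_call
```

`callable_price(1.0, 25.0, 5, 0.03)` comes out to about $26.16; at a 4% market rate (equal to the coupon) the same share prices right at par, as it should.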

Now, for some definitions.  The “current yield” of a security is its annual dividend divided by its market price.  The “yield-to-call” of a callable security is the total return that it will produce on the assumption that the investor holds it until the call date, at which point it gets called in.  The “yield-to-worst” of a callable security is the lesser of its current yield and its yield-to-call.  This yield is referred to as a yield to “worst” because it represents the worst case total return that an investor can expect to earn if she holds to maturity–assuming, of course, that the shares pay out as promised.
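These definitions translate directly into code. A sketch (my own; it reuses the quarterly-payment assumption from the pricing example and solves for the yield-to-call by bisection on the discount rate):

```python
def yield_to_call(price, annual_div, par, years_to_call, freq=4):
    """Annualized rate at which the discounted dividends plus the par
    redemption equal the purchase price (bisection on the per-period rate)."""
    per_div = annual_div / freq
    n = years_to_call * freq

    def present_value(rate):
        return (sum(per_div / (1 + rate) ** t for t in range(1, n + 1))
                + par / (1 + rate) ** n)

    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > price:
            lo = mid   # present value too high -> rate guess too low
        else:
            hi = mid
    return (lo + hi) / 2 * freq

def yield_to_worst(price, annual_div, par, years_to_call, freq=4):
    """Lesser of the current yield and the yield-to-call."""
    return min(annual_div / price,
               yield_to_call(price, annual_div, par, years_to_call, freq))
```

For the share in the example above, bought at roughly $26.16, the current yield is about 3.8% but the yield-to-call, and hence the yield-to-worst, is 3%; below par the ordering flips, and the current yield becomes the worst case.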

Companies typically decide whether or not to call in preferred shares based on whether they can get better rates in the market by issuing out new ones (and the new issuance need not be preferred–it could be debt or even other forms of equity, if the cost to the company is less).  For that reason, legacy preferred shares that were issued at yields substantially higher than the current market yield tend to behave like short-term fixed income securities.  Because their costs to the company are so much higher than the current market cost, the investor can be confident that the company will call them in on the call date.  Instead of treating them as long-term securities, then, she can treat them as securities that will soon mature at par.

As with a bond, we can separate the risk inherent in preferred shares into credit risk and interest rate risk.  The credit risk is the risk that the company will not be able to make the promised payments on the shares.  The interest rate risk is the risk that prevailing market interest rates on similar securities will change, causing the price of the security in question to change.

Looking more closely at this second risk, callable securities suffer from a unique disadvantage.  When interest rates rise after issuance, they behave like normal fixed income securities.  They fall in price, imposing losses on investors, until their market yields increase to a value that’s competitive with the new higher rates.  But, as we saw in the earlier example, when interest rates fall after issuance, callable securities are not able to rise to the same extent.  That’s because, as they go above par, the potential of a loss on a call is introduced, a loss that will subtract from the total return.  To compound the situation, as interest rates fall, a loss on a call becomes more likely, because calling the shares in and replacing them with new ones becomes more attractive to the company, given the better available rates.

Because the company has a call option that it can (and will) use to its own benefit (and to the shareholder’s detriment as its counterparty), preferred shares end up offering all of the potential price downside of long-term fixed income securities, with only a small amount of the potential price upside.  When it’s bad to be a long-term bond, they act like long-term bonds.  When it’s good to be a long-term bond, they morph into short-term bonds, and get called in.  Now, you might ask, given this unfavorable skew, why would anyone want to own callable preferred shares?  The answer, of course, is that every security makes sense at some price.  Callable preferred shares do not offer the upside of non-callable long-term fixed income securities, but to compensate, they’re typically priced to offer other advantages, such as higher current yields.

Importantly, when a preferred share is trading at a high current yield relative to the market yield, the investor receives a measure of protection from the impact of rising interest rates (or, if we’re focused on real returns, the impact of rising inflation).  If interest rates rise, one of two things will happen, both of which are attractive to the shareholder.  Either the shares will not be called in, and she will actually get to earn that high current yield over time (which she would not have otherwise gotten to earn), or the shares will be called in, and she will get pulled out of the security, at which point she will be able to take her money and go invest in a better deal.

Preferred Stocks: Assessing the Valuations

The following chart shows the average yield-to-worst (YTW) of preferred stocks alongside the average YTWs of other fixed income asset classes from January 31, 1997 to January 31, 2017, the latest date for which preferred YTW information is available:


(“Preferred” = BAML US Preferred, Bank Capital, and Capital Securities index, “HY Corp” = Barclays US Corporate High-Yield Index, “IG Corp” = Barclays US Corporate Index, “EM USD” = Barclays Emerging Market USD Index, “10 Yr Tsy” = 10-Year Treasury Constant Maturity Rate, FRED: DGS10)

Some might challenge this chart on the grounds that preferred stocks are perpetual securities that shouldn’t be compared to bonds, which have maturity dates.  The point would be valid if we were evaluating preferred stocks on their current yields.  But we’re not.  We’re looking specifically at yields-to-worst, which assume that all preferred stocks trading above par get called in on some future date (typically inside of a 5 year period).  On that assumption, preferred stocks as a group are not perpetual, but have some average finite term, like bonds.  Note that if we were to treat preferred stocks as perpetual securities, the yields shown in the chart would be current yields, which are meaningfully higher than YTWs.  For perspective, as of January 31, the current yield for preferreds was 5.53%, versus the 4.78% YTW shown in the chart.

That said, the chart is admittedly susceptible to distortions associated with the fact that the average durations and average credit qualities of the different asset classes may have changed over time, impacting what would be an “appropriate” yield for each of them in any given period.  There’s no easy way to eliminate that susceptibility, but I would argue that any potential distortion is likely to be small enough to allow the chart to still offer a general picture of where valuations are.

Let’s look more closely at spreads between preferreds and other fixed-income asset classes.  The following two charts show YTW spreads of high-yield and EM USD debt over preferreds.  As you can see, spreads have come down substantially and are now well below the average for the period, indicating that preferreds have become cheaper on a relative basis:



The following charts show YTW spreads of preferreds over investment-grade corporates and US treasuries.  As you can see, spreads over corporates have increased and are slightly higher than the average, again indicating that preferreds have become cheaper on a relative basis.  Versus treasuries, spreads are roughly in line with the average (with the average having been pushed up significantly by the temporary spike that occurred in 2008).



The above data is summarized in the following table:


The conclusion, then, is that US preferred stocks are priced attractively relative to the rest of the fixed income space.  They aren’t screaming bargains by any means, but they look better than the other options.  They also look better than US equities, which are trading at nosebleed levels, already well above the peak valuations of the prior cycle.

Now, in comparing yields on these asset classes, we’ve failed to consider an important detail.  Preferred dividends are paid out of corporate profits that have already been taxed by the federal government at the corporate level.  They are therefore eligible for qualified federal dividend tax rates–15% for most investors, and 23.8% for the top bracket of earners.  Bond income, in contrast, is deducted from corporate revenues as interest expense, and therefore does not get taxed by the federal government at the corporate level. It’s therefore taxed at the ordinary income rate–28% for most investors, and 43.4% for the top bracket.  Though often missed in comparisons between bond and preferred income, this difference is huge.

The following table shows the current tax-equivalent YTW of preferred shares versus the YTWs of the other fixed income categories.  For top earners, the tax advantage gives preferred shares an additional 166 bps in pre-tax yield; for normal earners, an additional 86 bps.
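The basis-point figures can be reproduced from the tax rates quoted above, applied to the 4.78% preferred YTW cited earlier. A sketch (my own; `tax_equivalent_yield` is a hypothetical helper name, not from the article):

```python
def tax_equivalent_yield(pref_yield, qualified_rate, ordinary_rate):
    """Ordinary-income-taxed yield that leaves the same after-tax income
    as a preferred yield taxed at the qualified dividend rate."""
    return pref_yield * (1 - qualified_rate) / (1 - ordinary_rate)

PREF_YTW = 0.0478  # preferred yield-to-worst as of January 31

top = tax_equivalent_yield(PREF_YTW, 0.238, 0.434)   # top bracket: ~6.44%
normal = tax_equivalent_yield(PREF_YTW, 0.15, 0.28)  # most investors: ~5.64%
```

The gap `top - PREF_YTW` works out to about 166 bps and `normal - PREF_YTW` to about 86 bps, matching the figures above; the two tax-equivalent yields also bracket the 5.64% to 6.44% range quoted later in the piece.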


The significance of this advantage should not be understated.  With pension assets included, over 60% of all U.S. household financial assets are exposed to income taxation (source: FRB Z.1 L.117.24/L.101.1).  Of that 60%, a very large majority is owned by high-net-worth individuals that pay taxes at the top rates.  Preferreds effectively allow them to cut those rates in half.

Right now, there’s no shortage of people highlighting the fact that U.S. common equity, represented by the S&P 500 index, is extremely expensive, trading at valuations that are multiple standard-deviations above historical averages.  But here’s an interesting piece of information.  With respect to preferred equity, the situation is somewhat reversed.  In past eras, particularly the period from 1937 to 1964, preferreds traded at very low yields.  Today’s yields can easily beat those yields, especially when the tax-advantage, which only came into effect in 2003, is taken into account.  Prior to 2003, dividends were taxed at normal income rates, including during those periods when capital gains were taxed preferentially.

The following chart shows preferred yields of NYSE stocks from 1910 to 1964 (source: FRED M13048USM156NNBR).


Today’s tax-equivalent yield range of 5.64% to 6.44% is above the 5.05% average from 1910 to 1964, and significantly above the 4.2% average seen from 1937 to 1964, the latter half of the period.  I’ve seen many investors pine over the attractive equity valuations seen in the 1940s and 1950s, wishing it were possible to buy at those valuations today.  The good news, of course, is that it is possible, provided we’re talking about preferred equity! 😉

Advantages of Financial Preferred Stocks

In market antiquity, preferred shares were very popular.  For a fun illustration of their popularity, consider the following advertisement taken from a financial magazine published in 1928.  The recommended allocation to preferred stocks is 30%, the same as the bond allocation. Today, financial advisors tend to recommend a much smaller preferred allocation, if they recommend any at all.


The only entities in the current market with any real reason to issue preferred shares are depositary financial institutions–i.e., banks.  Preferred shares are attractive to banks because they count as Tier 1 capital under Basel rules.  Banks can use them to raise Tier 1 capital and meet minimum Tier 1 capital requirements without having to dilute common shareholders.  From a regulatory perspective, the reason preferred shares are treated as capital, and not as debt liabilities, is that a failure to make good on their promised payments will not trigger a default, an event with the potential to destabilize the banking system.  Rather, a failure on the part of a bank to pay its preferred shareholders will simply mean that its common shareholders can’t be paid anything.  The activation of that constraint will surely matter to common shareholders, but it need not matter to anyone else in the system.

From a shareholder’s perspective, financial preferred shares have a number of unique features that make them attractive.  These include:

(1) Counterbalancing Sources of Risk: The credit risk and interest rate risk in a financial preferred share, particularly one issued by a conventional bank, tend to act inversely to each other.  To illustrate:

Increased Interest Rate Risk –> Reduced Credit Risk:  When interest rates go up, preferred shares face downward price pressure.  But, at the same time, higher interest rates tend to increase bank profitability, particularly when the catalyst is an expanding economy.  Higher bank profitability, in turn, means a reduction in the risk that banks won’t be able to pay, i.e., a reduction in the credit risk of preferred shares.

Increased Credit Risk –> Reduced Interest Rate Risk:  In those situations where credit risk in preferred shares rises–situations, for example, where the banking sector faces losses associated with a weakening economy–interest rates will tend to fall.  Considered in isolation, falling interest rates put upward pressure on preferred prices, given that they’re fixed income securities.

Admittedly, in the current environment, one could argue that this effect has already been “maxed out”–i.e., that financial preferred securities are not currently viewed as carrying meaningful credit risk, and that they therefore aren’t likely to see much upward pressure in response to the credit risk “relief” that would come from an improving economy. Regardless of whether or not that’s true, the general point still holds: credit and interest rate risks in financial preferred shares tend to work in opposite directions.  We saw that clearly in earlier phases of the current cycle, when credit risk was considered to be meaningful.  The shares experienced upward price pressure in response to economic improvement, and were able to rise even as long-term interest rates were rising.

(2) Increased Regulation: With the passage of Dodd-Frank, banks face increased regulation.  Increased regulation reduces bank profitability and therefore acts as a drag on the value of common shares.  However, it boosts the value of preferred shares, because it makes their risk-reward proposition more attractive.

As a preferred shareholder in a bank, your biggest risk comes from the possibility that the bank might take on too much risk and fail.  That risk, if it’s realized, has the potential to bring the value of your investment all the way down to zero.  At the same time, your upside in the shares is limited–the most you can realistically expect to make in them over the long-term is the fixed yield that they’re paying you.  That yield has no way to increase in response to the profit growth that successful bank risk-taking can produce.  This means that if banks are taking on added risk to increase their profitability, you’re exposed to all of the losses and none of the gains–a losing proposition.  But in an environment like the current one, where bank risk-taking is closely regulated, and where the regulations are not so onerous as to completely eliminate bank profitability, you end up winning.  You continue to earn your promised income, while banks are prevented from putting your investment principal at risk.

Right now, there seems to be a consensus in the market that the election of Donald Trump will lead to significant changes to Dodd-Frank.  But that’s hardly a given.  Any legislative initiative will have to make it through Congress, which is not an easy process.  Even if meaningful changes do make it into law, it’s unlikely that the regulatory framework will regress back to what it was pre-crisis.  All parties agree that banks need to be regulated to a greater extent than they were during that period.

(3) Strong Balance Sheets: To comply with the upcoming transition to Basel III, banks in the U.S. have had to significantly fortify their balance sheets.  Today, their balance sheets are in better shape than they’ve been in several decades.  In particular, the relative amount of common equity in U.S. banks, which serves as a potential cushion against preferred losses, is at its highest level since WW2.  That means reduced credit risk for bank preferreds.

The best metric to use in quantifying the amount of cushion that bank preferred shareholders have from losses is the tangible common equity ratio.  We take a bank’s tangible common equity (tangible assets minus all liabilities minus preferred equity at par) and divide by its tangible assets.  The result tells us how much of the bank’s tangible asset base is fully owned by common shareholders.  The portion of the balance sheet fully owned by common shareholders is the portion that preferred shareholders will be able to draw from to recover their principal in a liquidation.
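As a rough sketch of that calculation (the balance sheet figures here are hypothetical, not actual bank financials):

```python
def tce_ratio(tangible_assets, total_liabilities, preferred_at_par):
    """Fraction of the tangible asset base fully owned by common shareholders."""
    tangible_common_equity = tangible_assets - total_liabilities - preferred_at_par
    return tangible_common_equity / tangible_assets

# Hypothetical bank, figures in $B: $2,000 of tangible assets,
# $1,780 of total liabilities, $25 of preferred equity at par.
print(round(tce_ratio(2000, 1780, 25), 4))  # -> 0.0975
```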

The following chart shows the tangible common equity ratios of the big four U.S. banks: JP Morgan Chase, Wells Fargo, Bank of America, and Citigroup.  As you can see, the ratios have improved significantly.

[Chart: tangible common equity ratios of the big four U.S. banks]

Now, to be fair, “bank equity” can be illusory.  Even when it maps to something real, it can disappear very quickly during crises. That said, having a lot of it is still better than having a little, which means that bank preferred shareholders are in a much better position today than they were in prior periods.

(4) Too-Big-To-Fail: Regardless of what anyone might say, “too big to fail” is still a reality. It serves as a backstop on the creditworthiness of bank preferred shares, especially preferred shares issued by the big four money center banks: JP Morgan, Wells Fargo, Bank of America, and Citigroup.  We can think of these banks as heavily-regulated, government-backed utilities–ideal candidates for a preferred investment.

$WFC-L and $BAC-L: Two Unique Preferred Issues

Let’s now look at the two securities that will form the focus of the rest of the piece.  The first security is a Wells Fargo 7.50% Series L convertible preferred issue (prospectus), ticker $WFC-L, or $WFC-PL, or $WFC/PRL, depending on the quote platform being used.  The shares were originally issued as Wachovia shares in the early months of 2008.  They became full-fledged Wells Fargo shares, ranking on par with all other Wells Fargo preferred issues, upon Wells Fargo’s acquisition of Wachovia in December of that year (8-K).  The par value of each share is $1000, with each share paying out $18.75 per quarter in dividends, or $75 per year, 7.5%.  The current market price is around $1220, which equates to a current yield (and YTW) of roughly 6.15%.

The shares are particularly unique–indeed, precious, in my opinion–because unlike almost all other preferred shares trading in the market right now, they are not callable by the company.  Instead, they’re convertible.  They come with a broad conversion option for the shareholder, and a limited conversion option for the company.  The shareholder can convert each share into 6.38 shares of Wells Fargo common stock at any time and for any reason.  The company, if the common shares of Wells Fargo appreciate substantially, can force that conversion to occur.  More specifically, if Wells Fargo common shares, currently priced around $58, exceed a market price of $203.8 (technically: $1000/6.38 * 130%) for 20 days within any period of 30 consecutive trading days, then the company can force each preferred share to be converted into common at a 6.38 ratio.  If that were to happen, shareholders would get 6.38 common shares, each worth $203.8 in the market, which amounts to a total market value per share of $1300, 130% of par.

It goes without saying that the company is unlikely to be able to convert the shares and get out of the deal any time soon.  The market price of Wells Fargo common stock would need to more than triple from its current peak-cycle level.  Even if we make optimistic assumptions about the future price growth of such a huge bank–say, 6% per year from current levels–a tripling will take at least another twenty years to occur.  That’s great news for owners of the preferred shares–it means that they can expect to receive a tax-advantaged 6.15% yield for at least another 20 years.  Additionally, if or when the conversion price is eventually reached, it’s not going to create a loss for current buyers.  It’s actually going to create a small gain, because the shares are currently trading at a price below the $1300 that they would effectively be converted into monetarily.
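The timeline arithmetic can be sketched as follows; the prices and conversion terms come from the text, and the 6% annual growth rate is the optimistic assumption mentioned above:

```python
import math

def years_to_reach(current_price, target_price, annual_growth):
    """Years for a price to compound from current to target at a constant rate."""
    return math.log(target_price / current_price) / math.log(1 + annual_growth)

# Forced-conversion trigger per the terms above: $1000 par / 6.38 ratio * 130%.
trigger = 1000 / 6.38 * 1.30
print(round(trigger, 1))                            # -> 203.8
print(round(years_to_reach(58, trigger, 0.06), 1))  # -> ~21.6 years
```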

The second security is a Bank of America 7.25% Series L convertible preferred issue (prospectus), ticker $BAC-L, or $BAC-PL, or $BAC/PRL.  Like the Wells Fargo shares, these shares were issued in the early months of ’08, at a time when funding for financial institutions was becoming increasingly tight.  In terms of their structure, they’re essentially identical to the Wells Fargo Series L shares, except for the numeric details, shown below:

[Table: structural terms of the Wells Fargo and Bank of America Series L preferred issues]

Now, let’s look more closely at the risk-reward proposition in the shares at current prices.

In terms of reward, the investor will earn a tax-advantaged 6.15% yield (tax-equivalent: 7.26% for 28% earners, 8.28% for top bracket earners) for some unspecified number of years, potentially up to infinity, plus a one-time 5% to 10% gain on a potential conversion decades out.  Importantly, because the shares are not callable, they offer the potential for substantial price appreciation–as might occur, for example, if long-term interest rates fall, or if the market discovers additional value in the shares and re-rates their prices.  Note that the vast majority of preferred shares in the market are callable, and therefore do not offer investors the same price appreciation potential.  As their prices rise, their implied call losses rise, causing their YTWs to quickly drop.

In terms of risk, the shares carry the same risk that any security carries, which is the risk that the market price might fall, for any number of reasons, the most basic of which would be more selling than buying.

Thinking about the risk in fundamental terms, the shares carry the credit risk that Wells Fargo or Bank of America will not be able to pay the promised preferred dividends.  In quantifying that risk, Moody’s and S&P have given Wells Fargo preferred shares Baa2 and BBB ratings, respectively, and Bank of America preferred shares Ba2 and BB+ ratings, respectively.  Note that these ratings are distinct from the credit ratings of the debt securities of these banks, which carry higher ratings.

Personally, I believe the credit risk in the preferred shares of any large money center bank to be very low, for the reasons already stated.  But to gauge that risk, we don’t need to rely on speculation.  We have an actual historical test case that we can examine: the financial crisis of ’08, which represented the epitome of a worst-case scenario.  Notably, the two securities existed before the worst of the ’08 crisis unfolded.  Both came through it in perfect health, with all promised dividends paid.  Like everything else in the sector, the securities suffered large price drops, but their prices fully recovered.

[Chart: $BAC-L and $WFC-L price history through the ’08 crisis and recovery]

In addition to credit risk, the shares carry the same risk that any long-term fixed income security carries, which is the risk that long-term interest rates will meaningfully rise, forcing prices to adjust downward to create competitive yields.  But these securities, at their current prices, offer three features that can help mitigate that risk, at least partially.

  • First, at 6.15% (tax-equivalent: 7.26% to 8.28%), their yields and YTWs are already very high, higher than essentially any other similarly rated fixed income security in the market.  Conceivably, in a rising rate environment, their prices won’t need to fall by as much in order for their yields to get in line with other opportunities.
  • Second, if their prices do end up falling over time, they’ll be accumulating a healthy 6.15% yield during the process, helping to offset the losses.  That’s much more than the 2.5% to 3% that long-term treasuries will be accumulating.
  • Third, as discussed earlier, increases in long-term interest rates will tend to increase the profitability of Wells Fargo and Bank of America.  The realization of interest rate risk in the shares will therefore have the counterbalancing effect of reducing their credit risk.  Granted, the market might not see the shares as carrying any meaningful credit risk right now, and therefore the credit risk “relief” that comes with improved profitability might not help prices very much.  But if the shares do not carry any meaningful credit risk, then why are they trading at a yield of 6.15% (tax-equivalent: 7.26%, 8.28%)? Is that the kind of yield that risk-free securities normally trade at in this market? Obviously not.

Another risk worth mentioning is the risk of forced liquidation.  When you buy a preferred security above par, and the underlying company is forced to liquidate, the most you can hope to recover in the liquidation is par, $1000.  Buyers at current prices would therefore incur a loss on forced liquidation down to that level.  Personally, I don’t see the forced liquidation of either these banks as representing a realistic scenario.

$WFC-L and $BAC-L: Understanding the Valuation Anomaly

With the risks and rewards identified, we can now look more closely at the valuations of the shares.  Currently, $WFC-L and $BAC-L are offering current yields and YTWs of 6.15% (the yields and YTWs are the same).  That’s 62 bps higher than the 5.53% average current yield of preferred stocks as a group, and 137 bps higher than the 4.78% average YTW of preferred stocks as a group.  Unlike the vast majority of shares in the preferred space, however, $WFC-L and $BAC-L aren’t callable, which gives them upside potential that the rest of the space lacks. That difference should cause them to trade at lower yields than the rest of the space–yet we find them trading at higher ones.

Ultimately, there’s no way to make sense of the higher yields.  They represent a plain market inefficiency.  For conclusive proof of that inefficiency, we can compare the valuations of $WFC-L and $BAC-L to the valuations of other preferred shares from the same issuers.  In an efficient market, absent relevant differences in the structures of the shares, the valuations should all be roughly the same, given that the shares represent claims on the same company and rank on par with each other.  But that’s not what we’re going to find–they’re all different.

The following table shows all of the currently outstanding fixed rate preferred stock issues of Wells Fargo, each of which ranks on par with every other.  Prices are intraday as of February 27, 2017:

[Table: outstanding Wells Fargo fixed rate preferred issues, with prices, current yields, and YTWs]

As you can see in the above table, we find the same inefficiencies within the space of Wells Fargo shares.  $WFC-L is offering a higher YTW than all of the other issues, and a higher current yield than every other issue except for $WFC-J (a legacy Wachovia issue that has an 8% coupon and that’s essentially guaranteed to be called in at the end of this year, given its high cost to the company–it therefore deserves to be treated differently).  Instead of trading at a higher yield than the rest of the space, $WFC-L should be trading at a lower yield, because it’s the only security that’s non-callable, and therefore the only security that has the potential to reward shareholders with a long-term stream of future dividends, as well as meaningful price appreciation as interest rates fall.

Getting inside the head of the market here, I would guess that the thought process being used to justify lower yields for the other shares looks something like this.  The other shares can be called, therefore they have lower effective durations, therefore they deserve to trade at lower yields.  But this logic misses the crucial fact that the call option belongs to the company, not to the shareholder.  It’s only going to be used if using it is in the company’s interests, which is to say, if using it is counter to the interests of the shareholder, the company’s counterparty.  There is no scenario in which the existence of the call option will ever be capable of increasing the value of the shares, just as there’s no scenario in which giving someone else a free call option on positions you own could ever make those positions more valuable.

It’s true that in a falling rate scenario, the duration of a callable security will go down.  But that’s precisely the kind of scenario where an investor will want to be owning longer-duration securities–securities like $WFC-L that provide a guaranteed stream of dividends out into the future and that are therefore capable of appreciating meaningfully in price.

To see the inefficiency more clearly, let’s compare $WFC-L to $WFC-O.  We see from that table that $WFC-O is offering a 5.23% yield, almost 100 bps lower than $WFC-L’s 6.15% yield.  It has a coupon yield of only 5.13%, which is at the absolute low end of what Wells Fargo has been able to issue over the last several years.  Because it’s extremely cheap for Wells Fargo to finance, it’s unlikely to ever get called in.  The market agrees, which is why it’s trading below par, despite having a call date only 6 months away.  Because it’s unlikely to ever get called, we can treat it as a perpetual security.  Is 5.23% an appropriate yield for a perpetual security?  Maybe, but not with the equally-ranked $WFC-L, also a perpetual security, yielding 6.15%!
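The interaction between price, par, and the call option can be illustrated with a simplified yield-to-call calculation. This uses a simple annualization rather than the exact street convention, and the prices, dividends, and call horizons below are hypothetical stand-ins, not live quotes:

```python
def current_yield(annual_div, price):
    return annual_div / price

def yield_to_call(annual_div, price, par, years_to_call):
    """Simple annualized return if the security is called at par."""
    total_return = annual_div * years_to_call + (par - price)
    return total_return / price / years_to_call

# Below-par case (in the spirit of $WFC-O): a call at par adds a gain,
# so yield-to-call exceeds current yield and YTW is just the current yield.
below_par = yield_to_call(1.28, 24.50, 25.00, 0.5)

# Above-par case: a call at par forces a loss larger than the remaining
# dividends, so the yield-to-call (and hence the YTW) goes negative.
above_par = yield_to_call(1.50, 26.50, 25.00, 0.25)

print(round(below_par, 4), round(above_par, 4))
```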

Now, assume that over the next several years, interest rates go down, breaking below the lows of last summer.  $WFC-O will not be able to appreciate in such a scenario because the call option will already be exercisable.  Any move above par ($25) will expose buyers to immediate call losses.  Moreover, the company will want to call the security in, because it will be able to refinance at better yields in the market.  The situation with respect to $WFC-L, however, is different.  It is set to pay a solid stream of sizeable dividends decades out into the future.  Given the lack of a call feature, unless the common stock triples, there’s nothing that the company can do to get out of paying those dividends.  For that reason, $WFC-L has a full runway on which to appreciate in price should interest rates come down.  So while $WFC-O would be stuck at $25 in the scenario, $WFC-L would be able to rise by several hundred points, if not more.
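The asymmetry can be quantified with the standard perpetuity approximation, price = annual dividend / required yield. This ignores the value of the conversion option, and the falling-rate scenario is an assumption, so treat it as a sketch rather than a price target:

```python
def perpetuity_price(annual_div, required_yield):
    """Price of a perpetual fixed dividend stream at a given required yield."""
    return annual_div / required_yield

# $WFC-L pays $75/yr; at its current ~6.15% yield, the model recovers
# roughly the current market price.
print(round(perpetuity_price(75, 0.0615)))  # -> ~1220

# If required yields on the security fell to 5%, the non-callable $WFC-L
# could re-price substantially higher, while callable issues stay pinned near par.
print(round(perpetuity_price(75, 0.05)))    # -> 1500
```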

To summarize, then, you have a perpetual security that’s offering a contingent (callable) dividend stream with no price appreciation potential ($WFC-O) trading at a yield almost 100 bps lower than a perpetual security with an equal ranking from the exact same issuer ($WFC-L) that’s offering a guaranteed (non-callable) dividend stream with substantial price appreciation potential.  If you’re looking for a case study to disprove the efficient market hypothesis, you have one right there.

Moving on to Bank of America, the following table shows the company’s currently outstanding fixed rate preferred stock issues, each of which ranks on par with every other:

[Table: outstanding Bank of America fixed rate preferred issues, with prices, current yields, and YTWs]

Again, we see the same types of inefficiencies.  $BAC-L has the highest YTW, even though, as a non-callable security, it deserves to have the lowest.

Now, as careful value investors, we need to ask the question: why are $WFC-L and $BAC-L priced so much more attractively than the rest of the preferred share market?  Are we missing something?

The simple answer to the question is that the market for preferred shares contains a large cohort of unsophisticated investors.  For that reason, it frequently produces mispricings. In fairness, for all we know, common equity markets–i.e., the regular stock market–may also produce frequent mispricings.  But the mispricings would be much harder to conclusively prove, given that there are so many confounding variables associated with that type of investing.

To illustrate, ask yourself, right now, is $AMZN mispriced relative to $AAPL?  You can’t know for sure, because you don’t have a reliable way to estimate the likely future cash flows of either company, nor a reliable way to quantify the many risks associated with those cash flows. The question therefore can’t be resolved, except in hindsight, at which point an efficient market guru can easily say, “The market was correctly pricing the securities based on the available information at the time.  Hindsight obviously changes the picture.”

With preferred shares, however, we can look at equally-ranked securities from the exact same issuer, and watch them trade at substantially different yields.  Without a justification somewhere in the structure of either security, the mispricings become undeniable.  Unfortunately, mispricings in the preferred space cannot be readily corrected through arbitrage (i.e., buying the underpriced shares and shorting the overpriced shares) because the borrow costs on the overpriced shares tend to be prohibitively high.  The borrow cost on $WFC-O, for example, is between 10% and 11% annualized, so if you wanted to collect 100 bps annually by shorting $WFC-O and going long $WFC-L, the carrying cost of the trade would end up being 10X that amount.
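The carry arithmetic can be made explicit. All of the figures below come from the text, with the borrow cost taken at the midpoint of the quoted 10–11% range:

```python
long_yield = 0.0615   # $WFC-L dividend yield collected on the long leg
short_yield = 0.0523  # $WFC-O dividend yield paid out on the short leg
borrow_cost = 0.105   # midpoint of the 10-11% annualized borrow on $WFC-O

spread_collected = long_yield - short_yield
net_carry = spread_collected - borrow_cost

print(round(spread_collected * 10000))  # -> ~92 bps collected per year
print(round(net_carry * 10000))         # -> deeply negative carry, ~-958 bps
```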


Now, back to the unanswered question of why the market is currently mispricing these securities.  I can think of at least three possible reasons:

Reason 1: Callability Neglect and Par Anchoring

As I insinuated earlier, not everyone participating in the preferred share space understands or properly accounts for the impact of callability.  There are uninformed investors who will buy based simply on seeing a high yield, ignoring considerations related to callability.  As evidence of that claim, consider two interesting debacles that occurred last year in the bank preferred space–the case of Merrill Lynch 6.45% trust preferreds, ticker $MER-M, and the case of Bank of America 6% trust preferreds, ticker $BAC-Z.  Last summer, both of these high-cost shares were lazily trading well above their par values, even though they had become callable.  The excess in price over par was far greater than any individual future dividend payment could have made up for, yet investors in the space were willingly coming in each day and buying them.  When the shares did get called in, the net result was bloodshed:



So there you have one likely explanation for why investors might be mispricing the securities–they aren’t paying attention.  They go in, for example, and pick $BAC-I over $BAC-L simply because it offers a higher yield, never mind the fact that it becomes callable in a few months and has a current YTW that’s negative.

Another likely explanation is that there are investors that wrongly interpret callability to be a beneficial feature of preferreds, a feature that lowers duration and reduces interest rate risk.  Because $WFC-L is not callable, it’s conceptualized as having a higher duration and as being more exposed to interest rate risk.  But that’s completely wrong.  $WFC-L is no more exposed to interest rate risk than any of the other callable securities (with the limited exception of $WFC-J, which is all but guaranteed to be called in, given its 8% cost to the company).  As I emphasized earlier, callability doesn’t protect investors from rising rates because securities don’t get called in when rates are rising (i.e., when corporate financing costs are going up).  They get called in when rates are falling (i.e., when corporate financing costs are going down), which is precisely when an investor will not want them to be called in.

We can imagine an unsophisticated investor thinking to himself–“This stock has a call date 3 yrs from now, which isn’t very far away.  There’s a decent chance I’ll get my money back then, regardless of what Mr. Market decides to do.  It’s not a 100 year security, so it’s not like I’m going to be stuck holding it forever.”  The problem, of course, is that the security is going to turn into a 100 year security in exactly the kinds of situations where the investor will wish it was a 3 year security.  And it’s going to shift back into a 3 year security in exactly the kinds of situations where the investor will wish it was a 100 year security.  The investor does not own the option, and therefore the investor should not expect to receive any benefit from it.

On a similar note, I’ve noticed that when the price of a security–say, $WFC-O–trades steadily near or below par, investors tend to become more comfortable with it, even when they shouldn’t be.  It’s as if they anchor to “par” as a reliable standard of normalcy, fairness, appropriateness–a price that can be trusted.  This tendency may help explain why $WFC-L trades so cheaply relative to other $WFC issues.  To trade at a fair price relative to the rest of the space, it would have to trade upwards of 50% above par, at $1500, a price that feels excessive, even though it could easily be justified on the basis of relative value.

To be clear, par may be a normal, fair, appropriate, trustworthy price for a security on the date of its issuance–it’s the price, after all, that other presumably intelligent people agreed to pay when they made the initial investment.  But once that date passes, and conditions change, the question of how close or far a given price is to or from it is entirely irrelevant.

Reason 2: Large Outstanding Market Supply

The point I’m going to make here is more complex, but also more interesting, so you’re going to have to be patient and bear with me.

All else equal, a larger outstanding market supply of a security (price times share count) will tend to put downward pressure on its price.  This fact helps explain why $WFC-L and $BAC-L trade so cheaply on a relative basis.  As shown in the earlier tables, their outstanding market supplies–measured in dollar terms at $5B and $7B, respectively–are extremely large relative to the outstanding market supplies of the other preferred issues.

To understand why supply matters, recall that as you increase the outstanding market supply of a security–for example, by issuing large quantities of new shares–you are necessarily increasing the total “amount” of the security floating around in the market, and therefore the total “amount” sitting in investor portfolios, because every issued share has to sit in someone’s portfolio at all times.  Trivially, by increasing the total “amount” of the security contained in investor portfolios, you are also increasing the total “amount” of it that investors will randomly attempt to sell in any given period of time (and here the selling can be for any reason: because the investor is concerned, because a better investment has been found, because the cash is needed to fund some other activity–whatever, it doesn’t matter).  The point is strictly intuitive–more outstanding units of a security in investor portfolios means more units that get randomly sold every day, and every hour, and every minute, as investors move their portfolios around in response to their own whims and fancies.

That selling is a flow quantity, so we refer to it as attempted selling flow; all else equal, it increases whenever supply increases.  Now, it’s a truism of markets that, for a price equilibrium to be reached, attempted buying flow in a security has to match attempted selling flow in that security.  If attempted buying flow is less than attempted selling flow, prices will not stay put.  They will get pushed lower.

So ask yourself this question.  As the outstanding market supply of a security goes up, and therefore as the amount of attempted selling flow in that security goes up, does the amount of attempted buying flow also go up–automatically, spontaneously, simply to stay even with what’s happening elsewhere?  No.  There’s no reason for attempted buying flow to go up simply in response to an increase in the outstanding market supply of a security. But that flow has to go up, otherwise the attempted flows will not match, and prices will fall.  So what happens?  Prices fall.  The security trades more cheaply.  By trading more cheaply, it draws in interest from investors, resulting in an increase in attempted buying flow to match the increased attempted selling flow and return the price to an equilibrium at some new lower level.

Empirically, we see that big behemoth companies, with large supplies of market equity for investors to hold in their portfolios, tend to trade more cheaply than smaller ones, all else equal.  You can look at $AAPL, with its massive $716B market value, as a good example–investors lament its cheapness all the time: “It has a P/E of 10 ex-cash!”  But why do big behemoth companies like $AAPL trade more cheaply?  Some would say it’s because their potential future growth is constrained–but that can’t be the only reason.  In my view, a significant contributor is the sheer size of their market capitalizations, the enormous outstanding dollar amounts of their equity that investors have to willingly take into portfolios.  The interest to do that–i.e., take in that large supply of equity as a position in a portfolio–isn’t always going to be there, which is why big behemoths sometimes have to become cheap, so that they can attract more willing buyers and more willing owners.

As a company like $AAPL grows in market capitalization, it becomes a larger and larger portion of investor portfolios.  A larger dollar amount of it is attempted to be sold every day. But does the growing market capitalization also cause a larger amount of it to be attempted to be bought every day?  No.  The buy side of the equation isn’t affected by supply–it has no way to know that supply is increasing.  And so the security sees more selling demand than buying demand, until it becomes cheap enough to attract sufficient buying demand to correct the imbalance.  That’s the dynamic.  Now, to be fair, as a company like $AAPL grows in size, reaching ever higher market capitalizations, it will typically become more popular, more talked about, more visible to market participants, and so on.  More people will hear about it, know about it, and therefore more people will wake up in the morning and randomly decide “Hey, I want to own that stock.” That effect–the increase in popularity and visibility that occurs alongside equity growth–can bring with it an increase in attempted buying flow, an increase that can help quench the increased attempted selling flow that will naturally arise out of the increased market supply of the equity.  If that happens, the company, as a big behemoth, may not need to become as cheap, or cheap at all.

But when we’re talking about two obscure preferred stock issues that very few people even know about, preferred stock issues that didn’t arrive at their current market sizes through the growth of an underlying business, but that were instead dumped on the market en masse during a crisis-era fundraising effort, their attempted buying flow isn’t going to be able to rise to the necessary levels in the same way, i.e., by an increase in popularity or visibility or whatever else comes with growth.  The only way they’ll see sufficient attempted buying flow to match the large attempted selling flow that they’ll naturally face is if they trade more cheaply, cheap enough to come up more frequently on value screens and attract attention from bargain-seeking investors.  And that’s exactly how we see $WFC-L and $BAC-L trade–cheap enough to draw interest from those looking for a deal.

For a fascinating illustration of supply effects working in the other direction, driving prices to irrationally high levels, consider the curious case of Kansas City Southern non-cumulative 4% preferred shares, which have a par value of $25 and trade on the NYSE as $KSU-, $KSU-P or $KSU/PR.  A total of 649,736 of the shares were issued in a $16.24MM IPO that took place in November of 1962, at a time when preferred shares tended to trade at very low yields.  Shortly thereafter, around 400,000 of the shares were eliminated in a tender offer, leaving 242,170 shares outstanding.  At par, those shares represent a total dollar market supply of $6.05MM–a tiny supply by any measure.  Because they don’t mature, don’t convert, and were issued with no call feature, they have no way to leave the market, and so they still trade to this day.

Now, recall that unlike common dividends, preferred share dividends and preferred share prices don’t have the potential to grow with the economy over time.  Consequently, without new issuance, their outstanding market supplies (price times share count) can’t grow with the economy over time.  For that reason, the market supply of $KSU preferreds has stayed constant at roughly $6.05MM for over 50 years, even as the market value of the rest of the asset universe, to include the economy’s money supply, has increased by a factor of more than 30X.  The end result is that $KSU preferreds have become the “Honus Wagner T206” of the preferred share market.  They are unique preferred shares that, through a quirk, have been rendered incredibly scarce.  Their scarcity causes them to trade at irrationally high prices.

One would think that in the current environment, at only a 4% coupon, the shares would trade significantly below par.  But, for reasons that make no rational sense whatsoever, they trade at a strong premium to par, at a price of 28.75 and a yield of 3.45%.  For perspective, that’s only 45 bps above 30-yr treasuries, for a perpetual fixed income security that’s subordinate to BBB- rated corporate debt!
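As a back-of-the-envelope check on that figure, the current yield follows directly from the 4% coupon on $25 par and the quoted price (a rough sketch; small differences from the quoted 3.45% come from rounding and accrued dividends):

```python
# Current yield of the $KSU- preferreds, using the figures from the text:
# $25 par, 4% coupon, price of 28.75.
par = 25.00
coupon_rate = 0.04
price = 28.75

annual_dividend = par * coupon_rate      # $1.00 per share per year
current_yield = annual_dividend / price  # roughly 3.5%

print(f"{current_yield:.2%}")            # ~3.48%, near the quoted 3.45%
```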


So you have $WFC-L preferreds, rated BBB, offering a yield of 6.15%, and then you have $KSU- preferreds, with no rating, issued by a company whose senior debt is rated BBB-, offering a yield of 3.45%, 260 bps lower–in the same market, on the same exchange.  The clearest available explanation for this crazy price outcome is supply: the total dollar amount of market value in $WFC-L, an amount that has to find a home in someone’s portfolio at all times, is roughly 1000X larger than the total amount of $KSU- to be held. Granted, there could be other unmentioned forces at work–$KSU- might have a few very large shareholders who refuse to sell and who contribute to an even tighter shortage. But those forces are highly likely to somehow involve supply considerations as well, given that no fundamental information about the shares could justify such a ridiculous valuation.

Reason 3: $1000 Par Value, Convertibility, Index Exclusion

A third reason for the potential cheapness of $WFC-L and $BAC-L is the fact that the shares exhibit unique properties that make them less likely to see buying demand from the usual sources.  The shares trade at an unusually high par value, $1000, versus the normal $25.  They have a complicated conversion-to-common feature, one that can be difficult to clearly decipher from the legalese in the prospectus.  These factors might steer certain investors away from them–in particular, income-seeking retail investors, who constitute a significant portion of the preferred market.

More importantly, because of their differences from normal preferred securities ($1000 par, convertible, etc.), they are excluded from most preferred stock indices.  As a consequence, you don’t see them represented in popular preferred stock ETFs.  $PSK, $PGX, and $PGF, for example, all exclude them.  I’ve gone through all of the popular preferred stock mutual funds, and at least as of last year, only one of them owned either of the two shares–in that case, it was $WFC-L.  The largest preferred ETF–$PFF–also owns $WFC-L, but doesn’t own $BAC-L.  Note that if it did own $BAC-L at the right index ratio, the shares would be roughly a 4.25% position, the largest in the index, given the large market supply of $BAC-L outstanding.

When we consider these three factors together–first, the possibility that investors might be ignoring the call feature or misinterpreting it as some kind of a duration-related advantage, second, the fact that, in relative terms, there’s a very large outstanding market supply of the securities to be held, weighing down on their prices, and third, the fact that the securities have unique features that make them less likely to see interest from the usual sources of preferred buying interest–the depressed valuations start to make more sense.  Admittedly, there’s no obvious catalyst to remove the factors and lift the valuation–but no catalyst is needed.  Investors can earn an attractive return in the securities by simply owning them and collecting their outsized yields, paying no mind to whether the market price ever catches up.


In conclusion, preferred stocks are reasonably valued relative to the rest of the market and relative to their own past history, especially when their special tax advantages are taken into consideration.  Within the preferred stock space, two unique securities–$WFC-L and $BAC-L–represent very attractive value propositions: 6.15% yields (tax-equivalent to 7.26% and 8.28% for 28% and top bracket earners, respectively), very little credit risk, no call risk, meaningful price appreciation potential, and decades worth of dividends still to pay.  In an investment environment like the one we’re living in, where almost everything looks terrible, you have to take what you can get.
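The tax-equivalent yields cited above can be reproduced with standard gross-up arithmetic.  The tax rates below are my own assumptions, chosen because they reproduce the cited figures: a 15% qualified-dividend rate against a 28% ordinary rate, and 23.8% (20% plus the 3.8% net investment income tax) against a 43.4% ordinary rate for top-bracket earners:

```python
# Sketch of the tax-equivalent yield arithmetic.  The rates are my assumed
# inputs, not figures stated in the text (QDI = qualified dividend income).
def tax_equivalent_yield(qualified_yield, qdi_rate, ordinary_rate):
    # After-tax income from the preferred, grossed back up at the ordinary
    # rate that would apply to a bond's interest.
    return qualified_yield * (1 - qdi_rate) / (1 - ordinary_rate)

y = 0.0615  # $WFC-L / $BAC-L yield

# 28% bracket, dividends presumably taxed at 15%:
print(f"{tax_equivalent_yield(y, 0.15, 0.28):.2%}")    # ~7.26%

# Top bracket: 23.8% on dividends vs. 43.4% on ordinary income:
print(f"{tax_equivalent_yield(y, 0.238, 0.434):.2%}")  # ~8.28%
```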

Disclosure: I am long $WFC-L and $BAC-L.  Nothing in this piece should be interpreted as a recommendation to buy or sell any security.  I make no warranty as to the accuracy or completeness of any of the information or analysis presented.

Posted in Uncategorized | Comments Off on A Value Opportunity in Preferred Stocks

Asset Markets as Banks


Let’s suppose that you have money on deposit in a bank, in some kind of checking or savings account.  It’s paying you 2.5% per year, which isn’t something you can easily get in 2017, but something that would have been possible several years ago.  As a friend with lots of advice to give, I come up to you and strike up the following conversation:

Me: “Why are you tying up your money in a bank for such a low return?”

You: “But I’m not tying it up.  I can still use it if I need to.  I may have to pay a penalty, but it’s still there for me to access.”

Me: “Oh no, it’s not there.”

You: “How is it not there?”

Me: “The bank loans it out to people.  So it’s not there for you to access.  They keep a small portion on reserve that they can give out in case people want to withdraw money, but if there’s ever a situation where a sufficient number of people lose confidence in the bank and try to get their money out at the same time, the money’s not going to be there.  You’re going to be screwed.”

You: “Well, what should I do instead?”

Me: “You should keep the money in a safe.  When you keep it in a bank, you’re taking on risk for a paltry potential return.  That’s stupid.”

Let’s neglect for a moment any potential banking system misconceptions revealed in this conversation.  The question I want to ask is: does it make sense, for reasons of convenience and for the potential to earn a few hundred additional basis points of interest, to keep money in a bank rather than in a personal safe?  Assuming the bank is soundly managed and has a fundamentally solvent balance sheet, the only risk to your money is the possibility that everyone might rush to take money out of it at the same time.  There’s a network of confidence that buffers against that possibility.  Nobody expects people to panic and try to take money out, therefore people don’t panic and try to take money out, and the system holds up.  Assuming there’s strength and stability in the network of confidence, it can make perfect sense to opt for the added convenience and the extra 2.5% over cash.

In our modernized banking system, this point goes even farther.  The network of confidence is dramatically strengthened by the fact that there’s government insurance on deposits, and also by the fact that there’s a central bank with a charter to provide liquidity to solvent institutions that need it.  There’s essentially no possibility that a financially sound bank could ever be destroyed by a bank run.  And so if your choice is to keep money in a safe or earn 2.5% at such a bank, you should always choose the bank option.

There are valuable parallels here to asset markets, particularly in environments like the current one where short-term rates are expected to remain very low over the long-term.  I’m going to explain those parallels in a bit, but before I do that let me first clarify two concepts that I’m going to make reference to: financial asset and intrinsic value.

A financial asset is an entity that pays out a stream of cash flows to the owner over time.  The intrinsic value of a financial asset is the maximum price that an investor would be willing to pay to own the stream if she enjoyed no liquidity in owning it–that is, if she were required to hold it for the entirety of its life, and couldn’t ever take her money out of it by selling it to someone else.  To illustrate the concept, consider a share of the S&P 500.  In essence, each share is a claim on a dividend stream backed by the earnings of 500 elite U.S. companies.  The stream grows in real terms over time because some of the earnings are retained to fund acquisitions and business expansions, which increase the cash flows and dividends that can be paid out in future periods.  Last year, each share of the S&P 500 paid out around $45 in dividends.  Next year, the number might be $46, the year after that, maybe $47, and so on.  There will be sudden drops now and then, but the general trend is upward.

Obviously, estimates of the intrinsic value of a given security will be different for different investors.  A useful way to estimate that value for a security you own is to ask yourself the question: what is the most you would be willing to pay for the security if you couldn’t ever sell it?  Take the S&P 500 with its $45 dividend that grows at some pace over the long-term–say, 2% real, plus or minus profit-related uncertainty.  What is the most that you would be willing to pay to own a share of the S&P 500, assuming you would be stuck owning it forever?  Put differently, at what ratio would you be willing to permanently convert your present money, which you can use right now to purchase anything you want, including other assets, into a slowly accumulating dividend stream that you cannot use to make purchases, at least not until the individual dividends are received?

When I poll people on that question, I get very bearish answers.  By and large, I find that people would be unwilling to own the current S&P 500 for any yield below 5%, which corresponds to an S&P 500 price of at most 1000.  The actual S&P trades at roughly 2365, which should tell you how much liquidity–i.e., the ability to take out the money that you put into an investment–matters to investors.  In the case of the S&P 500, it represents more than half of the asset’s realized market value.
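One standard way to formalize that thought experiment is a growing-perpetuity (Gordon growth) calculation, which prices the dividend stream outright, with no resale value assumed.  The 6.5% required real return below is my own illustrative assumption, not a figure from the text:

```python
# Growing-perpetuity (Gordon growth) value of a dividend stream:
# price = D / (r - g), where D is next year's dividend, r the required
# real return, and g the real growth rate.
def perpetuity_value(dividend, required_return, growth):
    assert required_return > growth
    return dividend / (required_return - growth)

d = 45.0   # S&P 500 dividend per share, per the text
g = 0.02   # ~2% real growth, per the text
r = 0.065  # illustrative required real return for a no-liquidity holder

print(f"{perpetuity_value(d, r, g):.0f}")  # 1000 -- in the neighborhood of
                                           # the bearish poll answers
```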

Now, here’s where the parallel to banking comes into play.  As with a bank, a market’s liquidity is backed by a network of confidence among its participants.  Participants trust that there will be other participants willing to buy at prices near or above the current price, and therefore they themselves are willing to buy, confident that they will not lose access to their money for any sustained period of time.   Their buying, in turn, supports the market’s pricing and creates an observable outcome–price stability–that reinforces trust in it. Because the investors don’t all rush for the exits at the same time, they don’t have a need to rush for the exits.  They can rationally collect the excess returns that the market is offering, even though those returns would be insufficient to cover the cost of lost liquidity.

When the network of confidence breaks down, you end up with a situation where people are holding securities, nervous about a possible loss of access to their money, while prevailing prices are still way above intrinsic value, i.e., way above the prices that they would demand in order to compensate for a loss of liquidity. So they sell whatever they can, driving prices lower and lower, until confidence in a new price level re-emerges. Prices rarely go all the way down to intrinsic value, but when they do, investors end up with generational buying opportunities.

Recall that in our earlier example, you have two options.  You can hold your money in a safe, or you can hold it in a bank.  The safe gives you absolute security–no possibility of ever losing access to the money.  The bank gives you a 2.5% extra return, plus convenience, all in exchange for risk to your access.  Granted, you can get your money out of the bank whenever you want–but only if the network of confidence that backs its liquidity remains intact.  Because you believe that the network of confidence will remain intact, you choose the convenience and the added return.  Our modernized banking system simplifies the choice dramatically by externally bolstering the network through the use of mutual insurance and the designation of a lender of last resort.  And so there’s not even a question as to whether you should take the convenience and additional 2.5% return that the bank is offering.  You should take any extra return at all, all the way down to zero, because there’s essentially no risk that the network that backs your access to the money will ever break down.

Investors face a similar choice.  They can hold their money in cash, and earn a low return–in the current case, 0%–or they can purchase an asset.  The cash gives them absolute, unrestricted access to their money at all times, whereas the asset gives them imperfect access, access that’s contingent, at least in part, on the sustained preferences and expectations of other investors.  In compensation for that risk, they get an extra return, often a large extra return.

The question comes up: in a low rate world, with assets at historically high valuations, offering historically low returns, what should investors do?  Should they opt to own assets, or should they hold cash?  The point I want to make in all of this is that to answer the question, we need to gauge the likely strength and sustainability of the market’s network of confidence amid those stipulated conditions.  We need to ask ourselves whether investors are likely to remain willing to buy at the high valuations and low implied returns that they’ve been buying at.  If the conclusion is that they will remain willing, then it makes all the sense in the world to buy assets and continue to own them.  And if the conclusion is that they won’t remain willing, that something will change, then it makes all the sense in the world to hold cash instead.

If we’re living in a low-rate world, and our only option other than holding cash is to buy the S&P at 30 times earnings, or a 30 year treasury at 2%, or whatever other shitty deal is on offer, and you ask me what we should do, I can only answer the question by asking whether there will continue to be a ready supply of buyers at those valuations into the future.  And the point is, regardless of what “historical averages” have to say about the matter, there may continue to be!  As always, path is crucial.  If valuations have arrived at their current levels through short-term excitement and mania, then we should be more suspicious of their forward-looking sustainability.  The network of confidence sustaining those valuations is likely to be fickle and to eventually break down.  But if prices have gradually moved to where they are over a long period of time, in response to legitimate secular market forces and conditions, if participants have had sufficient time to grow accustomed to them, to psychologically anchor to them, such that they see them as normal and appropriate, then the basis for questioning their sustainability isn’t going to be as strong.

It’s important to remember that as long as cash is yielding zero or something very low, there’s no arbitrage to force asset prices lower, no dynamic to force them to conform to some historically observed level or average.  They can go as high as they want to, and stay as high as they want to, provided investors are able to develop and retain the confidence to buy at those levels.  Note that the same point doesn’t hold as readily in the other direction, when considering how low prices can go.  That’s because financial assets have intrinsic value.  Below that value, they’re worth owning purely for their cash flow streams, regardless of the prices at which they can be sold.  The market can take those prices all the way down to zero, and they’ll still be worth owning as incoming cash flow streams.

People won’t like to hear this, but in the same way that policymakers have introduced structures and practices into the banking system designed to bolster the networks of confidence that sustain banking liquidity, policymakers are capable of introducing structures and practices into financial markets that bolster the networks of confidence that sustain market liquidity.  For example, in order to prevent sharp drops that would otherwise be economically harmful, policymakers can use public money to buy equities themselves, providing a direct backstop.  Or if that’s not an option legally, they can talk up financial markets, accompanying the talk with whatever policy tools market participants find compelling.  If it’s the case, as some argue, that policymaker approaches around the world are evolving in that direction, then that provides yet another basis for valuations to get pushed higher, just as it provided a basis in our earlier example for a depositor to keep money in a bank despite being paid a paltry rate.

It’s often said that bank solvency is an unhelpful concept, given that a bank’s ability to survive is often determined more by its liquidity condition than by anything happening on its balance sheet.  Every bank can survive a solvency crisis if given enough liquidity, and every bank can be put into a solvency crisis if starved of enough liquidity.  Some would argue, for example, that Lehman failed not because it was truly insolvent, if that even means anything, but because the Fed, right or wrong, refused to lend to it when no one else would.  It couldn’t survive the crunch it faced, so it folded.  In hindsight, we conclude that it was insolvent.  But was it?  It’s something of a stretch, but we can find an analogy here to stock market valuation.  Every stock market, in hindsight, is seen as having been “expensive” or in a “bubble” when the network of confidence that holds it together breaks down, i.e., when people panic and sell out of it, driving it sharply lower.  And every stock market, in hindsight, is seen as “fairly valued” when it suffers no panic and slowly appreciates as it’s supposed to do.

With respect to equity markets in particular, I’ll end with this. If we want to get in front of things that are going to break a market’s network of confidence and undermine people’s beliefs that they’ll be able to sell near or above where they’ve been buying, we shouldn’t be focusing on valuation.  We should be focusing instead on factors and forces that actually do cause panics, that actually do break the networks of confidence that hold markets together.  We should be focusing on conditions and developments in the real economy, in the corporate sector, in the banking system, in the credit markets, and so on, looking for imbalances and vulnerabilities that, when they unwind and unravel, will sour the moods of investors, bring their fears and anxieties to the surface, and cause them to question the sustainability of prevailing prices, regardless of the valuations at which the process happens to begin.


The Paradox of Active Management

In this piece, I’m going to introduce a simplified model of a fund market, and then use the model to illustrate certain important concepts related to the impact of the market’s ongoing transition from active to passive management.  Some of the concepts have already been discussed in prior pieces, others are going to be new to this piece.

Consider, then, a hypothetical equity market that consists of shares of 5,000 publicly-traded companies distributed across 1,000 funds: 1 passively-managed index fund, and 999 actively-managed funds.


The market is characterized by the following facts:

  • Share ownership by individuals is not allowed.  The only way to own shares is to invest in a fund, and there are no funds to invest in other than the 1,000 funds already specified.  All shares in existence are held somewhere inside those funds.
  • The passive fund is required to target a 100% allocation to equities over the long-term, holding shares of each company in relative proportion to the total number of shares in existence.
  • The active funds are required to target a 95% allocation to equities over the long-term. They are free to implement that allocation in whatever way they want–i.e., by holding shares of whatever companies they prefer.  The 95% number is chosen because it leaves the active funds with enough cash to trade, but not so much cash as to appreciably detract from their returns.  Note that from here forward, when we refer to the “returns” of the active funds, we will be referring to the returns of the portion of the funds that are actually invested in the market, not the overall returns of the funds, which will include the returns of a certain amount of cash held for the purposes of liquidity.
  • The passive fund and the group of 999 active funds each represent roughly half of the overall market, a fact represented in the identical sizing of the grey boxes in the schematic above.
  • Each active fund charges an annual management fee of 1%.  The passive fund is publicly-administered and charges no fees.

We can separate the market’s valuation into two dimensions: (1) absolute valuation and (2) relative valuation.

(1) Absolute Valuation: Valuation of the aggregate market relative to cash.

(2) Relative Valuation: Valuation of companies in the market relative to each other.

If we know these two dimensions, then, assuming we know the fundamentals of the underlying companies (earnings, dividends, etc.), we can infer the exact prices of all shares in the market.

Importantly, the two dimensions of the market’s valuation are controlled by two distinct entities:

  • The fund investors control the market’s absolute valuation through their net cash injections and withdrawals (see green and red arrows, respectively).  They cannot control the market’s relative valuation because they cannot trade in individual shares.
  • The active funds control the market’s relative valuation through their buying and selling of individual shares.  They cannot control the market’s absolute valuation because they are not allowed to try to increase or decrease their long-term allocations to equities.

The passive fund controls nothing, because it has no freedom in any aspect of its market behavior.  It must deploy 100% of any funds it receives into equities, and it must buy and sell shares so as to establish positions that are in exact relative proportion to the supply outstanding.

Absolute Valuation: Driven by Net Cash Inflows and Outflows from Investors

Suppose that an investor sends new cash into a fund–either the passive fund or one of the 999 active funds–and that everything else in the system remains unchanged.  The receiving fund will have an allocation target that it will have to follow.  It will therefore have to use the cash to buy shares.  But the receiving fund cannot buy shares unless some other fund sells shares.  That other fund–the selling fund–will also have an allocation target that it will have to follow.  It will therefore have to use the cash from the sale to buy shares from yet another fund, which will have to use the cash from the sale to buy shares from yet another fund, which will have to use the cash from the sale to buy shares from yet another fund, and so on.  Instead of sitting quietly in the hands of the fund that it was injected into, the cash will get tossed around from fund to fund across the market like a hot potato.

How long will the tossing around of the hot potato (cash) last?  At a minimum, it will last until prices rise by an amount sufficient to lift the aggregate equity capitalization of the market to a level that allows all funds to be at their target equity allocations amid the higher absolute amount of cash in the system.  Only then will an equilibrium be possible.

To illustrate, suppose that each fund in the system is required to target an allocation of 95% equity, and 5% cash.  Consistent with that allocation, suppose that there is $95MM of aggregate market equity in the system, and $5MM of aggregate cash. ($95MM/$100MM = 95% equity, and $5MM/$100MM = 5% cash, so there’s a sufficient supply of each asset for every fund to satisfy its allocation mandate.)  Suppose that investors then inject $5MM of new cash into the system, raising the total amount of cash to $10MM.  That injection will throw the funds’ allocations out of balance.  As a group, they will find themselves under-invested in equity relative to their targets, and will therefore have to buy shares.  They will have to persist in that effort until prices rise by an amount sufficient to increase the system’s aggregate equity market capitalization to $190MM, which is the only number that will allow every fund to have a 95% allocation to equity amid the higher amount of cash ($10MM) in the system. ($190MM/$200MM = 95% equity, and $10MM/$200MM = 5% cash, leaving a sufficient supply of each asset for every fund to satisfy its allocation mandate.)

When cash is removed from the system, the same process takes place in reverse–prices get pulled down by the selling until the aggregate equity market capitalization falls to a level that allows the funds to be at their allocation targets amid the lower absolute amount of cash in the system.
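The equilibrium arithmetic above can be sketched in a few lines (the function is my own framing of the calculation, not anything from the piece):

```python
# At equilibrium, every fund holds its target equity allocation, so:
#   equity / (equity + cash) = target
# Solving for the aggregate equity market capitalization:
#   equity = cash * target / (1 - target)
def equilibrium_equity(cash_mm, target=0.95):
    return cash_mm * target / (1 - target)

print(round(equilibrium_equity(5), 6))   # 95.0  -- the starting state ($MM)
print(round(equilibrium_equity(10), 6))  # 190.0 -- after the $5MM injection
```

With a 95/5 target, every dollar of cash added to the system must ultimately be matched by nineteen dollars of additional equity market capitalization, which is why small flows can move prices so much.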

Now, the process by which investor flows drive valuations in our hypothetical market is subject to the same natural feedbacks seen in any real market.  As prices go up, the market’s valuation and implied future return become less attractive, therefore fewer investors send cash in, more investors take cash out, and prices see downward pressure:

Prices Up –> Demand Down –> Prices Down (Negative Feedback)

Conversely, as prices go down, the market’s valuation and implied future return become more attractive, therefore more investors send cash in, fewer investors take cash out, and prices see upward pressure:

Prices Down –> Demand Up –> Prices Up (Negative Feedback)

As in any real market, there are situations in which this natural negative feedback can give way to a different kind of feedback–a positive feedback–where rising prices reflexively lead to greater optimism and confidence, fueling increased buying, decreased selling, and therefore further price increases:

Prices Up –> Demand Up –> Prices Up (More) (Positive Feedback)

…and, conversely, where falling prices reflexively lead to greater pessimism and fear, fueling decreased buying, increased selling, and therefore further price declines:

Prices Down –> Demand Down –> Prices Down (More) (Positive Feedback)

I highlight the details here simply to point out that the feedback processes governing prices in our hypothetical market are no different from the feedback processes that govern prices in real markets.  The only difference is in the artificial “fund” structure that we’ve imposed, a structure that helps us separate out and explore the different components of price formation.
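The two feedback regimes can be illustrated with a toy simulation (my own construction, not a model from the piece): in the negative-feedback regime, demand pulls price back toward a valuation anchor; in the positive-feedback regime, demand amplifies the last price move.

```python
# Toy sketch of the two feedback regimes.  The step rules and coefficients
# are illustrative assumptions, chosen only to show the qualitative dynamic.

def step_negative(price, anchor, k=0.5):
    # Demand falls as price rises above the anchor (valuation feedback).
    return price + k * (anchor - price)

def step_positive(price, last_move, k=1.5):
    # Demand rises with the last price move (momentum feedback).
    return price + k * last_move

# Negative feedback: price converges toward the anchor.
p_neg = 120.0
for _ in range(20):
    p_neg = step_negative(p_neg, 100.0)
# p_neg is now essentially 100

# Positive feedback: an initial 1-point move compounds on itself.
p_pos, move = 100.0, 1.0
for _ in range(10):
    nxt = step_positive(p_pos, move)
    move, p_pos = nxt - p_pos, nxt
# p_pos has run far above its starting level
```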

Relative Valuation: Driven by Active Fund Preferences

Active funds are the only entities in the system that have the ability to express preference or aversion for individual shares at specific prices.  They are therefore the only entities in the system with direct control over the valuation of individual shares relative to each other.

If an active fund skillfully arbitrages the prices of individual shares–buying those that are priced to offer high future returns and selling those that are priced to offer low future returns–it will earn a clear micro-level benefit for itself: an excess return over the market. But will its successful arbitrage produce any macro-level benefits for the larger economy?

To answer the question, imagine a society where coins are the primary form of money, and where people generally hold coins in their pockets.  Suppose further that in this society, there are a select group of careless people who fail to buy new pants on a recurring basis, and who therefore end up with holes in their pockets.  As these people walk around the society, they unknowingly drop coins on the floor, leaving coins laying around for other passers-by to pick up and profit from.  A savvy individual recognizes the profit opportunity associated with the “mistakes” these coin-droppers are making, and develops a way to skillfully “arbitrage” them.  Specifically, he builds a super-whamodyne metal detector, which he uses to go on sophisticated coin hunts throughout the society.  With this super-whamodyne metal detector, he is able to pick up falling and hidden coins much faster than anyone else, and therefore generates an outsized profit for himself.

Clearly, his coin-hunting activities will generate a micro-level benefit for him.  But, aside from possible street cleanliness (fewer coins laying around?), are there any compelling macro-level benefits that will be generated for the overall society?  No.  Any “profit” that he earns in finding a given coin will be the mirror image of the loss incurred by whoever dropped it, or whoever failed to pick it up in front of him.  His effort will benefit him, but the benefit will always occur alongside corresponding losses or missed gains for others.  The system as a whole will see no net gain.  From a macro-level perspective, the resources expended in the effort to build the super-whamodyne metal detector, and lug it all around the society in search of treasure, will have been completely wasted.

We can think of market arbitrage in the same way.  Some market participants make mistakes.  Other market participants expend vast resources trying to arbitrage those mistakes, with an emphasis on getting there first, in order to capture the profit.  No value is generated in the process; rather, value is simply transferred from the mistake-makers to the arbitrageurs, just as it was transferred from the coin-droppers to the coin-hunter. From a macro-level perspective, the resources expended in the effort end up being wasted.

Now, to be fair, this argument neglects the fact that prices in a market impact capital formation, which in turn impacts an economy’s resource allocation.  When a money-losing, value-destroying business is given an undeservedly high price, it is able to raise capital more easily, and is therefore more able to direct additional economic resources into its money-losing, value-destroying operation, where the resources are likely to be wasted. Conversely, when a profitable, value-generating business is given an undeservedly low price, it is less able to raise capital, and is therefore less able to direct economic resources into its profitable, value-generating operation, where they would otherwise have been put to beneficial economic use.

Personally, I tend to be skeptical of the alleged relationship between equity prices and capital formation.  Corporations rarely fund their investment programs through equity issuance, and so there’s no reason for there to be any meaningful relationship.  This is especially true for the mature companies that make up the majority of the equity market’s capitalization–companies that comprise the vast majority of the portfolio holdings on which active management fees get charged.

To illustrate the point with an example, suppose that the market were to irrationally double the price of Pepsi $PEP, and irrationally halve the price of Coke $KO.  Would the change have any meaningful effect on the real economy?  In a worst case scenario, maybe $PEP would divert excess income away from share buybacks towards dividends, or arbitrage its capital structure by selling equity to buy back debt.  Maybe $KO would do the opposite–divert excess income away from dividends towards share buybacks, or arbitrage its capital structure by selling debt to buy back equity.  Either way, who cares?  What difference would it make to the real economy?  For the shift to impact the real economy, it would have to be the case that money used for share repurchases and dividends and other types of financial engineering is deployed at the direct expense of money used for business investment, which evidence shows is not the case, at least not for large companies such as these.  The companies make the investments in their businesses that they need to make in order to compete and profitably serve their expected future demand opportunities.  Whatever funds are left over, they return to their shareholders, or devote to financial arbitrage.

Many investors believe that the current equity market is excessively expensive, having been inflated to an extreme valuation by the Federal Reserve’s easy monetary policy.  Let’s assume that the most vocal of these investors are right, and that stocks in the market are at least twice as expensive as they should be.  The Fed, then, has doubled the market’s valuation–or alternatively, has halved the equity funding costs of corporations.  Ask yourself: is this alleged “distortion” leading to excessive corporate investment?  No, not at all.  If the current economy were experiencing excessive corporate investment, then we would be experiencing an inflationary economic boom right now.  But we’re experiencing nothing of the sort–if anything, we’re experiencing the opposite, a period of slumpy moderation, despite being more than 7 years into an expansion.  That’s because, despite the Fed’s best intentions, the transmission mechanism from share prices to real investment is weak.

Absolute and Relative Valuation: Samuelson’s Dictum

The price dynamics seen in our hypothetical market are obviously different from the price dynamics seen in real markets.  In real markets, individual investors are allowed to invest directly in individual shares, which allows them to directly influence relative valuations inside the equity space.  Similarly, in real markets, many of the active funds that invest in equities–for example, hedge funds–are able to significantly vary their net exposures to equities as an asset class. This ability allows them to directly influence the equity market’s absolute valuation.

With that said, there’s probably some truth to the model’s implication.  Individual investors (as well as the first-level custodians that manage their money, e.g., RIAs) probably exert greater control over the market’s absolute valuation.  That’s because they directly control flows into and out of investment vehicles that have no choice but to be fully invested in equities–e.g., active and passive equity mutual funds and ETFs.   Conversely, they probably exert less control over the relative valuation of shares inside the equity market, because they’re less likely to be the ones directly speculating inside that space, opting to rely on the available investment vehicles instead.

In contrast, the professional managers that operate downstream of individual investor flows, and that manage the various investment vehicles that provide those investors with equity exposure, probably exert less control over the market’s absolute valuation.  That’s because when flows come into or go out of their vehicles, they have to buy and sell, which means they have to put the associated buying and selling pressures somewhere into the market.  They cannot opt to “hold” the pressure as a buffer–they have to pass it on.  Conversely, they probably exert greater control over the relative valuation of shares inside the market, given that source investors often step aside and leave the task of making relative trades in the market to them, based on their expertise.

This fact may be the reason for Samuelson’s famous observation that markets are more efficient at the micro-level than at the macro-level.  If micro-level decisions–e.g., decisions about which specific companies in the equity market to own–are more likely to be made by professionals that possess experience and skill in security selection, then we should expect markets to be more efficient at the micro-level.  Conversely, if macro-level decisions–e.g., decisions about what basic asset classes to invest in, whether to be invested in anything at all, i.e., whether to just hold cash, and how much cash to hold–are more likely to be made at the source level, by the unsophisticated individuals that allocate their wealth to various parts of the system, individuals that are in no way inclined to optimize the timing of the flows they introduce, then we should expect markets to be less efficient at the macro-level.

We should note, of course, that the concept of efficiency is far more difficult to make sense of at the macro-level, where the different assets–cash, fixed income, and equities–are orthogonal to each other, i.e., of a totally different kind.  The advantages and disadvantages associated with holding them cannot be easily expressed in each other’s terms.

To illustrate, a share of Google $GOOG and a share of Amazon $AMZN are the same general kind of asset–an equity security, an intrinsically-illiquid stream of potential future dividends paid out of future free cash flows.  Because they are the same general kind of asset, it is easier to express the value of one in terms of the value of the other.  If, at every point into the future, a $GOOG share will generate double the free cash flow of the $AMZN share, then it has double the value, and should be double the price; similarly, if it will generate half the free cash flow, then it obviously has half the value, and should be half the price.

A share of Google $GOOG and a dollar bill, in contrast, are not the same kind of asset–one is an equity security, an intrinsically-illiquid stream of future monetary payments, the other is fully-liquid present money, in hand right now for you to use in whatever way you please.  Because they are not the same kind of asset, there is no easy way to put the two asset types together onto the same plane, no necessary, non-arbitrary ratio that one can cite to express the value that one possesses in terms of the other–e.g., 710 dollars for every $GOOG share.  But that is precisely what it means to “value” them.

The Active Management Fee: Can it Be Earned?

Now, let’s be fair.  In working to establish “correct prices”, active funds in a secondary market do provide macro-level benefits for the economy.  It’s just that the benefits are small, and their actual economic impact is frequently exaggerated.  As compensation for the work they perform in those efforts, the funds charge a fee–in our hypothetical example, the fee was 1% of assets.  To earn that 1% fee, the funds need to outperform the market by 1% before fees.  As a group, is it possible for them to do that?

The temptation is to say no, it is not possible.  The passive fund is holding the market portfolio.  Since the passive fund plus the collection of active funds equals the overall market, it follows that the active funds, collectively, are also holding the market portfolio. Given that the two segments of the market–passive and active–are holding the same portfolios, it’s logically impossible for one segment to outperform the other.  In previous pieces, we called this observation, attributable to William Sharpe, “The Law of Conservation of Alpha.”  Aggregate alpha in a market must always sum to zero.
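
To make the Law of Conservation of Alpha concrete, here is a minimal sketch with purely hypothetical numbers: a three-security market split 50/50 between a passive segment and the active segment in aggregate.  All names and figures are illustrative assumptions, not data from the text:

```python
# A minimal sketch of Sharpe's "Law of Conservation of Alpha", with
# hypothetical numbers.  If the passive segment holds every security in
# market proportion, the active segment, in aggregate, holds the market
# portfolio too, so its asset-weighted gross return equals the market's.

market_caps = {"A": 500.0, "B": 300.0, "C": 200.0}    # hypothetical market
returns = {"A": 0.10, "B": 0.05, "C": -0.02}          # one-period returns

total_cap = sum(market_caps.values())
market_return = sum(market_caps[k] * returns[k] for k in market_caps) / total_cap

# Passive segment holds 50% of every security; active holds the remainder.
passive = {k: 0.5 * v for k, v in market_caps.items()}
active = {k: v - passive[k] for k, v in market_caps.items()}

def segment_return(holdings):
    value = sum(holdings.values())
    return sum(holdings[k] * returns[k] for k in holdings) / value

# Both segments earn the market return before fees...
assert abs(segment_return(passive) - market_return) < 1e-12
assert abs(segment_return(active) - market_return) < 1e-12

# ...so net of a 1% fee, the active segment lags by exactly that fee.
print(round(segment_return(active) - 0.01 - market_return, 6))  # -0.01
```

Changing the passive share, the number of securities, or the returns leaves the conclusion intact: before fees, aggregate active performance is pinned to the market's.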

The Law of Conservation of Alpha seems to leave us no choice but to conclude that the active funds in our hypothetical system will simply underperform the passive fund by the amount of their fees–in the current case, 1%–and that the underperformance will continue forever and ever, never being made up for.  But if that’s the case, then why would any rational investor choose to invest in active funds?

Imagine that there are two asset classes, A and B, and that you have the option of investing in one or the other.  Suppose that you know, with absolute certainty, that asset class B is going to underperform asset class A by 1%.  Knowing that fact, why would you choose to invest in asset class B over asset class A?  Why would you choose to invest in the asset class with the lower expected return?

It makes sense for investors to accept lower returns in exchange for lower amounts of risk. But, in this case, the group of active funds are not offering lower risk in comparison with the passive fund.  They are offering the exact same risk, because they are holding the exact same portfolio.  In fact, there’s a relevant sense in which the active funds, considered individually, are offering additional risk in comparison with the passive fund–specifically, the additional risk of underperforming or outperforming the benchmark.  To be fair, that risk may not be a clear net negative in the same way that volatility is a net negative, but it certainly isn’t a net positive, and therefore it makes no sense for investors to throw away 1% in annual return, every year, year after year, in exchange for the highly dubious “privilege” of taking it on.

What we have in our hypothetical market is an obvious arbitrage–go with the passive fund, and earn an extra 1% per year in expected return, with no strings attached.  As investors become increasingly aware of that arbitrage, we should expect them to shift their investments out of the active funds and into the passive fund, a transition that is taking place in real markets as we speak.  Our intuitions tell us that there should be adverse consequences associated with the transition.  As more and more investors opt to free-ride on a passive approach, pocketing the 1% instead of paying it, we should expect there to be negative impacts on the market’s functioning.

In a previous piece, I argued that there were impacts on the market’s functioning–but that, surprisingly, they were positive impacts.  Counter-intuitively, the transition out of active funds and into passive funds makes the market more efficient in its relative pricing of shares, because it preferentially removes lower-skilled players from the active segment of the market, leaving a higher average level of skill in the remaining pool of market participants to set prices.  I extended the argument to include the impact on retail investors, who, in being persuaded to take on equity market exposures through passive vehicles, rather than by picking individual stocks or active fund managers themselves, were rendered less likely to inject their own lack of skill into the market’s relative pricing mechanism.  Granted, they will be just as likely to distort the market’s absolute valuation through their knee-jerk inflows and outflows into the market as a whole, but at least they will not be exerting additional influences on the market’s relative valuation, where their lack of skill would end up producing additional distortions.

Now, if market participants were to shift to a passive approach in the practice of asset allocation more broadly–that is, if they were to resolve to hold cash, fixed income, and equity from around the globe in relative proportion to the total supplies outstanding–then we would expect to see a similarly positive impact on the market’s absolute pricing mechanism, particularly as unskilled participants choose to take passive approaches with respect to those asset classes in lieu of attempts to “time” them.  But, to be clear, a shift to that broader kind of passivity is not currently taking place.  The only areas where “passive” approaches are increasing in popularity are areas inside specific asset classes–specifically, inside the equity and fixed income markets of the developed world.

Active to Passive: The Emergence of Distortion

Passive investing may improve the market’s efficiency at various incremental phases of the transition, but there are limits to the improvement.  To appreciate those limits, let’s assume that the migration from active to passive in our hypothetical market continues over the long-term, and that the number of active funds in the system ends up shrinking down to a tiny fraction of its initial size.  Whereas the active segment of the market initially consisted of 999 active funds collectively controlling roughly 50% of equity assets, let’s assume that the active segment shrinks down to only 10 funds collectively controlling 0.5% of equity assets.  The other 99.5% of the market migrates into the passive fund.


In evaluating the impact of this shift, it’s important to remember that active investors are the entities that set prices in a market.  Passive investors cannot set prices, first because they do not have any fundamental notion of the correct prices to set, and second because their transactions are forced to occur immediately in order to preserve the passivity of their allocations–they cannot simply lay out desired bids and asks and wait indefinitely for the right prices to come, because the right prices may never come.  To lay out a desired bid and ask, and then wait, is to speculate on the future price, and passive funds don’t do that.  They take whatever price is there.

In the above configuration, then, the tiny segment of the market that remains active–which holds roughly 0.5% of the total equity supply–will have to set prices for all 5,000 securities in the market.  It follows that a much smaller pool of resources will be devoted to doing the “work”–i.e., the fundamental research, the due-diligence, etc.–necessary to set prices correctly.  For that reason, we should expect the configuration to substantially reduce the market’s efficiency, contrary to what I asserted earlier.

In our hypothetical market, a 1% active management fee was initially being levied on 50% of the market’s capitalization, with the proceeds used to fund the cost of due-diligence.  After the migration, that 1% fee will be levied on only 0.5% of the market’s capitalization, yielding roughly 1/100 of the proceeds.  The shrunken proceeds will have to pay for the cost of due-diligence on a security universe that hasn’t shrunken at all.  Because the active segment will have a much smaller amount of money to spend on the due-diligence process, a new investor that enters and spends a given amount of money on it in competition with the active segment will be more likely to gain an edge over it.  Active investors that enter at the margin will be more capable of beating the market, which is precisely what it means for the market to be less efficient.
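
The shrinkage arithmetic above can be sketched in a few lines.  The total capitalization figure is a placeholder; the fee rate and the active-share percentages are the ones used in the hypothetical market:

```python
# Research budget = fee rate x the actively-managed share of market cap.
# Shrink the active share by a factor of 100 and the budget shrinks with it,
# while the security universe to be researched stays the same size.

market_cap = 1_000_000_000.0   # hypothetical total market capitalization
fee_rate = 0.01                # the 1% annual active management fee

def research_budget(active_fraction):
    return market_cap * active_fraction * fee_rate

before = research_budget(0.50)    # active segment holds 50% of assets
after = research_budget(0.005)    # active segment shrinks to 0.5%

print(round(before / after))  # 100: roughly 1/100 of the proceeds remain
```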

This thinking is headed in the right direction, but there’s a subtle problem with it.  The best way to see that problem is to trace out the literal process by which the active segment will end up shrinking.  Suppose we return to where we initially started, with 999 active funds and 1 passive fund.  In their efforts to arbitrage the 1% differential in expected returns, investors transfer large sums of money out of the 100 active funds with the worst performance track records, and into the market-performing passive fund, forcing the 100 active funds to shut down.

The 100 active funds that end up shutting down will have to sell their shares to raise cash to redeem their investors.  But who will they sell their shares to?  They might be able to sell some of their shares to the passive fund, because it will be receiving cash inflows, and will need to buy shares.  But they won’t be able to sell all of their shares to the passive fund, because the passive fund will have to buy shares of every company in the market–all 5,000, in proportion to the supply outstanding–many of which the active funds won’t be holding.  The passive fund will therefore have no choice but to buy at least some of its shares from the other 899 active funds that remain.

Working out the implications of the flows, then, the 100 underperforming active funds, in liquidating themselves, will have to sell at least some of their shares to the 899 remaining active funds.  Those remaining active funds will be in a position to buy the shares, because they will have received cash from selling some of their own shares to the passive fund when it went in to buy.  But before the remaining active funds can buy the new shares, they will have to conduct research–due-diligence–to determine the appropriate prices. That due-diligence will cost money.  Where will the money come from?

Unfortunately, there is nowhere for it to come from, because the assets that the remaining active funds will have under management, and therefore the fee revenues that they will be able to earn, will not have increased.  Crucially, in the migration, assets will not be moving from the 100 underperforming active funds to the remaining active funds who will perform the needed fundamental research on the shares being sold–rather, assets will be moving from the 100 underperforming funds to the cheapskate passive fund, which doesn’t spend any money at all on the research process, opting to simply give the money back to its investors instead.  Consequently, the money needed to fund the additional research will not be available.  Unless the money is taken out of some other necessary research activity, or out of the active fund manager’s wages or profits, the research and due-diligence necessary to buy the shares will not get done.

The following two schematics distinguish two types of migrations: a sustainable migration from active fund to active fund, and an unsustainable migration from active fund to passive fund.



In our scenario, the remaining active funds will not have done the research necessary to buy the shares that the underperforming funds will need to sell, and will not get paid any additional money to do that research.  Consequently, they aren’t going to be interested in buying the shares.  But the 100 underperforming active funds have to sell the shares–they have to get cash to redeem their investors.  So what will happen?  The answer: the bid-ask spread will effectively widen.  Prices will be found at which the remaining active funds will be willing to transact–those prices will simply be much lower, to ensure adequate protection for the funds, given that they haven’t done the work necessary to be comfortable with the purchases, or alternatively, given that they need to pay for that work, and that the money has to come from somewhere.

The point articulated here is admittedly cumbersome, and it might seem out of place to think about the process in terms of the need to pay for “research.”  But the point is entirely accurate.  The best way to grasp it is to start from the endgame scenario that we posited, a scenario where active funds shrink down to some absurdly small size–say, 0.5% of the market, with the other 99.5% of the market invested passively.  How do you get to a situation where a measly 0.5% of the market, a tiny group of managers that are only able to draw in a tiny revenue stream out of which to pay for fundamental research, is setting prices–placing bids and asks–on a massive equity universe consisting of 5,000 complicated securities?  The only way you get there is by having bid-ask spreads completely blow out.  If their counterparties are desperate, then yes, the tiny group of active funds will trade in securities that they aren’t familiar with or interested in, and that they haven’t done adequate due-diligence on.  But they will only do so at prices that are sufficient to provide them with extreme margins of safety: ultra-low bids if they have to be the buyers, and ultra-high asks if they have to be the sellers.

In the previous piece on Indexville, we posed the question: what will happen if the active segment of the market becomes too small, or if it goes away completely?  Most people think the answer is that the market will become “inefficient”, priced incorrectly relative to fundamentals, making it easier for new active investors to enter the fray and outperform. But we saw that that’s not exactly the right answer.  The right answer is that the market will become illiquid.  The bid-ask spread will blow out or disappear entirely, making it increasingly costly, or even impossible, for investors–whether passive or active–to transact in the ways that they want to.

The example above takes us to that same conclusion by a different path.  If an active segment with a tiny asset base and tiny fee revenues is left to set prices on a large universe of complicated securities, the bid-ask spreads necessary to get that segment to transact will explode, particularly in those securities that it has not done sufficient research on and that it is not familiar with or comfortable transacting in–which will be most securities, given that a tiny segment of a large market cannot single-handedly do the work of studying and forming a sound fundamental opinion on everything inside it.

Liquidity Provision: A Way to Earn the Fees

We return to the question at the title of the previous section: Can the fees that active managers collectively charge be earned?  The answer is yes.  The fees can be earned out of revenues generated through the provision of liquidity–selling at the ask to those that need to buy, and buying at the bid from those that need to sell.  The excess return over the market equals half the spread between the two, times the volume, divided by the capital employed.  As the active segment of the market shrinks in size, that excess return will increase.  At the same time, the fees extracted by the segment will decrease, bringing the segment closer to a condition in which its fees match its excess returns, which is what it means for the active segment to earn its fees.
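
The excess-return identity stated above can be sketched numerically.  The half-spread, volume, and capital figures below are assumptions chosen purely for illustration, not estimates of any actual market:

```python
# Excess return from liquidity provision:
#   (half the bid-ask spread) x volume / capital employed

half_spread = 0.25           # dollars earned per share per trade (assumed)
annual_volume = 2_000_000    # shares traded with external counterparties (assumed)
capital = 100_000_000.0      # capital employed by the active segment (assumed)

excess_return = half_spread * annual_volume / capital
print(excess_return)  # 0.005, i.e., 50 bps over the market

# As the active segment shrinks, capital falls and spreads widen, so the
# excess return per dollar of active capital rises:
print(0.50 * annual_volume / 10_000_000.0)  # 0.1, i.e., 10% over the market
```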

The active segment of the market has two external counterparties that it can provide liquidity to: first, the passive segment, which experiences inflows and outflows that it must deploy and redeem, and second, the corporate sector, which sometimes needs to raise equity funding, and which, more frequently in the present era, wants to buy back its own shares.  The total flows of those external counterparties–the total amount of buying and selling that they engage in–will determine the amount of excess return that the active segment can generate in providing liquidity to them, and therefore the maximum fees that it can collectively “earn.”  Any fees that get extracted above that amount will be uncompensated for, taken from investors in exchange for nothing.

If the market were suffering from an inadequate amount of active management, the consequences would become evident in the performance of passive funds.  Passive funds would begin to exhibit increased tracking errors relative to their benchmarks.  Every time they received a cash inflow and attempted to buy shares, they would be forced to buy at the elevated ask prices set by the small number of active funds willing and able to transact with them, ask prices that they would push up through their attempted buying. Conversely, every time they received redemption requests and attempted to sell shares, they would be forced to sell at the depressed bid prices set by the small number of active funds willing to transact with them, bid prices that they would pull down through their attempted selling.  On each round-trip, each buy followed by a sell, they would lodge a tracking loss relative to their indices, the mirror image of which would be the excess profit earned by the active segment in providing them with liquidity.
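
A hypothetical round trip makes the mechanism concrete.  The mid price, half-spread, and flow size below are illustrative assumptions; the index marks positions at the mid, while the fund must transact at the ask and the bid:

```python
# One passive-fund round trip in an illiquid market: buy on the inflow at
# the ask, sell on the outflow at the bid, while the index sits at the mid.

mid = 100.00         # price at which the index marks the security (assumed)
half_spread = 2.00   # half-spread set by the remaining active funds (assumed)
shares = 1_000       # shares bought on the inflow, then sold on the outflow

buy_cost = shares * (mid + half_spread)       # forced to buy at the ask
sell_proceeds = shares * (mid - half_spread)  # forced to sell at the bid

tracking_loss = buy_cost - sell_proceeds  # index itself is unchanged at mid
print(tracking_loss)  # 4000.0: the full spread, times shares

# The mirror image is the liquidity provider's gross profit.
assert tracking_loss == shares * 2 * half_spread
```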

Now, you might think that liquidity in the market is already provided by specialized market-makers–e.g., computers trading on HFT algorithms–and that active, fundamentally-informed investors are not needed.  But market-makers of that type only provide one small phase of the market’s liquidity–the phase that entails bridging together, over the very short-term, the temporarily divergent flows of participants that are seeking to hold shares for longer periods of time.  Active investors, those that are willing to adjust their demand for shares based on fundamental value, are crucial to the rest of the process, because they are the only entities that are capable of buffering and balancing out longer-term flow imbalances that otherwise emerge in the market–situations, for example, where there is an excess of interested sellers, but no interested buyers, either present or en route, even after the ask price is substantially lowered.  Without the participation of value-responsive active investors in those situations, market-makers would have no buyers to bridge the selling flows against, and would therefore have to widen their spreads, i.e., lower their bids–or even remove them from the market altogether.

Right now, the average tracking error in the average passive fund is imperceptible.  This fact is proof that the current market, in its ongoing migration into passive funds, isn’t even close to suffering from an insufficiency of active management.  With at most 40% of the equity market having gone passive, the point in the transition where tangible market illiquidity will ensue is still very far away.

That’s why, in the previous piece, I argued that the active segment of the market is not even close to being able to earn its fees in the aggregate.  Active managers aren’t doing anything wrong per se, it’s just that the shrinkage they’ve suffered hasn’t yet been extreme enough to undermine the function they provide, or make the provision of that function profitable enough to reimburse the fees charged in providing it.  Granted, they may be able to earn their fees by exploiting “dumb-money” categories that we haven’t modeled in our hypothetical market–e.g., retail investors that choose to conduct uninformed speculation in individual shares, and that leave coins on the floor for skilled managers to pick up behind them–but they aren’t even close to being able to collectively earn their fees via the liquidity they provide to the other segments of the market, which, evidently, are doing just fine.

The Actual Forces Sustaining Active Management

Active managers, in correctly setting prices in the market, provide a necessary benefit to the economy.  In a mature, developed economy like ours, where the need for corporate investment is low, and where the corporate sector is able to finance that need out of its own internal cash flows, the benefit tends to be small.  But it’s still a benefit, a contribution that a society should have to pay for.

Right now, the people that are paying for the benefit are the people that, for whatever reason, choose to invest in the active segment of the market, the segment that does the work necessary to set prices correctly, and that charges a fee for that work.  But why do investors do that?  Why do they invest in the active segment of the market, when they know that doing so will leave them with a lower return on average, in exchange for nothing?

The question would be more apt if active investors were investing in a fund that owned shares of all actively-managed funds–an aggregate fund-of-all-active-funds, if one can envision such a monstrosity.  Investors in such a fund would be giving away the cost of fees in exchange for literally nothing–a return that would otherwise be absolutely identical to the passive alternative in every conceivable respect, except for the useless drag of the fees.

But that is not what people that invest in the active segment of the market are actually doing.  Active management is not a group sport; the investors that invest in it are not investing in the “group.”  Rather, they are investing in the individual active managers that they themselves have determined to be uniquely skilled.  It’s true that they pay a fee to do that, but in exchange for that fee, they get the possibility of outperformance–a possibility that they evidently consider to be likely.

Every investor that rationally chooses to invest in the active segment of the market makes the choice on that basis–an expectation of outperformance driven by the apparent skill of the individual active manager that the investor has picked out.  Whereas this choice can make sense in individual cases, it cannot make sense in the average case, because the average of the group will always be just that–average in performance, i.e., not worth extra fees.  In choosing to invest in the active segment, then, active investors are choosing, as a group, to be the gracious individuals that pay for the cost of having “correct market prices”, in exchange for nothing.  Passive investors are then able to free-ride on that gracious gift.

How, then, is the active segment of the market able to remain so large, particularly in an environment where the fees charged are so high, so much more than the actual cost of doing the fundamental research necessary to have a well-functioning market?  Why don’t more active investors instead choose the passive option, which would allow them to avoid paying the costs of having a well-functioning market, and which would net them a higher average return in the final analysis?

The answer, in my view, is two-fold:

(1) The Powers of Persuasion and Inertia:  For every active manager, there will always be some group of investors somewhere that will be persuaded by her argument that she has skill, and that will be eager to invest with her on the promise of a higher return, even though it is strictly impossible for the aggregate group of managers engaged in that persuasive effort to actually fulfill the promise.  Moreover, absent a strong impetus, many investors will tend to stay where they are, invested in whatever they’ve been invested in–including in active funds that have failed to deliver on that promise.

(Side note:  Did you notice how powerful that shift from the use of “he” to the use of “she” was in the first sentence of the paragraph above?  The idea that the manager that we are talking about here is a female feels “off.”  Moreover, the connotation of deceit and trickery associated with what the active manager is doing in attempting to convince clients that he has special financial talents and that his fund is going to reliably outperform is significantly reduced by imagining the manager as a female.  That’s evidence of ingrained sexual bias, in both directions).

(2) The Framing Power of Fee Extraction:  Fees in the industry get neglected because they are extracted in a psychologically gentle way.  Rather than being charged as a raw monetary amount, they are charged as a percentage of the amount invested.  Additionally, rather than being charged abruptly, in a shocking one-time individual payment, they are taken out gradually, when no one is looking, in teensy-weensy daily increments. As a result, an investor will end up framing the $10,000 fee she might pay on her $1,000,000 investment not as a literal payment of $10,000 that comes directly out of her own pocket, but rather as a negligible skim-off-the-top of a much larger sum of money, taken out when no one is looking, in minuscule incremental shavings that only accumulate to 1% over the course of a full year.

To illustrate the power that this shift in framing has, imagine what would happen if the DOL, in a follow-up to its recent fiduciary interventions, were to require all annual fees to be paid at the end of each year, by a separate check, paid out of a separate account.  Then, instead of having the 1% fee on your $1,000,000 mutual fund investment quietly extracted in imperceptible increments each day, you would have to cut a $10,000 check at the end of each year–go through the process of writing it out, and handing it over to the manager, in exchange for whatever service he provided.  $10,000 is a lot of money to pay to someone that fails to deliver–even for you, a millionaire!
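To make the framing effect concrete, here is a quick sketch of the two presentations of the same fee (the numbers match the example above; Python is used purely for illustration):

```python
def annual_fee(balance, fee_rate=0.01):
    """The fee framed as a raw monetary amount: one yearly check."""
    return balance * fee_rate

def daily_fee(balance, fee_rate=0.01, days=365):
    """The same fee framed as a gradual daily skim-off-the-top."""
    return balance * fee_rate / days
```

On a $1,000,000 balance, the yearly check is $10,000, while each daily shaving is under $28.  It is the same money either way; only the framing differs.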

If the way fees are framed were forcibly modified in this way, investors would become extremely averse to paying them.  The ongoing shrinkage of the market’s active segment–in both its size and its fees–would accelerate dramatically.  The effects of the policy might even be so powerful as to push the market into a state in which an acute scarcity of active management ensues–a situation in which everyone attempts to free-ride on the index, and no one steps up to pay the expenses associated with having a well-functioning market.  If that were to happen, active funds would find themselves capable of generating excess returns from the provision of liquidity that substantially exceed the fees they charge. Investors in active funds, who are the only ones actually paying the cost of that service, would begin to receive a benefit for having paid it, a benefit that would be well-deserved.


The Value of Active Management: A Journey Into Indexville

The growing popularity of passive investing provokes a series of tough questions:

  • What necessary functions does active management perform in a financial system?
  • What is the optimal amount of active management to have in such a system, to ensure that those functions are carried out?
  • If the size of the market’s active share falls below the optimal level, how will we know?
  • How much can active managers, as a group, reasonably charge for their services, without acting as a net drag on the system?

To answer these questions, I’m going to explore the curious case of Indexville, an economy whose investment sector has been designed to be 100% passive.  As expected, Indexville lacks a number of features that are necessary for the optimal functioning of an economy. I’m going to try to build those features back into it, documenting the cost of doing so each step of the way.  Ultimately, I’m going to run into a problem that ingenuity cannot overcome.  That problem is the exact problem that active management is needed to solve.

A Journey Into Indexville

Indexville is an economy similar in its composition to the United States economy, with one glaring exception: the citizens of Indexville consider active management to be unproductive and immoral, and are therefore invested in Indexville’s corporate sector 100% passively, through a fund called INDX.  The system works as follows:


When the citizens of Indexville earn excess cash, they have two options:

(1) Hold it in the bank, where it will earn a small interest rate set by the government–an interest rate of, say, 1% real.

(2) Purchase new shares of the economy’s sole investment vehicle, an index mutual fund called INDX.

When INDX sells new shares to investors, it invests the proceeds in one of four private equity funds–PVT1, PVT2, PVT3, and PVT4.  These private equity funds, which are independently operated, compete with each other to provide equity financing to the corporate sector.   The financing can take the form of either an initial investment, in which a new company is created, or a follow-on investment, in which an existing company is given additional funds for expansion.

If we wanted to, we could broaden the scope of the funds, allowing them to make debt investments in addition to equity investments, not only in the corporate sector, but in the household and government sectors as well.  But we want to keep things simple, so we’re going to assume the following:

  • All private sector investments in Indexville are made through the corporate sector, and the financing is always through equity issuance.  There are no homeowners in Indexville, for example–everyone in Indexville rents out homes and apartments from corporations that own them.
  • Direct investment in Indexville’s government sector is not possible.  All federal, state and local Indexville paper is owned by the banking system, which is where the small amount of interest paid to cash depositors ultimately originates.  If investors want a risk-free option, they already have one: hold cash.

For now, we assume that buying INDX shares is a one-way transaction.  The shares cannot be redeemed or sold after purchase.  Redeeming them would require de-capitalizing the corporate sector, which is unrealistic.  Trading them with other people is possible, but the option has been made illegal, consistent with Indexville’s moral ethos of total investment passivity: “Thou shalt not trade.”

Now, every existing company in Indexville was created through the above investment process, so INDX owns the entire corporate sector.  That’s what allows us to say that all INDX investors are invested “passively” relative to that sector.  Recall the definition we proposed for a passive allocation:

Passive Allocation: A passive allocation with respect to a given universe of assets is an allocation that holds every asset in that universe in relative proportion to the total quantity of the asset in existence.

The total quantity of corporate assets in Indexville literally is INDX.  Whatever the relative proportion between the quantities of the different assets inside INDX happens to be, every investor will own them in that relative proportion, simply by owning INDX shares.

Now, to be fair, the investors in INDX are not passively allocated relative to the entire universe of Indexville’s financial assets, which includes both equity and cash.  Though such an allocation might be possible to achieve, it would clearly be impossible to maintain in practice.  Anytime anyone spent any money, the spender would decrease his cash, and the recipient would increase it, throwing off any previously passive allocation that might have existed.

The word “passive” can take on another important sense in these discussions, which is the sense associated with a “passive ethos”, where investors refrain from placing trades for speculative reasons, and instead only trade as necessary to carry out their saving and consumption plans, or to transition their portfolios to a state that matches their risk tolerances as they age.  This is the sense of the term found in the advice of the gurus of “passive” investing–Jack Bogle, Burton Malkiel, Eugene Fama, and so on.  Indexville investors follow that advice, and fully embrace a “passive ethos.”

Returning to the structure of Indexville, the excess profits of the individual companies in Indexville are periodically distributed back up to their respective private equity funds as dividends, and then back up to INDX, and then back up to the investors.  These dividends represent the fundamental source of INDX investor returns.  Without the potential to receive them, shares of INDX would be intrinsically worthless.

Instead of relying on equity issuance, corporations in Indexville have the option of funding new investment out of their own profits.  When they choose to do that, they reduce the dividends that they are able to pay to the private equity funds, which in turn reduces the dividends that the private equity funds are able to pay to INDX investors.  But the reduction is compensated for in the long-term, because the new investment produces real growth in future profits, which leads to real growth in future dividends paid.

Cost #1: Conducting Due-Diligence on New Investment Opportunities

To make sound investments–whether in new companies seeking initial capital, or in existing companies seeking follow-on capital–the private equity funds have to conduct appropriate due-diligence on candidate investment opportunities.  That due diligence requires research, which costs money–the funds have to hire smart people to go and do it.

Where will the money to pay these people come from?  Unfortunately, Indexville’s passive structure has no magic bullet to offer here–the money will have to come from the only place that it can come from: out of investor returns, as fees.  It will get subtracted from the dividends paid to the private equity funds, and therefore from the dividends paid to INDX, and therefore from the dividends paid to investors.

What we’ve identified, then, is the first unavoidable cost of a 100% passive system:

Cost #1: The need to pay people to conduct appropriate due-diligence on new investment opportunities, so as to ensure that INDX investor capital is allocated productively and profitably, at the best possible price.

Note that Cost #1 is not driven by a need to put a “trading” price on already-existing companies.  The trading prices of such companies do not matter to INDX investors, since INDX already fully owns them, and has no one to sell them to but itself.  Nor do the prices matter to the economy, because the physical capital that underlies them has already been formed.  The price that people decide to trade that capital at does not affect its underlying productivity.

The need that gives rise to Cost #1 is the need to determine whether a company is an appropriate target for new investment–taking new investor funds, new economic resources, and putting them into a startup, or a struggling company–and whether the investment will produce a better return than other available opportunities, given the proposed price.  Mistakes made in that determination have a direct negative impact on investor performance, and lead to suboptimal allocation of the economy’s labor and capital resources.

Let’s ask: what is a reasonable estimate of the total expense associated with Cost #1?  How much should we expect it to detract from INDX’s returns? The answer will depend on how much new investment the private equity funds engage in. Recall that most of the investment in Indexville will be carried out by already-existing companies, using retained profits as the funding source. The cost of the research done in association with that investment is already reflected in the net profits of the companies.

The U.S. has roughly 5,000 publicly-traded companies.  To be conservative, let’s assume that Indexville’s corporate sector is similarly sized, and that 1,000 companies–20% of the total–seek new funding each year, either in initial public offerings or, more commonly, in follow-on offerings.  We want Indexville’s capital to be allocated as productively as possible, so we assign two extremely talented analysts to each company, at an annual cost of $300,000–$200,000 in salary, and $100,000 in capital investment to support the research process.  For a full year, the analysts spend the entirety of every workday researching their assigned individual company, rigorously weighing the investment opportunity. The resulting cost:

1000 * 2* $300,000 = $600,000,000.

The U.S. equity market is valued at roughly $30T.  Expressed as a percentage of that overall total, the annual cost of investment due diligence in Indexville amounts to 0.002% of total assets.

Of course, we want to have the benefits of aggressive competition in Indexville.  So we multiply that cost by 4, to reflect the fact that each of the four funds will carry out the research program independently, in competition with each other.  To be clear, then, the four funds will each have 2,000 analysts–8,000 in total–who each spend an entire year effectively researching one-half of a single investment opportunity, at a cost of $300,000 per analyst per year.  The total annual expense: 0.008% of total marketable assets.

Consistent with Indexville’s emphasis on engineered competition, an advisory board for INDX has been created.  Its responsibility is to evaluate the long-term investment performances of the four private equity funds, and adjust the amount of future funding provided to them accordingly.  Funds that have made good investments get a larger share of the future investment pie, funds that have made bad investments get a smaller share. Similar adjustments are made to manager pay.  Managers of strongly performing funds see their pay increased, managers of poorly performing funds see their pay decreased.

What will the advisory board cost?  Assuming 50 people on it, at an annual cost of $500,000 per person, we get $25MM, which is 0.00008% of assets–in other words, nothing.
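The cost figures quoted in this section can be sanity-checked in a few lines (all inputs come from the text; Python is used purely for convenience):

```python
TOTAL_ASSETS = 30e12  # the ~$30T total market value assumed in the text

def pct_of_assets(annual_cost):
    """Express an annual dollar cost as a percentage of total assets."""
    return 100 * annual_cost / TOTAL_ASSETS

due_diligence = 1_000 * 2 * 300_000  # one fund's research program: $600MM
competitive = 4 * due_diligence      # all four funds researching independently
advisory_board = 50 * 500_000        # the INDX advisory board: $25MM
```

Running the numbers gives 0.002% for a single research program, 0.008% with four-fold competition, and roughly 0.00008% for the advisory board, matching the figures above.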

Cost #2: Determining a Correct Price for INDX Shares

When investors buy new shares in INDX, they have to buy at a price.  How is that price determined?  Recall that Indexville’s corporate sector is entirely private. There is no market for the individual companies.  Nor is there a market for INDX shares. It follows that the only way to value INDX shares is to calculate the value of each underlying company, one by one, summing the results together to get the total.  That’s going to cost money.

So we arrive at the second unavoidable cost of a 100% passive system:

Cost #2: The need to pay people to determine the values of the individual companies in the index, so that new shares of the index can be sold to investors at fair prices.

Cost #1 was a worthwhile cost.  The underlying work it entailed was worth doing in order to maximize INDX shareholder returns and ensure efficient capital allocation in Indexville’s economy.  Cost #2, in contrast, is a net drag on the system.  It has no offsetting benefit, and can be avoided if the participants embrace a cooperative mentality.

Suppose that a mistake is made in the calculation of INDX’s price, and an investor ends up paying slightly more for INDX shares than he should have paid.  The investor will see a slight reduction in his returns.  That reduction, of course, will occur alongside a slight increase in the returns of existing investors, who will have taken in funds at a premium.  In the end, the aggregate impact on the returns of the system will be zero, because price-setting is a zero-sum game.

Now, the direct, short-term impact of pricing mistakes may be zero-sum, but the long-term effect can be negative, particularly if the mistakes cause people to alter their future behaviors in undesirable ways.  If consistent inaccuracies are tolerated, people might try to find ways to “game” them.  They might also become disgruntled, and cease to participate in the investment process.  If that’s what’s going to happen, then we’re not going to have much of a choice: we’re going to have to spend the money needed to make sure the price is as right as it can be, for every new share sale, every day.  The cost will come out of everyone’s return, reflecting the drag on the system brought about by the non-cooperative mentality.

Given that Indexville’s corporate sector is roughly the same size as the public market of the United States, the task of accurately calculating the fund’s price will entail valuing roughly 5,000 individual companies.  The proposed plan is as follows.  As before, we hire two highly talented analysts for each company.  We give each analyst the singular responsibility of building a valuation estimate for the company, and adjusting the estimate on a daily basis to reflect any new information that becomes available.  The basic instruction to each analyst is:

  • Develop an estimate of the company’s future cash flows, and discount those cash flows at a required real rate of 6%, which is what Indexville investors consider to be a fair return.  Focus on getting the cash flows right.  Your pay will be tied to how well you do that.
  • Where there is uncertainty, identify the different possibilities, and weigh them by their probabilities.  Do not let the desire to be right in your past analysis influence your future analysis–instead, respond to the facts as they present themselves, with a singular focus on truth, i.e., “what is actually going to happen?”, rather than “what do I personally want to happen?”  If necessary, examine past valuation work done on similar companies, especially work done by top-performing analysts, and use their insights to help you triangulate towards a reasonable number.
  • Try to be consistent.  Don’t dramatically change your valuation estimates from day to day unless something relevant and impactful has taken place–e.g., the company has released an earnings report with important new information in it.
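A minimal sketch of the valuation rule the analysts are being asked to follow, assuming annual real cash-flow projections and the 6% required real rate (the function names are illustrative, not part of any actual system):

```python
def expected_cash_flow(scenarios):
    """Weigh the different possibilities by their probabilities.
    scenarios is a list of (probability, cash_flow) pairs for one year."""
    return sum(p * cf for p, cf in scenarios)

def present_value(cash_flows, rate=0.06):
    """Discount projected annual real cash flows at the 6% real rate
    that Indexville investors consider fair."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))
```

A single year-end cash flow of $106 discounts to a value of $100 today, which is what a 6% required return means in practice.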

To get to a price for the INDX, we take the two valuation estimates produced for each company and average them together.  We then sum the resulting averages, arriving at a total market capitalization for the fund.  We then divide the fund’s market capitalization by its number of shares, arriving at a final price.  That price is the price at which we sell new shares to investors, with sales occurring each day, at the end of the day.
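The aggregation step just described can be sketched as follows (two estimates per company, as specified above):

```python
def indx_price(estimates, shares_outstanding):
    """estimates maps each company to the pair of valuation estimates
    produced by its two analysts.  Average each pair, sum the averages
    into a fund market capitalization, then divide by the share count
    to get the daily sale price."""
    market_cap = sum((a + b) / 2 for a, b in estimates.values())
    return market_cap / shares_outstanding
```

With two companies whose estimate pairs average to $105 and $195, and 30 shares outstanding, the sale price comes out to $10 per share.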

How much will this project cost annually?

5,000 * 2 * $300,000 = $3,000,000,000

But let’s assume that we really want to get the final numbers right, to ensure fairness for all Indexville investors.  So we double the effort.  We put four analysts on every company, instead of two.  The result:

5,000 * 4 * $300,000 = $6,000,000,000

So, $6,000,000,000.  Relative to the (assumed) total market capitalization of Indexville’s universe of companies, $30T, the cost represents 0.02% of assets.

Now, you might think that an actual market’s estimate of corporate value will be more accurate than the estimates of these 20,000 analysts.  But you have no evidence of that. What would evidence even look like?  Markets don’t publish their “future cash flow estimates”, nor do they reveal their “discount rates.”  These artifacts can only be arrived at tautologically, by extracting them together from the price.  And either way, it doesn’t matter.  As long as the citizens of Indexville believe that the estimates of the 20,000 analysts are accurate, and maintain their trust in the system, actual inaccuracies will not affect anything, because price-setting, again, is a zero-sum game.

Cost #3: The Provision of Liquidity

Up to this point, we’ve identified two costs that investors in Indexville will have to bear, given that they don’t enjoy the benefits of active management.

  • The first was the cost of conducting due-diligence on new investment opportunities. Actual cost: 0.008% of assets.
  • The second was the cost of valuing existing companies in the fund so as to determine a fair sale price for new shares.  Actual cost: 0.02% of assets.

Both costs proved to be extremely small.  Clearly, then, a large active segment is not needed to solve them, especially not at typical active fee levels, which can be anywhere from 25 to 100 times higher.

Unfortunately, there’s a huge problem with Indexville’s system that we haven’t addressed: the problem of liquidity.  An investor can convert cash into INDX by purchasing shares, but what if an investor wants to go the other way, and convert INDX shares back into cash?  In its current configuration, Indexville doesn’t offer the option to do that.

Unlike money, or even a debt security, an equity security is intrinsically illiquid.  It has no maturity date, and offers no direct method of redemption.  The cash that went into purchasing it is effectively gone.  The only way to directly get that cash back is to have the corporate sector liquidate–a solution that is not realistically possible, and that would require investors to take losses if it were.

What is the real cost of the illiquidity of an equity security?  Answer: the inconvenience imposed on the owner.  The owner has to factor the cost of the illiquidity into his pricing of the security.  If he is making an investment in a new company, and is taking a full equity stake in it, he will have to require a higher future return–higher not to make the overall deal any better for him, but simply to compensate him for the significant drawback associated with losing his ability to access his money.  The higher return hurdle that he will be forced to place on the process will disqualify investment prospects that would otherwise be beneficial to everyone, if they were liquid.  Similarly, if he is making an investment in an existing company owned by someone else, he is going to have to demand a lower price for his stake.  That lower price will come at the existing owner’s expense, rendering the issuance less attractive to the existing owner, and causing him to not want to take in funds and make investments that would otherwise be beneficial to everyone, if liquidity existed.

The cost of illiquidity is real, and therefore it’s important that we find a way to make INDX liquid.  So here’s a simple proposal: we give investors an option to sell–not to other people, but to INDX directly, in a mutual fund redemption process.  Every day, at the end of the day, we pair buyers and sellers of the fund, exchanging the incoming cash and the existing shares between them.  If you are seeking to buy, I pair you with a seller.  You take his shares, he takes your cash.  And vice-versa.  This won’t cost us anything extra, because we already know the fair value of the fund from the prior calculations that we paid for. Investors will then have liquidity.
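The proposed end-of-day pairing might be sketched like this (a simplification; real mutual fund order handling is more involved):

```python
def cross_at_nav(buy_cash_orders, sell_share_orders, nav):
    """Pair incoming buy cash with outgoing shares at the computed
    fair value (NAV).  Returns the shares exchanged and the leftover
    imbalance in shares (positive = excess buying demand)."""
    shares_demanded = sum(buy_cash_orders) / nav
    shares_offered = sum(sell_share_orders)
    matched = min(shares_demanded, shares_offered)
    return matched, shares_demanded - shares_offered
```

When the two sides happen to balance, everyone transacts at fair value and no new shares need to be created.  The leftover imbalance is exactly the problem this section goes on to confront.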

Indexville’s citizens are likely to be averse to this suggestion, because it enables the toxic practice of market timing.  But there are legitimate, non-speculative reasons why an investor may want to sell shares.  For example, the investor may be entering a later phase of her life, and may want to withdraw the funds to spend them down.  Either that, or she may simply prefer a more liquid allocation, and be willing to give up some return in exchange for it.  Some other investor may have similarly legitimate reasons for wanting to buy new shares–the investor might be in a younger phase of life, and have a greater need to save, or want more growth.  What is the harm in pairing the two together, exchanging the shares and cash between them?  The exchange may be zero-sum in terms of how the future returns get divided, but it will be positive sum in terms of its overall impact–both investors will end up winners.  This contrasts with a scenario where the two investors trade for speculative reasons, because one thinks the price is going to rise, and the other thinks the price is going to fall.  That trade will be zero-sum.  One trader is going to be right, and the other trader is going to be equally wrong.

Let’s assume that we’re able to convince the citizens of Indexville that there are legitimate uses for liquidity, and that everyone will ultimately benefit from the introduction of a sale option in INDX shares.  We’re still going to run into a problem.  How will the fund deal with periods in which there is more buying than selling, or more selling than buying?  In those periods, excess buyers and sellers will end up leftover, with orders that cannot be met. What will the fund do with them?

If there is more buying than selling, the fund can always issue new shares.  But the private equity firms may not need new funds.  There may not be any attractive investment opportunities–with returns of 6% or better–for them to deploy the funds into.  If we allow new shares to be created at 6%, when there are no 6%-or-better investments for the funds to invest in, we will be harming existing shareholders, diluting their stakes in exchange for nothing.

If there is more selling than buying, the situation becomes more complicated.  One option is to take the cash dividends of other investors, and give those dividends to the sellers, in exchange for their stake.  On this option, the existing investors will effectively be “reinvesting” their dividends.  But they may not want to do that.  And even if they do want to do it, the dividend payments in a given quarter may not be enough to fully meet the selling demand.  Another option would be for INDX to borrow cash to pay exiting shareholders.  On this option, the remaining investors will essentially be levering up their stakes.  Again, they may not want to do that.   And even if they do want to do it, the option of borrowing for that purpose may not be something that the banking system in Indexville is willing to make available.

Apart from forced dividend reinvestment and forced leveraging, the only other way for INDX to get the cash needed to pay the exiting investors would be to have the corporate sector liquidate, i.e., sell itself.  But, again, assuming that all of the capital that went into the funds and companies has been deployed, who exactly are the companies going to sell themselves to?  INDX is the only possible buyer, and it has no cash.  It’s trying to get cash, to redeem the investors that want to sell it.

A final option would be for INDX to change its price.  Instead of having investors exchange shares at a price associated with an implied rate of return of 6%, it could raise or lower the price–i.e., decrease or increase the implied rate of return–in order to discourage buying or selling and balance the demand.  Unfortunately, that won’t work.  Indexville’s investors are entirely passive.  They make investment decisions based on their personal savings and consumption needs, not based on valuation arbitrage, which they do not understand. INDX can raise its price to infinity, or drop its price to zero–either way, the buying and selling demand of the system will not change.  The only thing that will change is that certain unlucky people will end up getting screwed–denied the “fair value” return that they deserve, 6%.

With the above options ruled out, INDX’s only remaining option will be to simply say no:

“We’ve received your sale order.  Due to market conditions, we are not currently able to execute it.  However, we’ve placed your name in a queue.  Currently, you are number 47 in the queue, with $400MM worth of selling demand in front of you.  Once we’ve found buyers to take on that selling demand, any subsequent buy orders will be routed through you.”

The problem with this option, of course, is that it represents a loss of investor liquidity. Investors will only have guaranteed immediate liquidity if they are lucky enough to be going against the prevailing investment flows.  Otherwise, they will have to wait.
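The queue described in the notice above can be sketched as a simple first-in, first-out structure (illustrative only):

```python
from collections import deque

class RedemptionQueue:
    """FIFO queue for sell orders that cannot be executed immediately.
    Incoming buy cash is routed to the earliest waiting sellers."""
    def __init__(self):
        self.waiting = deque()  # each entry: [investor, dollars_waiting]

    def enqueue_sale(self, investor, dollars):
        self.waiting.append([investor, dollars])

    def route_buy(self, dollars):
        """Fill waiting sellers in order; return any unfilled buy cash."""
        while dollars > 0 and self.waiting:
            entry = self.waiting[0]
            fill = min(dollars, entry[1])
            entry[1] -= fill
            dollars -= fill
            if entry[1] == 0:
                self.waiting.popleft()
        return dollars
```

A seller’s liquidity then depends entirely on how much buying flow happens to arrive after her, which is precisely the loss of guaranteed liquidity just described.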

But would that really create a problem?  It would certainly create a problem if Indexville’s investors were inclined to trade for speculative reasons.  The mere perception of impending excess buying or selling demand would incent them to buy or sell, reflexively exacerbating the ensuing flow imbalance.  But Indexville’s investors are not like that. Consistent with Indexville’s passive ethos, they only make trades for legitimate personal reasons–because they have savings to invest, or because they need to draw down on their savings in order to fund their lifestyles.

In certain situations where a large imbalance naturally develops in the economy’s saving preferences, the answer is, yes, it would create a problem.  One can imagine, for example, a scenario in which the average age of Indexville’s citizenry increases dramatically, resulting in a large population of older people with a preference for redeeming shares, and only a small population of younger people with a preference for purchasing them.  In such a scenario, the older generation will lose its liquidity.  But as long as Indexville’s economy remains capable of producing what its citizens want and need, any ensuing problem will be illusory–a problem that results entirely from the accounting artifacts–“shares”, “cash”, etc.–that are being used to divide up the economy’s purchasing power.  Policy intervention could easily solve it–for example, by having the government print cash and purchase the shares from the older generation at fair value.

Now, everything I’ve just said is based on the assumption that Indexville’s investors are fully passive, that they do not respond to valuation arbitrage.  If that’s not true, i.e., if they have even the slightest of speculative instincts, then the problems with the above proposal will compound.  Assuming that Indexville’s economic condition is similar to the current economic condition of the United States, the discount rate at which prices for the fund have been set, 6%, will be too high relative to the small interest rate, 1%, that can be earned on the alternative, cash.  The economy will not have a sufficient quantity of investment opportunities, with returns of 6% or better, in order to meet that saving demand.  INDX will therefore see a persistent excess of buying–especially if the buyers are reflexively aware that the economy’s strong saving demand will guarantee them future liquidity, should they ever want to sell.

But if Indexville’s investors are active, and respond to valuation arbitrage, then a previously discarded solution will work just fine.  All that INDX will have to do to balance the flows is change the price at which purchase and sale orders are executed.  If there is too much buying demand, raise the price–the lower implied return will lead to a reduction in buying demand.  If there is too little buying demand, lower the price–the higher implied return will increase the buying demand.  Of course, to embrace this solution, Indexville’s investors are going to have to abandon the notion that anything other than a 6% return is “unfair.”  If there is strong demand for saving in the economy, and if the private equity funds cannot find sufficient investment opportunities with a return above 6% to deploy that saving into, then lower returns will simply have to be accepted.
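Under that assumption, the flow-balancing price could even be found numerically, for example by bisection, given some estimate of how net buying demand falls as the price rises (the demand function here is purely hypothetical):

```python
def clearing_price(net_buying_at, lo, hi, tol=1e-6):
    """Bisect for the price at which net buying flow is zero, assuming
    net buying demand decreases as the price rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_buying_at(mid) > 0:  # excess buying: raise the price
            lo = mid
        else:                       # excess selling: lower the price
            hi = mid
    return (lo + hi) / 2
```

With a toy demand curve such as net_buying_at = lambda p: 100 - p, the search settles at a price of 100, where inflows and outflows balance.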

In the above case, there are tools that the government can use to create higher returns for investors.  But those tools come with separate costs.  It doesn’t make sense for the society to offer to bear those costs simply to improve the average investor’s lot.  The right answer is for investors to accept lower rates of return, which is fair compensation for their investment contributions, given the economy’s reduced need for them.

Returning to the assumption that Indexville’s investors are completely passive and insensitive to arbitrage, there will be no way to address flow imbalances other than by making people wait their turn.  The cost of that approach is lost liquidity.

Cost #3: the cost of providing liquidity in situations where the legitimately passive buying and selling flows going into and out of the index fail to balance out.

Unfortunately, there is no amount of money that Indexville can spend to cover that cost, i.e., to provide the desired liquidity.  For Indexville to have it, some investor somewhere will have to agree to go active.

Market-Making and Active Management

When you place an order in the market to buy a share of stock, how do you place it?  If you’re like most people, you place it either as a market order, or as a limit order above the lowest ask price.  You do that because you want the order to execute.  You want to buy now.  If you thought that a better time to buy would come later, you would just wait.

Instead of insisting on buying now, why don’t you just put a limit order in to buy below the prevailing price, and go on about your business?  Because you’re not stupid.  To put in a limit order below the market price, and then walk away, would be to give the rest of the market a 100% free put option on the stock, at your expense.  If the stock goes higher, the gains will end up going to someone else.  If the stock goes lower, you will end up buying it, and will get stuck with the losses.  Why would you agree to let someone else have the gains on an investment, while you take the losses?

But wait.  There was a limit order to sell that was just sitting there–specifically, the order that you intentionally placed your buy price above, so as to guarantee an execution.  Where did that order come from?  Who was the person that just stuck it in there, and walked away?  What was his thinking?

The person who put that order in, of course, was the market-maker–the person who does the work necessary to create a liquid market.  Market orders do not come into the system at the same time.  Someone therefore has to bridge them together.  That is what a market-maker does.  When no one else is around, he puts a bid order in to buy at some price, and an ask order in to sell at some price, slightly higher.  He then waits for those that want to trade now to come in and take his orders.  In each round-trip transaction, each buy followed by a sell, he earns a spread, the spread between his bid and ask.

The following equation quantifies his revenue:

Revenue = 1/2 * Volume * Spread

So that is his revenue.  What is his cost?

His first cost is the cost of doing the work necessary to find out where the right price for the market is.  To ensure that he earns the spread, he has to identify the balancing point for the market, the general level at which the buying flow and the selling flow will roughly match.  That is the level around which he has to place his orders, so that roughly the same number of people buy from him as sell to him.  If he fails to target that level correctly–i.e., if he places his bid and ask orders at levels where the flows will become imbalanced–he will end up accumulating unwanted inventory.


His second cost is the cost of being wrong on the correct market price.  If he is wrong, and ends up accumulating unwanted inventory, he will have to either hold that inventory indefinitely, or liquidate it, both of which will occur at a cost.  To mitigate the impact of that cost, the market-maker will have to do at least a minimal amount of work to assess the value of what he is offering to hold.  He has to know what it’s actually worth, in some fundamental sense, in case he gets stuck holding it.

For market-making to be profitable, the revenue earned has to exceed the costs. There will inevitably be risk involved–one can never be sure about where the balancing point in the market is, or about what the inventory being traded is actually worth from a long-term perspective, or about the magnitude of the losses that will have to be taken if it has to be liquidated.  The associated risk is a risk to the market-maker’s capital.  For market-making to be attractive, that risk requires a return, which ends up being the total round-trip volume times the spread minus the costs and the trading losses, divided by the amount of capital deployed in the operation.
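These economics can be sketched directly from the revenue equation above.  All of the figures below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Illustrative market-maker economics.  All inputs are hypothetical.
def market_maker_return(round_trip_volume, spread, research_costs,
                        trading_losses, capital):
    """Return on capital for a market-making operation.

    Revenue = 1/2 * Volume * Spread -- each round trip involves two
    executions (a buy and a sell), hence the factor of one half
    applied to total volume.
    """
    revenue = 0.5 * round_trip_volume * spread
    profit = revenue - research_costs - trading_losses
    return profit / capital

# Example: 1,000,000 shares of volume at a $0.02 bid-ask spread yields
# $10,000 of revenue; net of $3,000 of research costs and $2,000 of
# losses on unwanted inventory, $100,000 of deployed capital earns 5%.
print(market_maker_return(1_000_000, 0.02, 3_000, 2_000, 100_000))  # 0.05
```

The return on capital is what has to be attractive, relative to the risk, for anyone to volunteer for the job.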

What Indexville needs is for an active investor to step up and become a market-maker for INDX, so that temporary imbalances in the quantities of buyers and sellers can be smoothed out.  The market-maker needs to figure out the patterns that define INDX transactions: When does there tend to be too much buying?  When does there tend to be too much selling?  If the market-maker can correctly answer those questions, he can bridge the excesses together, eliminating them through his trades.  That is what it means to “provide liquidity.”

Given that Indexville’s investors are passive and unresponsive to arbitrage, a potential market-maker won’t even have to worry about what the right price point for INDX shares is.  All that will matter is the future trajectory of the flow imbalances, which is entirely set by non-speculative preferences.  If the market-maker can accurately anticipate that trajectory, he will be able to set the price wherever he wants, earning as much profit as he wants, without having to unbalance his own holdings.

Will Indexville’s investors get bilked in this arrangement?  No.  If the market-maker imposes an unreasonably high spread–only offering to buy shares, for example, at an 8% implied return, and only offering to sell them, for example, at a 4% implied return–then other potential active investors will take note of his outsized profits, and enter the fray to compete with him, becoming market-makers themselves.  They will jump in front of his bids and asks, forcing him to either compress his spread, or let them get the executions. They will also force him to do more research, because they will now be there to arbitrage his mistakes.  Ultimately, the revenue that he is able to earn, and that the Indexville investment community will have to pay him, will converge on his cost of doing business, which is the cost of studying and understanding the market, figuring out what the underlying securities are actually worth, and risking the possibility of getting stuck with someone else’s illiquidity.

Notice that these are the same costs that, up to now, Indexville has had to pay separately, given that it has no active investors. When active investors enter to make a market, the costs no longer need to be paid separately–the active investors can provide the associated services, with the rest of the market compensating them in the form of market-making profits.

Enter Warren Buffett

In February of this year, Druce Vertes of StreetEye wrote an exceptionally lucid and insightful piece on the topic of Active Management.  In it, he asked the following question: what would happen if everyone in the market except Warren Buffett were to go passive? He showed, correctly, that Buffett could still beat the market.

In the scenario, when the passive segment of the market wants to sell, Buffett is the only potential buyer.  Buffett is therefore able to demand a low price, a discount to fair value.  When the passive segment wants to buy, Buffett is again the only potential seller.  He is therefore able to demand a high price, a premium to fair value.  On each go-round, each shift from passive net selling to passive net buying, Buffett buys significantly below fair value, and sells significantly above it, generating the superior returns that he is famous for.

What is Buffett doing in this example? He is earning the outsized revenues that inevitably come when you provide liquidity to a market that needs it. Those revenues easily cover the cost of providing that liquidity, which is the cost of understanding the psychology of the passive investors, so as to anticipate their net flows, and also the cost of determining the fair value of the underlying securities, to know what prices he can prudently pay for them, in case he gets stuck holding them.
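The arithmetic of each round trip can be sketched as follows.  The 10% discount and premium are made-up numbers for illustration, not estimates of the actual concessions Buffett could extract:

```python
# Hypothetical liquidity-provision round trip: buy from passive sellers
# at a discount to fair value, then sell to passive buyers at a premium.
def round_trip_return(fair_value, discount, premium):
    buy_price = fair_value * (1 - discount)    # e.g., buy at 90
    sell_price = fair_value * (1 + premium)    # e.g., sell at 110
    return sell_price / buy_price - 1

# A 10% discount on the buy and a 10% premium on the sell earn roughly
# 22% per cycle, before any change in fair value itself.
print(round(round_trip_return(100.0, 0.10, 0.10), 4))  # 0.2222
```

The wider the concessions the passive segment is forced to accept, the larger the return per cycle, which is why a sole liquidity provider in Druce's scenario does so well.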

What Indexville needs, then, is a Warren Buffett.  But how profitable can Buffett really be in Indexville?  The passive investors in Druce’s example appear to be dumb money investors that simply want to time the market, with Buffett as their counterparty.  Buffett, of course, is easily able to smoke them.  In Indexville, however, things aren’t going to be anywhere near as easy.  The only flows that Buffett will have at his disposal will be the minimal flows associated with a passive ethos.  Investors in Indexville buy in for one reason and one reason alone: because they have excess savings to invest.  They sell out, again, for one reason and one reason alone: because they’re at a point in their lives where they need the funds.  There is no speculation, no market timing, no giving away free money to Warren Buffett.  The profits that Buffett will be able to earn will therefore drop significantly.  And the economy as a whole will be better off for it, because a massive amount of zero-sum friction will have been removed, released for productive use elsewhere.

The Value of Active Management

In the previous two pieces, we expressed the Law of Conservation of Alpha, attributable to William Sharpe:

The Law of Conservation of Alpha: Before fees, the average performance of the investors that make up the active segment of a market will always equal the average performance of the investors that make up the passive segment.

This law does not apply to cases where we define “passive” in a way that allows the passive segment to trade with the active segment.  To account for such cases, we need a new law:

The Law of Conservation of Alpha, With Liquidity: Before fees, the average performance of the investors that make up the active segment of a market will exceed the average performance of the investors that make up the passive segment, by an amount equal to the market-making profits that the active segment earns in providing liquidity to the passive segment.
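The modified law can be checked with a toy accounting identity.  The segment sizes and the market-making transfer below are invented numbers; the point is only that the transfer drives a wedge between the two segments' averages while the asset-weighted whole still earns the market return:

```python
# Toy check: active outperforms passive by the market-making transfer,
# while the asset-weighted total still equals the market return.
market_return = 0.06     # return on the total market
passive_assets = 90.0    # hypothetical segment sizes
active_assets = 10.0
mm_transfer = 0.2        # market-making profit paid by passive to active

passive_return = market_return - mm_transfer / passive_assets
active_return = market_return + mm_transfer / active_assets

weighted = (passive_assets * passive_return +
            active_assets * active_return) / (passive_assets + active_assets)
# The transfer nets to zero across the whole market (weighted == 0.06),
# but the active segment's average now exceeds the passive segment's.
```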

With this law in hand, we are now in a position to answer the four questions posed at the outset.

First, what necessary functions does active management perform in a financial system?

Answer: The function of providing liquidity to the market, specifically to the passive segment, and to companies seeking funding.  In seeking to fulfill that function profitably, the active segment ends up providing the other functions of ensuring that new investment capital is correctly allocated and that existing assets get correctly priced, functions that Indexville had to pay for separately, because it lacked an active segment.

The provision of this liquidity obviously comes at a cost, the cost of doing the work necessary to know how the flows of the passive segment are likely to unfold, and also the work necessary to ascertain the underlying value of the existing securities being traded and the new investment opportunities being pursued.

Second, what is the optimal amount of active management to have in a financial system, to ensure that those functions are carried out?

Answer: The amount needed to ensure that adequate liquidity is provided to the market, specifically to the passive segment of the market and to companies seeking funding, assuming that the passive segment is not structured to provide that funding (as it was in Indexville).  If there is adequate liquidity, prices will tend to be correct; if they are incorrect, those participants that are causing them to be incorrect will be exposed to arbitrage by other participants, which will render their liquidity provision unprofitable.

At this point it’s important to note that self-described “passive” investors are themselves the ones that determine how much active management will be needed.  If the segment of the market that is proudly calling itself “passive” right now is only going to embrace “passivity” while things are good, morphing itself into an active, speculative, market-timing operation as soon as things turn sour, then a large number of active investors will be needed to come bail it out.  As a group, those active investors will get the opportunity to earn their fees and outperform.  But if investors come to their senses, and take a long view similar to the one embraced in Indexville, then the market’s liquidity needs will be minimal, and a market with a small active segment will be able to function just fine.

Third, if the size of the market’s active share falls below the optimal level, how will we know?

Answer: The market will show signs of not having enough liquidity.

If there is an insufficient amount of active management in the system, then whenever the passive segment introduces a net flow, prices will take on wild swings, and will become substantially disconnected from fair value.  The problem, however, will be self-correcting, because there will then be a greater financial incentive for investors to go active and take advantage of the lucrative reward being offered for providing liquidity.  The active segment as a whole will see stronger performance.

Fourth, how much can active managers, as a group, reasonably charge for their services, without acting as a net drag on the system?

Answer: An amount equal to the excess profit that they are able to earn in providing liquidity to the market, specifically to passive investors and companies seeking funding.

The active segment needs to earn its fees.  If it can’t earn its fees, then there will always be an arbitrage opportunity putting pressure on its size.  Investors will be able to get higher returns by going passive, and that’s what they’ll do.

The only way the active segment, as a group, can earn its fees, is by providing liquidity to the market, specifically to the passive segment, and to corporations seeking new funding. Of course, the active segment can trade with itself, attempting to arbitrage its own mistakes in that effort, but for every winner there will be a loser, and the net profit for the group will be nothing.  Any fees charged in the ensuing cannibalization will represent a loss to investors relative to the alternative–which is to simply go passive.

In terms of the larger benefit to society, the cannibalization may cause prices to become more “correct”, but how “correct” do they need to be?  And why does their “correctness” require such an enormous and expensive struggle?  Recall that in Indexville we were able to buy a decent substitute for the struggle–a 20,000-person army of highly talented analysts valuing the market–for a tiny fraction of the cost.

If we shrink down the size of the active segment of the market, it becomes easier for that segment to earn its fees through the liquidity that it provides–which it has to do in order to collapse the arbitrage, particularly if its fees are going to remain high as a percentage of its assets.  That is the path to an equilibrium in which active and passive generate the same return, net of fees, and in which there is no longer a reason to prefer one over the other.

Summary: Putting Everything Together

The points that I’ve tried to express in this piece are complex and difficult to clearly convey.  So I’m going to summarize them in bullet points below:

  • The right size of the active segment of the market is the size that allows it to earn whatever fees it charges.  If it cannot earn those fees, then there will always be an arbitrage opportunity putting pressure on its size.  Investors will be able to get higher returns by going passive, and so that’s what they’ll do.
  • The only way the active segment, as a group, can earn its fees, is by providing liquidity to the market, specifically to the passive segment, and to corporations seeking new funding.  All other profits for the segment will be zero-sum.
  • The question that determines the necessary amount of active management is this: how much natural passive flow is there in the system, i.e., non-speculative flow associated with people wanting to save excess income, or wanting to consume out of savings, or wanting to alter their risk allocations, given changes in their ages?  Similarly, what is the natural corporate demand for funding?  That flow, and that demand for funding, represents active management’s potential market for the provision of liquidity.  Relative to the actual volume that takes place in the current market, it is minimal.
  • If the investing ethos evolves such that market participants become decreasingly inclined towards active behaviors–i.e., if they learn their lessons, and stop making trades for speculative reasons, and instead take the sage advice of a Bogle, or a Malkiel, or a Fama to limit their interventions to cases where they have legitimate non-speculative reasons for modifying their portfolios–then the active segment will have to shrink dramatically from its current size.  Either that, or it will have to shrink its fees.
  • At some point in the above scenario–which, to be clear, is the Indexville scenario–the system will get down to some bare-bones minimum level of active management, a level at which active managers, as a group, will be small enough to earn excess profits that are sufficient to cover the percentage-based fees they charge.  Where is that minimum level?  In my view, far away from the current level, at least at current fee rates.  If I had to guess, I would say somewhere in the range of 5% active, 95% passive.  If we include private equity in the active universe, then maybe the number is higher–say, 10% active, 90% passive.
  • On the other hand, if we assume that the trend towards “passive investing” is simply about investors in the current bull market falling in love with $SPY or $VFIAX given the recent performance of U.S. equities, rather than about investors actually committing to the advice of a Bogle, or a Malkiel, or a Fama–then the future liquidity that those investors will consume when they change their minds will provide opportunities for what we’re calling the “active segment” to step in and earn its fees.  But this point will be purely definitional, because the investors we are calling “passive”, though technically passive relative to the S&P 500, will not have been genuinely passive at all, in terms of the spirit of what that term means.  They will have been active investors operating in disguise, who just so happened to have been using index-based instruments in their efforts.

The Impact of Index Investing: A Follow-Up

The prior piece received a much stronger reaction than I expected.  The topic is complicated, with ideas that are difficult to adequately convey in words, so I’m going to use this piece as a follow-up.  I’m going to look at the specific example of the Tech Bubble, which is a clear case in which the broad adoption of a passive approach would have removed negative skill from the market and substantially increased market efficiency.  I’m also going to offer some thoughts on the topics of differential liquidity, niche ETFs, and the drivers of inefficiencies in the area of asset allocation.

To begin, let me offer a brief comment on the topic of market efficiency.  Recall the definition:

Efficiency: The extent to which all securities in a market are priced to offer the same expected risk-adjusted future returns.

In this definition, we have to emphasize the term “expected” because returns have a random component, and investors can only evaluate returns based on information that is available to them at the time.  We also have to emphasize the term “risk-adjusted”, particularly when we try to talk about “efficiency” in terms of the relative pricing of securities that have different risk profiles–think: equities, bonds, cash.  We don’t expect an efficient market to price these asset classes for the same return, but we do expect any differences in return to be justified by other differences in the securities: differences in liquidity, volatility, loss potential, and so on.

Sometimes, you will see the term “efficiency” defined in terms of the relationship between a security’s price and its future cash flows.  For example:

Efficiency: The extent to which a security’s price matches the discounted sum of its future cash flows.

But this definition is not helpful, as the discount rate is not specified.  Any cash-producing security can be described as “efficiently” priced if we get to “make up” our own discount rates from nothing.  Now, to improve on the definition, we have the option of imposing the constraint that discount rates be applied consistently across different securities.  But then the definition reduces to the original definition, because a “discount rate” is nothing more than a required rate of return for an investor.  When we require that all cash flows in a space be discounted using the same rate, we are requiring that all securities offer the same rate of return, which is exactly where we started with the original definition.
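The circularity is easy to demonstrate.  For a toy level perpetuity, price equals cash flow divided by the discount rate, so any observed price can be “rationalized” by simply solving for the implied rate (the numbers below are arbitrary):

```python
# Any price looks "efficient" if the discount rate is free to be made up.
def implied_discount_rate(price, annual_cash_flow):
    # For a level perpetuity: price = cash_flow / r  =>  r = cash_flow / price
    return annual_cash_flow / price

# The same $5/year stream is "correctly" priced at $50 (r = 10%)
# and at $250 (r = 2%).  Without a constraint tying discount rates
# together across securities, the definition rules nothing out.
print(implied_discount_rate(50, 5), implied_discount_rate(250, 5))  # 0.1 0.02
```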

The original definition of efficiency, which is the definition proposed by the person that invented the concept, is the best one for current purposes, because it directly connects “efficiency” with the concept of it being impossible to outperform in a space through active security selection.  Pick whatever securities you want in a space.  If the space is efficient, you will not be able to outperform, because every security you might pick will be offering the same return, adjusted for risk, as every other security.

Now, in the piece, I tried to make the following argument:

  • Passive investing increases the efficiency of the space in which it is being used.

I emphasize that last part because it’s crucial.  Passive investing does not increase the efficiency of spaces in which it is not being used, nor should we expect it to.  To use an example, investors might be bullish on U.S. equities relative to other asset classes.  They might express that bullishness using $SPY.  But the passivity of $SPY is incidental to what they’re doing.  They are expressing an active preference–a preference to overweight large cap U.S. equities and underweight everything else–using a passive vehicle.  The end result, a situation in which large cap U.S. equities are overvalued relative to everything else, would have been no different if they had expressed that same preference using an active vehicle–say, Fidelity Magellan fund.  Nor would it have been any different if they had simply gone into the space of large cap U.S. equities and made the picks themselves, based on whatever they happened to like.  The cause of the inefficiency has nothing to do with the passive or active nature of the vehicle used, and everything to do with the underlying bullish view that is driving its use–a view that is not passive, and that we are assuming is also not justified.

Now, the primary difference between using a passive vehicle and an active vehicle to express a view lies in the relative impact on the names inside the space that the vehicle is being used in.  The passive vehicle will tend to have a minimal relative impact, because it is buying a small amount of everything in the space, in proportion to the amount in existence.  The active vehicle will have a larger relative impact, because it is buying only what the manager–or the stock-picker–likes in the space.

To make the point maximally precise, suppose that the space in question is the space of [All U.S. Equities].  Note that when I put the term in brackets, I mean to refer to it as a space, a collection of securities:

[All U.S. Equities] = [$AAPL, $MSFT, $XOM, $JNJ, $GE, $BRK-B, and so on…]

We’re comparing two different ways of investing in [All U.S. Equities], active and passive. To invest actively in that space is to go in and buy whatever securities inside the space you happen to like, those that you think will perform the best.  To invest passively in the space is to own a little bit of everything in the space, in proportion to its supply, i.e., the “amount” of it in the space, i.e., its “weighting.”

When an unskilled or negatively skilled participant chooses to invest actively in [All U.S. Equities], he reduces the efficiency of the space.  He makes bad relative picks.  In doing so, he creates opportunities for other participants, those that do have skill, to make good relative picks at his expense, as his counterparties.

To illustrate using the momentum factor, suppose that we have an unskilled or negatively skilled participant whose strategy is to buy those stocks that have gone down the most over the last year.

“When I invest, I keep it simple!  I look for whatever is the most on sale!  The biggest and best bargains!”

This participant has no understanding of the fundamentals of the companies he ends up buying, no understanding of the attractiveness or unattractiveness of their valuations as businesses, and no knowledge of the momentum patterns that markets have historically tended to exhibit on 1 year lookback periods.  All he knows is the price–the more it has fallen, the more he wants to buy.  Obviously, this participant is going to create opportunities for skilled participants–those that are familiar with the fundamentals and valuations of the companies being traded, and that are aware of the market’s historical momentum patterns–to outperform at his expense.

To be fair, maybe you don’t believe that there is such a thing as “skill” in a market.  If that’s the case, then ignore what I’m saying.  You will already be an advocate of passive investing–anything else will be a waste of time and money on your view.  The argument, then, is not directed at you.

Now, suppose that Jack Bogle, and Burton Malkiel, and Eugene Fama, and the Behavioral Economists, and the robo-advisors, and so on, manage to convince this participant that he should not be investing in [All U.S. Equities] by making picks in the space himself–either of individual stocks or of expensive managers–but should instead be investing in the space passively–for example, by buying $VTI, the Vanguard Total U.S. Equity Market ETF.   To the extent that he takes their advice and does that, he will no longer be introducing his flawed stock-picking strategy into the space, where other investors are going to exploit it at his expense.  The efficiency of the space–i.e., the difficulty of beating it through active security selection–will therefore increase.

Unfortunately, this is the place where things get muddled.  If we’re talking about the space of [All U.S. Equities], then passive investing, to the extent that it is preferentially used by lower-skilled participants in that space, will increase the space’s efficiency.  But, apart from that increase, it will not increase the efficiency of other spaces–for example, the broader space of [All U.S. Securities], where:

[All U.S. Securities] = [U.S. Equities, U.S. Bonds, U.S. Cash].

To illustrate, suppose that investors get wildly bullish on U.S. equities, and try to allocate their entire portfolios into the space, all at the same time.  Suppose further that the lower-skilled participants of this lot, in making the move, choose to heed the advice of the experts.  They shun the option of picking winners themselves, and instead buy into a broad equity index fund–again, $VTI.  Their approach will be passive relative to the space of [All U.S. Equities], but it will not be passive relative to the space of [All U.S. Securities]. Relative to [All U.S. Securities], it will, in fact, be extremely active.

If lower-skilled participants express their demand for U.S. equities by buying $VTI rather than picking stocks in the space themselves, they will have eliminated the market inefficiencies that their picks inside that space would otherwise have introduced. Consequently, the efficiency of the space of [All U.S. Equities] will increase.

But, and this is the crucial point: beyond that increase, the space of [All U.S. Securities] will not become more efficient. That’s because the approach of putting an entire portfolio into $VTI is not a passive approach relative to [All U.S. Securities].  Relative to that space, the approach is just as active as the approach of putting an entire portfolio into Fidelity Magellan, or an entire portfolio into [$FB, $AMZN, $NFLX, $GOOG] “because I buy what I know.” U.S. Equities will still get expensive in comparison with other asset classes, because a preferential bid is still being placed on them.  The fact that a passive vehicle, rather than an active one, is being used to place the bid is not going to change that result.

Now, a great example to use to concretely convey this point is the example of the market of the late 1990s.  Hopefully, we can all agree that the market of the late 1990s had two glaring inefficiencies:

  • Inefficiency #1: In the space of [All U.S. Equities], some stocks–for example, new-economy technology stocks–were wildly expensive, well beyond what a rational analysis could ever hope to justify.  Other stocks–for example, stocks with ties to the old-economy–were reasonably valued, especially in the small cap space.  You could have outperformed the market by shorting the former and buying the latter, and you could have known that at the time, assuming you weren’t drinking the kool-aid. Therefore, the market was inefficient.
  • Inefficiency #2: In the space of [All U.S. Securities], which again consists of [All U.S. Equities, All U.S. Bonds, All U.S. Cash], the U.S. Equities component, as a group, was much more expensive than the other two asset classes, priced for much lower returns, especially after adjusting for the dramatically higher levels of risk.  Again, you could have outperformed the market by holding cash and bonds, shunning equities, and you could have known that at the time, assuming you weren’t drinking the kool-aid.  Therefore, the market was inefficient.

Recall that back then, passive investing wasn’t even close to being a “thing.”  In fact, it was frowned upon:

“Vanguard?  What?  Why would you want to dilute your best holdings with companies that don’t have the potential to grow in the new economy?  Doing that will significantly reduce your returns.”

“Yes, you need to diversify.  But there’s a way to do it without sacrificing growth. What you need to do is this: mix your more established growth names–e.g., your Microsofts, your Intels, your Ciscos, your Yahoos, basically, the reliable blue chip stuff–with higher-risk bets–something like an Astroturf.com, or an Iomega, or a JDS Uniphase, that will offer high-powered upside if things work out for those companies. That will give your portfolio balance, without sacrificing growth.  And if you need someone to help you set that up, we can certainly help you do that.”


Since then, passive investing has become significantly more popular–so much so that a number of top investors have publicly described it as a “bubble” unto itself.  And to be fair to those investors, the increased popularity does have certain bubble-like characteristics. Gurus on TV advocating it?  Check.  New companies springing up to exploit it? Check. Rapid increase in AUM?  Check.

Now, suppose that passive investing had been as popular in 1999 as it is today.  Instead of expressing their cyclical equity bullishness by utilizing their own stock-picking expertise–or their expertise in picking stock-pickers, which was just as flawed, evidenced by the fact that everyone was piling into the same unskilled Technology managers–suppose that investors had simply used a broad market index fund: say, $VTI.  What would the impact have been?

With respect to inefficiency #2–i.e., the inefficiency that saw the U.S. equity market wildly overvalued relative to other asset classes–that inefficiency might not have been ameliorated.  Bullishness for an asset class tends to lead to overvaluation, regardless of the vehicles used to express it, whether passive ($VTI) or active (Fidelity’s top-performing fund, or you going into your E-Trade account and buying a stock that your friend at work just made a bunch of money on).

But inefficiency #1–the inefficiency that saw certain parts of the U.S. equity market trade at obnoxious prices, while other parts traded quite reasonably–would definitely have been less severe.  It would have been less severe because the segment of the market that was doing the most to feed it–the lower-skilled retail segment that, in fairness, didn’t know any better–wouldn’t have been playing that game, or feeding that game by picking unskilled managers who were playing it.

This example, in my view, is a perfect example of how the widespread adoption of a passive approach can make a market more efficient.  Granted, it can only do that in the specific space in which it is being used.  Right now, it’s not being appreciably used in the broad space of [All U.S. Securities] or [All Global Securities].  But it is being used in the space of [All U.S. Equities].  We can debate the degree–and I acknowledge that the effect may be small–but I would argue to you that the use has made the space more efficient, even as we speak.

If you disagree, then do this.  Go find your favorite value investor, and ask him what his biggest gripe right now is.  This is what he will tell you:

“The problem with this market is that everything is overvalued.  Everything!  In 1999, I was able to hide out in certain places.  I was able to find bargains, especially on a relative basis.  Right now, I can’t find anything.  The closest thing to an opportunity would be the energy sector, but that sector is only cheap on the assumption of a rebound in energy prices.  If we assume that prices are not going to rebound, and are going to stay where they are for the long-term, then the typical energy name is just as expensive as everything else in the market.  Yuck!”

Here’s a question: what if the widespread embrace of a passive approach with respect to U.S. equities in the current cycle is part of the reason for the broadness of the market’s expensiveness?  Personally, I don’t know if that’s part of the reason.  But it certainly could be.

Now, before you point to that possibility as evidence of the harm of passive investing, remember the definition of efficiency:

Efficiency: The extent to which all securities in a market are priced to offer the same expected risk-adjusted future returns.

On the above definition of “efficiency”, an expensive market that gives investors no way to escape the expensiveness–e.g., the 2013-2016 market–is actually more efficient than an expensive market with large pockets of untouched value–e.g., the 1999 market.  If the increased popularity of passive investing has, in fact, reduced the dispersion of valuation across the U.S. equity space–the type of dispersion that lower-skilled participants would be more inclined to give support to–then the outcome is consistent with the expectation: passive investing has caused the U.S. equity market to become more efficient, harder to beat through active security selection.  The suckers have increasingly taken themselves out of the game, or have been increasingly forced out of the game, leaving the experts to fight among themselves.

A number of investors seem to want to blame passive investing for the fact that U.S. equities are currently expensive.  Doing that, in my view, is a mistake.  The expensiveness of U.S. equities as an asset class is not a new condition.  It’s a condition that’s been in place for more than two decades–long before passive investing became a “thing.”  If you want to blame it on something, blame it on Fed policy.  Or better yet, blame it on the causes of the low-inflation, low-growth economic condition that has forced the Fed to be accommodative, whatever you believe those causes to be.  The Fed has responded entirely appropriately to them.

It’s hard to blame the current market’s expensiveness entirely on the Fed, because there was a long period in U.S. history when the Fed was just as easy as it is today–specifically, the period from the early 1930s through the early 1950s.  In that period, equities did not become expensive–in fact, for the majority of the period, they were quite cheap.  Of course, that period had an ugly crash that damaged the investor psyche, at least in the earlier years.  But the same is true of the current period.  It has had two crashes, one with a full-blown crisis.  Yet our market hasn’t had any problem sustaining its expensiveness.

If we had to identify an additional factor behind the market’s current expensiveness, one factor worth looking at would be the democratization of investing.  Over the last several decades, investing in the market has become much cheaper in terms of the transaction costs, and also much easier in terms of the hoops that the average person has to jump through in order to do it.  It makes sense that this development would increase participation in the equity markets, and therefore increase prices.

Does it make sense to hold cash at an expected future real rate of, say, 0%, when you can invest in equities and earn the historical real rate of 6%?  Clearly not.  In a period such as the late 1940s, the barriers to having a broad section of the population try to arbitrage away that differential may have been large enough to keep the differential in place.  But today, with all of the technological innovation that has since occurred, the barriers may no longer be adequate.  A 6% equity risk premium may simply be too high, which would explain why it has come down and stayed down.  To be honest, I’m not fully convinced of this line of reasoning, but it’s worth considering.

Returning to the current discussion, it’s important not to confuse the democratization of investing–the development of broad online access to investment options at negligible transaction costs–with the increased popularity of passive approaches.  These are fundamentally different phenomena.

It may be the case that in the current cycle, investors are using the familiar passive vehicles–$SPY and $VFIAX–to drive up the market.  But so what?  The passivity of the vehicles is not what is fueling the move.  What’s fueling the move is the bullish investor sentiment.  If, instead of being sold on the wisdom of a passive approach, investors were skeptical of it, then they would simply express their bullishness actively, piling into popular active mutual funds, or picking stocks themselves–which is exactly what they did in 1999, long before the average investor even knew what $SPY was.

Before concluding, let me briefly address a few caveats:

First, liquidity.  Passive vehicles are not sensitive to differentials in liquidity.  So, to the extent that those differentials exist across a space, widespread use of passive vehicles in a space can create exploitable distortions.

To give an example, if a micro-cap ETF sees net inflows, it’s going to use those net inflows to make purchases in the space.  Given its passive approach, the specific purchases will be spread across the space, regardless of the liquidity differences in the individual names.  So, if there is a part of the space that is highly illiquid, prices in that part will get pushed up more than in other parts, creating a potential inefficiency.

If net flows go back and forth into the micro-cap ETF, it will engage in costly round-trips in the illiquid security, thereby losing money to active participants in the market–specifically, those that are making a market in the illiquid security.  In contrast with a passive ETF, an active ETF or mutual fund can avoid this outcome because it is able to direct its purchases towards liquid areas of a market, or at least factor the cost of illiquidity into its evaluations of potential opportunities.

As a rule, passive vehicles generally tend to be less “disruptive” in their purchases than active vehicles because they buy securities in proportion to the supply of shares outstanding.  In a sense, the demand they inject is intrinsically tethered to the available supply.  One caveat noted by @SmallCapLS, however, is that the supply that actually matters–the floated supply in the market–may be different from the total shares outstanding, which is what determines the weighting.  In situations where the float is tiny and the outstanding share count very large, passive vehicles have the potential to be more disruptive.

The market impacts of these potential liquidity-related inefficiencies are minimal.  Not worth worrying about, especially for those investors that are not involved in illiquid spaces.

Second, niche ETFs.  No one has raised this specific objection, but after thinking the issue through, it would be hard to deny that certain highly niche ETFs–for example, the proposed 3-D printing ETF $PRNT–have the potential to create distortions, if only by giving inexperienced investors bad ideas, i.e., suckering them into making bets on spaces that they should not be making bets on.  But again, we shouldn’t pin the distortions onto the passivity of the funds.  With respect to 3D printing, for example, there’s an active mutual fund that offers (or offered) the exact same exposure: $TDPNX, the 3D Printing, Robotics, and Tech Investment fund.  If you look at the active mutual fund graveyard of the last 20 years, you will find plenty of the same types of funds, each of which managed to pull investors into a fad at exactly the wrong time.  It’s not a passive vs. active thing, but a feature of the business–gather assets, in whatever way works.

Third, when I said that nobody is using a passive approach relative to the broader space of global securities–i.e., all foreign and domestic equities, all foreign and domestic bonds, all foreign and domestic cash, owning each passively, in proportion to its outstanding supply–I did not mean to say that this could never be done.  It can definitely be done–there just isn’t a demand to do it.

Consider a hypothetical ETF–we’ll call it $GLOBE.  $GLOBE holds every liquid financial asset in existence, including currencies, in relative proportion to the supply in existence. Right now, investors don’t have an easy $GLOBE option–the closest option would be a robo-advisor.  The difference between $GLOBE and a robo-advisor, however, is that the robo-advisor uses a predefined allocation–e.g., 60% stocks, 40% bonds/cash–as opposed to a “market” allocation that is based on the outstanding supplies of the different asset classes.

Using a pre-defined allocation can create inefficiencies in cases where the supply of one of the asset classes is naturally tighter than the other.  When everyone tries to allocate in such a configuration, the tighter asset class gets pushed up in relative price and valuation, resulting in a potential inefficiency.  The dynamics of this process were discussed in more detail in a prior piece.

Returning to $GLOBE, in my view, a liquid fund in the mold of $GLOBE, if it caught on with the lower-skilled segment of the market, would make the market more efficient, not less.  Unskilled investors would be able to use the vehicle to remove their unskilled timing and rotation decisions from the market fray, reducing the opportunities of skilled investors to outperform at their expense.  In truth, the effect on broad market efficiency would probably be very small, if perceptible at all.  Regardless, markets would not be made any less efficient by it, as opponents of passive approaches want to suggest.

As markets continue to develop and adapt going forward, the investment area that I suspect will be the most likely to continue to present inefficiencies is the area of asset allocation.  Asset allocation is an area where a passive stance is the most difficult to fully embrace.  The temptation to try to “time” asset classes and “rotate” through them in profitable ways can be quite difficult to resist, particularly when conditions turn south. Unlike the game of individual stock-picking, lower-skilled investors are not likely to want to voluntarily take themselves out of that broader game–in fact, of all market participants, they may be the least inclined to want to do so.

Decisions on how to allocate among broad asset classes–equities, fixed income, and especially cash–are among the least professionally-mediated investment decisions that investors make.  If you want to invest in U.S. equities, there are tons of products available to help you.  These products eliminate the need for uninformed participants to make relative bets inside that space.

But the decision of what asset classes to invest in more broadly, or whether and when to invest at all–whether to just keep the money in the bank, and for how long–is a decision that is much more difficult to offload onto someone else, or onto a systematic process.  Even if an advisor is used, the client still has to make the decision to get started, and then the decision to stick with the plan.  “The stock market is doing really well, at all time highs.  But I’m not invested.  Should I be?”  “This isn’t working.  Our stuff is really down a lot.  Are we in the right setup here?”  Naturally, we should expect a space in which unskilled investors are forced to make these decisions for themselves–without experts or indices to rely on–to exhibit greater inefficiency, and greater opportunity.


Index Investing Makes Markets and Economies More Efficient

U.S. equity index funds have grown dramatically in recent decades, from a negligible $500MM in assets in the early 1980s to a staggering $4T today.  The consensus view in the investment community is that this growth is unsustainable. Indexing, after all, is a form of free-riding, and a market can only support so many free-riders.  Someone has to do the fundamental work of studying securities in order to buy and sell them based on what they’re worth, otherwise prices won’t stay at correct levels.  If too many investors opt out of that work, because they’ve discovered the apparent “free lunch” of a passive approach, active managers will find themselves in an increasingly mispriced market, with greater opportunities to outperform. These opportunities will attract funds back into the actively-managed space, reversing the trend. Or so the argument goes.

In this piece, I’m going to challenge the consensus view.  I’m going to argue that the trend towards passive management is not only sustainable, but that it actually increases the accuracy of market prices.  It does so by preferentially removing lower-skilled investors from the market fray, thus increasing the average skill level of those investors that remain. It also makes economies more efficient, because it reduces the labor and capital input used in the process of price discovery, without appreciably impairing the price signal.

The Irrelevance of Passive Share: A Concrete Example

There’s an important logical rule that often gets missed, or at least misunderstood, in discussions on the merits of active and passive management.  The best way to illustrate that rule is with a concrete example.  Consider a hypothetical market consisting of 100 different individual participants.  Each participant begins the scenario with an initial portfolio consisting of the following 5 positions:

  • 100 shares of Facebook ($FB)
  • 100 shares of Amazon ($AMZN)
  • 100 shares of Netflix ($NFLX)
  • 100 shares of Google ($GOOG)
  • $100,000 in cash.

We assume that these financial assets represent all financial assets in existence, and that the market is closed, meaning that new financial assets cannot enter or be created, and that existing financial assets cannot leave or be destroyed.  With each participant in possession of her initial portfolio, we open the market and allow trading to take place.


Now, in the first scenario, we assume that 10 out of the 100 participants choose to take a passive approach.  These participants recognize that they lack the skill necessary to add value through trading, so they opt to simply hold their initial portfolios as received.  The other 90 participants choose to take an active approach.  They conduct extensive fundamental research on the four underlying companies, and attempt to buy and sell shares based on their assessments of value.  Naturally, their assessments change as new information is received, and so the prices change.

Suppose that we mark the price of each security to market on December 31, 2016 and then again on December 31, 2017.


(Note: these prices do not represent predictions or investment advice)

We use the prices to determine the aggregate and average returns of the passive and active segments of the market.  The result:


As the table shows, the returns of the active and passive segments of the market end up being identical, equal to 7.7%.  The reason the returns end up being identical is that the two segments are holding identical portfolios, i.e., portfolios with the same percentage allocations to each security.  The allocations were set up to be the same at the outset of the scenario, and remain identical throughout the period because the two segments do not trade with each other.  Indeed, the two segments cannot trade with each other, because the passive segment has decided to stay passive, i.e., to not trade.

To be clear, the individual members of the active segment do trade with each other.  But their trades are zero-sum–for every share that an active investor buys, some other active investor is selling those exact same shares, and vice-versa.  Consequently, the aggregate “position” of the active segment stays constant at all times.  Because that position was initially set to be equal to the aggregate position of the passive segment, it stays equal for the entire period, ensuring that the aggregate returns will be equal as well.

Now, the consensus view is that if too many investors try to free-ride on a passive strategy, the securities will become mispriced, creating opportunities for active investors to outperform.  To test this view, let’s push the scenario to an extreme.  Let’s assume that in the second scenario, 98 of the investors initially opt to remain passive, with only two opting to trade actively.


Will the decrease in the number of active investors–from 90 in the first scenario down to 2 in the second–make it any easier for those investors to outperform as a group?  Not at all. The two investors that make up the active segment are going to have to trade with each other–they won’t have anyone else to trade with.  Regardless of how they trade, their combined portfolios will remain identical in allocation to the combined portfolios of the 98 investors that are passively doing nothing.  The performances of the two segments will therefore remain the same, just as before.  This fact is confirmed in the table below, which shows both segments again earning 7.7%:
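The logic of the two scenarios can be verified with a short simulation.  Everything below is hypothetical: the start and end prices are invented (they are not the figures from the tables), and the active investors simply trade at random.  The structural point is that zero-sum trading leaves the active segment’s aggregate position, and hence its aggregate return, identical to the passive segment’s:

```python
import random

# Hypothetical market: every participant starts with the same portfolio.
START = {"FB": 100, "AMZN": 100, "NFLX": 100, "GOOG": 100, "CASH": 100_000}
# Invented start/end prices (per share; CASH is $1) -- not the article's figures.
P0 = {"FB": 115.0, "AMZN": 750.0, "NFLX": 125.0, "GOOG": 770.0, "CASH": 1.0}
P1 = {"FB": 130.0, "AMZN": 820.0, "NFLX": 140.0, "GOOG": 830.0, "CASH": 1.0}

def value(port, prices):
    return sum(q * prices[s] for s, q in port.items())

def segment_return(portfolios):
    v0 = sum(value(p, P0) for p in portfolios)
    v1 = sum(value(p, P1) for p in portfolios)
    return v1 / v0 - 1.0

def run(n_active, n_passive, n_trades=1000, seed=0):
    rng = random.Random(seed)
    active = [dict(START) for _ in range(n_active)]
    passive = [dict(START) for _ in range(n_passive)]
    for _ in range(n_trades):
        # Zero-sum trades: shares and cash just move between active investors,
        # so the active segment's aggregate position never changes.
        buyer, seller = rng.sample(range(n_active), 2)
        stock = rng.choice(["FB", "AMZN", "NFLX", "GOOG"])
        if active[seller][stock] >= 1 and active[buyer]["CASH"] >= P0[stock]:
            active[seller][stock] -= 1
            active[buyer][stock] += 1
            active[seller]["CASH"] += P0[stock]
            active[buyer]["CASH"] -= P0[stock]
    return segment_return(active), segment_return(passive)

# Whether 90 investors trade actively (scenario 1) or only 2 (scenario 2),
# the two segments' returns come out identical.
for n_active in (90, 2):
    active_ret, passive_ret = run(n_active, 100 - n_active)
    assert abs(active_ret - passive_ret) < 1e-12
```

The equality holds no matter how the trades are distributed, because trading only shuffles a fixed aggregate position among the active investors.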


The Law of Conservation of Alpha

The prior example illustrates a basic logical rule that governs the relationship between the aggregate performances of the active and passive segments of any market.  I’m going to call that rule “The Law of Conservation of Alpha.”  It’s attributable to Eugene Fama and William Sharpe, and can be formally stated as follows:

The Law of Conservation of Alpha (Aggregate): The aggregate performance of the active segment of a market will always equal the aggregate performance of the passive segment.

Importantly, this rule applies before frictions are taken out–management fees, transaction costs, bid-ask spread losses, taxes, and so on.  Because the passive segment of a market tends to experience less friction than the active segment, the passive segment will typically outperform the active segment once those frictions are accounted for.

Now, the aggregate return of a group and the (asset-weighted) average return of each member of that group are essentially the same number.  We can therefore reframe the rule in terms of averages:

The Law of Conservation of Alpha (Average): Before fees, the average performance of the investors that make up the active segment of a market will always equal the average performance of the investors that make up the passive segment.

The terms “active” and “passive” need to be defined.  An “active” allocation is defined as any allocation that is not passive.  A “passive” allocation is defined as follows:

Passive Allocation: An allocation that holds securities in relative proportion to the total number of units of those securities in existence.

Crucially, the term “passive” only has meaning relative to a specific universe of securities. The person using the term has to specify the securities that are included in that universe. In our earlier example, the universe was defined to include all securities in existence, which we hypothetically assumed to consist of $10MM in cash and 10,000 shares each of four U.S. companies–Facebook, Amazon, Netflix and Google.  In the real world, many more securities will exist than just these.  If we were to define the term “passive” to include all of them, then a “passive” portfolio would have to include all equity shares, all fixed income issues, all money in all currencies, all options, all warrants, all futures contracts, and so on–each in proportion to the total number of units outstanding.  And if we were to include non-financial assets in the definition, a “passive” portfolio would have to include all real estate, all precious metals, all art work, and so on–anything that has monetary value unrelated to present consumption.  The problem of determining the true composition of the “passive” segment of the market would become intractable, which is why we have to set limits on the universe of securities that we’re referring to when we speak of a “passive” approach.

To illustrate how we would build a passive portfolio, let’s focus on the universe of the 30 companies that make up the Dow Jones Industrial Average (“The Dow”).  To establish a passive allocation relative to that universe, we begin by quantifying the total number of existing shares of each company.  We then arbitrarily pick a reference company–in this case, 3M ($MMM)–and calculate the respective ratios between the number of shares outstanding of each company and that company, $MMM.   The ratios will determine the relative number of shares of each company that we will hold (see the right column below):


You can see in the chart that Home Depot ($HD) has 2.07 times as many shares outstanding as $MMM.  So, in our passive portfolio, we hold 2.07 times as many shares of $HD as $MMM.  And 6.79 times as many shares of Exxon-Mobil ($XOM).  And 13.07 times as many shares of Microsoft ($MSFT).  And so on.

Note that this formula doesn’t specify the absolute number of shares of each company that we would need to hold in order for our portfolio to be “passive.”  Rather, it specifies the relative number of shares of each company that we would need to hold.  In practice, we will choose the absolute number of shares that we hold based on the amount of wealth that we’re trying to invest in the passive strategy.
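As a sketch of the mechanics, here is how the ratio table could be computed.  The shares-outstanding figures below are made up, chosen only to roughly reproduce the ratios quoted above:

```python
# Hypothetical shares outstanding, in millions (illustrative, not actual data).
SHARES_OUT = {"MMM": 597, "HD": 1236, "XOM": 4054, "MSFT": 7803}

REFERENCE = "MMM"  # arbitrary reference company

# Relative holding ratio for each company: its shares outstanding divided by
# the reference company's shares outstanding.
ratios = {t: n / SHARES_OUT[REFERENCE] for t, n in SHARES_OUT.items()}

def passive_share_counts(reference_shares):
    """Absolute share counts for a passive portfolio that holds
    `reference_shares` shares of the reference company."""
    return {t: r * reference_shares for t, r in ratios.items()}

# Hold 10 shares of MMM -> hold ~20.7 of HD, ~67.9 of XOM, ~130.7 of MSFT.
portfolio = passive_share_counts(10)
```

Scaling `reference_shares` up or down changes the size of the portfolio, not its passivity; only the ratios matter.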

Now, mathematically, if we hold a share count of each security that is proportionate to the total number of shares in existence, we will also end up holding a market capitalization of each security–share count times price–that is proportionate to the total amount of market capitalization in existence.  This will be true regardless of price, as illustrated in the table below:


The table assumes you hold 1 share of $MMM, and therefore, by extension, 1.57 shares of American Express ($AXP), 8.93 shares of Apple ($AAPL), 1.07 shares of Boeing ($BA), and so on.  The prices of these securities determine both your allocation to them, and the aggregate market’s allocation to them, where “allocation” is understood in terms of market capitalization.  In the table above, the allocations–for both you and the market–end up being 1.89% to $MMM, 11.01% to $AAPL, 1.58% to $BA, and so on.

Obviously, prices will change over time.  The changes will cause changes in the aggregate market’s percentage allocation to each security.  But you won’t need to worry about those changes, because they will pass through to your allocation as well.  The price changes will affect your allocation in the exact same way that they affect the aggregate market’s allocation, ensuring that the two allocations remain identical in market capitalization terms–provided, of course, that you remain passive and refrain from trading.
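That invariance is easy to verify numerically.  The share counts and prices below are invented for illustration:

```python
# Hypothetical shares outstanding, in millions (illustrative figures only).
SHARES_OUT = {"MMM": 600, "AXP": 940, "AAPL": 5360, "BA": 640}

def weights(share_counts, prices):
    """Market-cap weights: share count times price, normalized to sum to 1."""
    caps = {t: share_counts[t] * prices[t] for t in share_counts}
    total = sum(caps.values())
    return {t: c / total for t, c in caps.items()}

# A passive investor holds one millionth of every company's float.
my_shares = {t: n * 1e-6 for t, n in SHARES_OUT.items()}

# At ANY set of prices -- before and after a hypothetical price move -- the
# investor's weights equal the aggregate market's weights, with no trading.
for prices in [{"MMM": 190, "AXP": 80, "AAPL": 140, "BA": 180},
               {"MMM": 150, "AXP": 95, "AAPL": 210, "BA": 120}]:
    mine = weights(my_shares, prices)
    market = weights(SHARES_OUT, prices)
    assert all(abs(mine[t] - market[t]) < 1e-12 for t in mine)
```

Because the investor’s share counts are a fixed multiple of the market’s, every price change rescales both sets of weights identically.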

A passive strategy is special because it is the only type of strategy that can remain in place in the presence of price changes without requiring investor action.  No other investment strategy is like that–value, momentum, pre-defined allocation, etc.–these strategies all require periodic “rebalancing” in order to preserve their defining characteristics.  A value portfolio, for example, has to sell cheap stocks that become expensive, and buy expensive stocks that become cheap, otherwise it will cease to meet the definition of “value.”  A momentum portfolio has to sell high-momentum stocks that lose momentum, and buy low-momentum stocks that gain momentum, otherwise it will cease to meet the definition of “momentum.”  A 60/40 stock/bond portfolio has to sell the asset class that appreciates the most, and buy the asset class that appreciates the least, otherwise it will cease to be a 60/40 portfolio, and will instead become a 63/37 portfolio, or a 57/43 portfolio, and so on. A passive portfolio, in contrast, can completely ignore any and all price changes that occur–it will remain “passively” allocated, i.e., allocated in proportion to the overall market supply, no matter what those changes happen to be.
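The 60/40 drift described above works out as follows, using a hypothetical year in which stocks return 15% and bonds are flat:

```python
# A $60/$40 stock/bond portfolio after a hypothetical year:
stocks, bonds = 60.0, 40.0
stocks *= 1.15            # stocks: +15%
bonds *= 1.00             # bonds: flat
total = stocks + bonds    # 109.0

stock_weight = 100 * stocks / total
print(round(stock_weight, 1))          # 63.3 -- no longer a 60/40 portfolio

# Restoring 60/40 requires selling appreciated stocks and buying bonds:
stocks_to_sell = stocks - 0.60 * total
print(round(stocks_to_sell, 1))        # 3.6
```

A passive portfolio never generates such a trade: its weights drift exactly in step with the market’s weights.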

Accounting Complications: A Solution

When we try to apply The Law of Conservation of Alpha to actual market conditions, we run into accounting complications.  For an illustration, consider the following sample of portfolios in the market:


How exactly would one go about applying the Law of Conservation of Alpha to these portfolios?  Recall the law itself:

The Law of Conservation of Alpha (Average): Before fees, the average performance of the investors that make up the active segment of a market will always equal the average performance of the investors that make up the passive segment.

Look closely at the U.S. Equity Indexer’s portfolio.  The single position is expressed passively through an S&P 500 mutual fund, $VFIAX.  We might therefore call it “passive.” But it clearly isn’t “passive” with respect to the overall universe of existing securities, as there are many more securities in the world than just the equities contained in the S&P 500.

Of course, if we limit our universe to S&P 500 equities, then the U.S. Equity Indexer’s portfolio is passive (relative to that universe).  According to the Law of Conservation of Alpha, the portfolio’s performance will match the average performance of active managers that play in that same universe, i.e., active managers that own stocks in the S&P 500.  The problem, of course, is that the investment community cannot be cleanly separated out into “active managers that own stocks in the S&P 500.”  Many of the managers that own stocks in the S&P 500 universe, such as the Value Investor shown above, also own securities that do not fall into that universe.  In aggregate, their portfolios will not be identical to the S&P 500 in terms of allocation, and therefore the performances will not match.

Fortunately, there’s a way around this problem.  If our universe of concern is the S&P 500, then, to apply The Law of Conservation of Alpha to that universe, we identify all portfolios in the market that hold positions in S&P 500 constituents, long or short.  Inside each of those portfolios, we group the positions together, bracketing them out as separate entities with their own separate performances–“S&P 500 component funds” whose returns can be evaluated independently.  To illustrate, the S&P 500 component funds for the above portfolios are boxed in the image below:


To be clear, what we’re doing here is taking the securities in each portfolio that fall inside the S&P 500, and conceptually bracketing them out as separate vehicles to be analyzed–separate S&P 500 component funds to be tracked.  Some of the S&P 500 component funds in the group–for example, that of the U.S. Equity Indexer who holds $VFIAX, and that of the Robo-Advisor who holds $SPY–are passively allocated relative to the S&P 500.  The allocations of those funds are identical to the allocation of the S&P 500, which is why we’ve boxed them in green.  In contrast, other S&P 500 component funds–for example, that of the Day Trader whose only S&P 500 holdings consist of Facebook, Amazon, Netflix, and Google, in equal market cap proportion–are actively allocated relative to the S&P 500.  The allocations of those funds actively deviate from the allocation of the S&P 500, which is why we’ve boxed them in red.

We can reframe the Law of Conservation of Alpha in terms of component funds as follows:

The Law of Conservation of Alpha (Average, Component-Basis): For a given universe of securities, the average performance, before fees, of all active component funds associated with that universe will always equal the average performance, before fees, of all passive component funds associated with that universe.

Put simply, what the law is saying is that if we properly identify and delineate the active and passive component funds of a given universe in every portfolio, summing the active together and the passive together, the respective allocations to each security of the active and passive sums will be equal at all times.  It follows that the aggregate returns of the active and passive sums will be equal, before fees.
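One way to sketch the bracketing-and-classifying procedure, with a made-up three-stock universe standing in for the S&P 500 and two hypothetical portfolios:

```python
# Hypothetical index universe: ticker -> shares outstanding (millions).
INDEX = {"AAPL": 5360, "MMM": 600, "BA": 640}

def component_fund(portfolio):
    """Bracket out the positions that fall inside the index universe."""
    return {t: q for t, q in portfolio.items() if t in INDEX}

def is_passive(component):
    """Passive iff it holds every index name in proportion to its supply."""
    if set(component) != set(INDEX):
        return False
    ratios = [component[t] / INDEX[t] for t in component]
    return max(ratios) - min(ratios) < 1e-12

indexer = {"AAPL": 53.6, "MMM": 6.0, "BA": 6.4, "TBILL": 100.0}  # index fund + cash
trader  = {"AAPL": 40.0, "MMM": 6.0}                             # concentrated bets

assert is_passive(component_fund(indexer))     # proportional -> passive
assert not is_passive(component_fund(trader))  # deviates -> active
```

The indexer’s T-bill position is ignored because it falls outside the universe; only the bracketed component fund is judged passive or active.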

The Impact of Net Fund Flows

Consider the Vanguard S&P 500 index fund, $VFIAX.  What happens when it receives a net cash inflow?  The answer: it ceases to be allocated passively relative to the S&P 500.  It ends up owning cash, a security not contained in the S&P 500.  To become passive again, it has to go into the market and trade that cash for S&P 500 shares, in exact proportion to the amount in existence.  Until it does that, it will technically be in an active stance.

By coincidence, when it goes into the market, it may end up making the purchases from other S&P 500 index funds that are experiencing equivalent net cash outflows.  If that happens, then the transactions will be a wash for the passive segment of the market, and can be ignored.

If, however, there aren’t matching S&P 500 index fund outflows for it to pair up with and offset, then it will have to transact with the active segment of the market.  It will have to go into the market and buy shares from the active segment, becoming the active segment’s counterparty.  Whenever the two segments of the market–active and passive–become counterparties to each other, the previous logic ceases to apply.  It becomes possible for the active segment of the market to outperform the passive segment, in what might seem like a contradiction of The Law of Conservation of Alpha.

Importantly, The Law of Conservation of Alpha is only meant to apply once a passive portfolio has already been established.  If a passive portfolio is in place, and stays in place continuously for a period (because the passive investor refrains from trading), then the performance of the passive portfolio during that period will match the aggregate performance of the market’s active segment during that period.  The performances need not match in periods before the passive portfolio is established, or in periods after the passive portfolio has been lost through cash inflows and outflows.

So there you have the exception to the rule.  The performances of the active and passive segments of the market can deviate during those periods in which the passive segment is not truly passive.  In practice, however, such deviations will be small, because index funds act reasonably quickly to preserve their passive stance in response to net inflows and outflows–too quickly for the index to move away from them in either direction.  What they primarily lose to the active segment is the bid-ask spread.  Each time they buy at the ask, and sell at the bid, they give half of the associated spread to the active segment of the market, which includes the market makers that are taking the other sides of their orders. The loss, however, tends to be minuscule, or at least easily offset by the other profit-generating activities that they engage in (e.g., securities lending), as evidenced by the fact that, despite having to deal with large flows, index funds have done a fantastic job of tracking their “pure” indexes.  To offer an example, $VFIAX, the world’s largest passive S&P 500 index mutual fund, has tracked the S&P 500 almost perfectly over the years, which is why you only see one line in the chart below, when in fact there are two.


Now, to be clear, large net inflows and outflows into index funds can certainly perturb the prices of the securities held in those funds.  And so if, tomorrow, everyone decided to buy $VFIAX, the prices of S&P 500 companies would surely get bid up by the move.  But the situation would be no different if investors decided to make the same purchase actively, buying actively-managed large cap U.S. equity funds instead of $VFIAX.  The perturbation itself would have nothing to do with the passive structure of $VFIAX, and everything to do with the fact that people are attempting to move funds en masse into a single asset class, U.S. equities.  Relative to the universe of all securities, that’s a decidedly “not passive” thing to do.

The key difference between using $VFIAX to buy large cap U.S. equities, and using actively managed funds for the same purpose, lies in the effect on the relative pricing of S&P 500 companies.  The use of $VFIAX does not appreciably affect that pricing, because an equal relative bid is placed on everything in that index.  The use of actively managed funds, in contrast, does affect the relative pricing, because the relative bid on each company in the index will be determined by which names in the index the active managers who receive the money happen to prefer.

Intuitively, people tend to get hung up on the idea that index funds could somehow be counterparties to active managers, and yet continue to track the aggregate performances (before fees) of those same active managers.  The hangup is best illustrated with an example.  Suppose that I have perfect market clairvoyance, and know, with certainty, that $FB is going to double over the next year. Obviously, I’m going to go into the market and buy as much $FB as I can.  Suppose that in the actual transaction that ensues, I end up buying $FB from an S&P 500 index fund–say, $VFIAX–that happens to be experiencing a redemption request, i.e., a cash outflow.


When I buy $FB from $VFIAX, and $FB goes on to double, I’m obviously going to end up outperforming the S&P 500.  Who, then, is going to underperform the S&P 500? Secondary market transactions are zero-sum games for the participants, so if I’m going to outperform, someone will have to underperform.  The temptation is to say that $VFIAX is going to underperform, since it’s the one that’s foolishly selling the undervalued $FB shares to me. But, to the extent that $VFIAX is maintaining a passive stance, it’s going to handle the cash outflow by also selling all other stocks in the S&P 500, in exact relative proportion to the number of shares that exist.  For every 1 share of $FB that it sells to meet the redemption, it’s also going to sell 2.36 shares of $AAPL, 0.414 shares of $AXP, 0.264 shares of $MMM, and so on, all in order to preserve its passive allocation relative to the S&P 500.  To the extent that it successfully preserves that allocation, its performance will continue to track the S&P 500’s performance.
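The pro-rata mechanics are simple enough to sketch in code.  In the sketch below, the share ratios mirror the example, but the prices and the fund’s scale are hypothetical, chosen only for illustration:

```python
# Sketch: a passive fund meets a cash redemption by selling every
# position pro rata, so its portfolio weights--and hence its tracking
# of the index--are unchanged.  Prices and fund size are hypothetical.

index_shares = {"FB": 1.0, "AAPL": 2.36, "AXP": 0.414, "MMM": 0.264}
prices = {"FB": 100.0, "AAPL": 120.0, "AXP": 75.0, "MMM": 180.0}

# The fund holds a fixed multiple of the index's relative share counts.
fund = {t: n * 1_000_000 for t, n in index_shares.items()}

def weights(holdings):
    total = sum(holdings[t] * prices[t] for t in holdings)
    return {t: holdings[t] * prices[t] / total for t in holdings}

before = weights(fund)

# Meet a 5% redemption: sell 5% of every position, whatever it is.
fund = {t: n * 0.95 for t, n in fund.items()}

after = weights(fund)
assert all(abs(before[t] - after[t]) < 1e-12 for t in fund)
```

The assertion at the end is the whole point: no matter what prices do, selling the same fraction of every position leaves the relative weights exactly where they were.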

Who, then, will underperform the S&P 500, to offset my outperformance?  The answer: the rest of the active market.  When $VFIAX sells the 500 stocks in the S&P 500, and I preferentially buy all the $FB shares that it’s selling, the rest of the active market will end up not buying those shares.  It will be forced to buy all of the other names in the S&P 500 that $VFIAX is selling, with $FB left out.  The rest of the market will therefore end up underweighting $FB relative to the S&P 500, offsetting my overweight.  When $FB goes on to double, the rest of the market will therefore underperform relative to the S&P 500, offsetting my outperformance.
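The offsetting can be verified with simple accounting.  In the hypothetical sketch below, the market consists of $FB plus a composite “OTHER” position standing in for the rest of the index, and every share is held by one of three parties; the capital-weighted average of their returns must therefore equal the market’s return:

```python
# Sketch: zero-sum accounting for the example above.  All numbers are
# hypothetical.  Every share outstanding is held by me, the passive
# fund, or the rest of the active market, so the capital-weighted
# average of our returns must equal the market's return.

prices_now = {"FB": 100.0, "OTHER": 100.0}
prices_later = {"FB": 200.0, "OTHER": 100.0}   # FB doubles
shares_out = {"FB": 100, "OTHER": 900}         # shares outstanding

holders = {
    "me":      {"FB": 20, "OTHER": 0},     # overweight FB
    "passive": {"FB": 8,  "OTHER": 72},    # holds 8% of every float
    "rest":    {"FB": 72, "OTHER": 828},   # underweight FB
}

def ret(holdings):
    start = sum(holdings[t] * prices_now[t] for t in holdings)
    end = sum(holdings[t] * prices_later[t] for t in holdings)
    return end / start - 1

market = ret(shares_out)                               # the index return
assert abs(ret(holders["passive"]) - market) < 1e-12   # passive tracks it
assert ret(holders["me"]) > market                     # I outperform...
assert ret(holders["rest"]) < market                   # ...the rest lags

# Capital-weighted average of all holders equals the market return.
start_val = {h: sum(holders[h][t] * prices_now[t] for t in holders[h])
             for h in holders}
avg = (sum(start_val[h] * ret(holders[h]) for h in holders)
       / sum(start_val.values()))
assert abs(avg - market) < 1e-12
```

The passive fund, holding the same fraction of every float, earns exactly the market return; my overweight and the rest of the market’s forced underweight cancel each other out in the aggregate.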


The example demonstrates that in a liquid market, passive index funds can easily do what they need to do, making the transactions that they need to make in order to maintain their passivity, without perturbing relative prices.  If I try to overweight $FB and underweight everything else, then, in a normal market environment, the price of $FB should get bid up relative to everything else.  That’s exactly what’s going to happen in the above example, because I will be removing the selling pressure that the $VFIAX outflow is putting on $FB, without removing the selling pressure that the $VFIAX outflow is putting on everything else.  The $VFIAX outflow is entirely transparent to the market’s relative pricing mechanism, leaving prices just as they would have been in its absence.

Now, to be clear, in an illiquid market, with large bid-ask spreads, all bets are off.  As we noted earlier, when a passive index fund receives inflows and outflows, and has to transact with the active segment to maintain its passive stance, it loses the bid-ask spread.  If that spread is large, then there’s a viable pathway for active managers–specifically market-makers–to outperform at the passive fund’s expense.

The Impact of New Share Issuance and Share Destruction through Buybacks

A separate source of performance divergence emerges in the context of new share issuance.  Suppose that a fund is employing a passive index strategy relative to the total universe of publicly traded U.S. equities–all sizes: large, medium, small, and micro.  A new company then conducts an initial public offering (IPO).  That company will immediately be part of the universe of publicly-traded equities.  To maintain its passive allocation relative to that universe, the index fund in question will need to obtain a proportionate number of shares of the company.  There may be a delay–potentially a significant one–between the time that the IPO occurs and the time that the fund actually obtains the shares.  That delay will create a window for the fund’s performance to deviate from the performance of the market’s active segment.

Whether this type of divergence will result in outperformance or underperformance for index funds will depend on how newly issued securities tend to perform during the periods between their issuances and their entries into the index funds.  If, as a group, newly issued securities tend to perform better than the rest of the market during those periods, then index funds will underperform the market accordingly.  If, as a group, they tend to perform worse than the rest of the market during those periods, then passive funds will outperform the market accordingly.

Most passive funds have lower limits on the market capitalizations of the securities they hold–they only take in new companies when those companies reach certain sizes and levels of liquidity.  The temptation, therefore, is to conclude that such funds will tend to underperform the active segment of the market, given that they only buy newly issued securities after those securities have appreciated substantially in price, with the appreciation having accrued entirely to the gain of the active segment of the market that holds them in their infancy.  This conclusion, however, ignores all of the newly issued securities that perform poorly after issuance.  Passive funds are able to avoid the losses that those securities would otherwise inflict.

In terms of the impact on index performance relative to the market, what matters is not how newly issued securities that successfully make it into indices perform relative to the market, but how all newly issued securities perform relative to the market, including those that never make it into indices.  In theory, we should expect newly issued securities to be priced so that their average performances end up roughly matching the average performance of the rest of the market, otherwise capitalist arbitrage would drive the market to issue a greater or lesser number of those securities, forcing a price adjustment.

Now, share buybacks create a similar challenge, but in reverse.  If a company buys back its shares and retires them, then, to stay passively-allocated, an index fund that holds the shares is going to have to adjust its allocation accordingly–it’s going to have to sell down the position, and proportionately reallocate the proceeds into the rest of the index.  The period prior to that adjustment will represent a window in which the fund is officially no longer passively-allocated, and in which its performance will have the potential to diverge from the performance of the market’s active segment.  Whether the divergence will amount to outperformance or underperformance will depend on how companies perform relative to the rest of the market shortly after share buybacks occur.

To summarize, net flows, new share issuance, and share destruction through buybacks can, in theory, create divergences between the performances of the passive and active segments of the market.  But there’s little reason to expect the divergences to have a sizeable impact, whether favorable or unfavorable.  In any concrete application of the Law of Conservation of Alpha, they can typically be ignored.

Flawed Comparisons: Apples to Apples, Oranges, and Grapes

The theory here is well-understood, but when we compare the average performance of active funds over the last 10 years to the average performance of index funds over the same period, we find that active funds have somehow managed to underperform their passive counterparts by more than their cost differentials.  Given the Law of Conservation of Alpha, how is that possible?

The answer, again, relates to inconsistencies in how asset universes get defined in common parlance, particularly among U.S. investors.  When investors in the United States talk about the stock market, they almost always mean the S&P 500.  But active equity funds routinely hold securities that are not in that index–small caps, foreign companies, and, most importantly, cash to meet redemptions.  In an environment such as the current one, in which the S&P 500 has dramatically outperformed those other asset classes, active funds, which have exposure to them, will inevitably underperform in comparison.  But this result says nothing about active vs. passive, because the comparison itself is invalid.

In a brilliant essay from last year, Neil Constable and Matt Kadnar of GMO proved the point empirically.  They showed that the percentage of active large cap managers who outperform the S&P 500 is almost perfectly correlated with the relative outperformance of foreign equities, small-caps, and cash–the three main ex-S&P 500 asset classes that active large cap managers tend to own.  The following chart, borrowed from their essay, illustrates:


If we clean up the discussion, and examine exposures on a component basis, we will find that the average active S&P 500 component of a portfolio–i.e., the average grouping of strictly S&P 500 companies inside a portfolio–performs the same, before fees, as the average passive S&P 500 index fund.  This will always be true, no matter what the S&P 500 or any other index happens to be doing.

But Who Will Set Prices?

Active investors set prices in a market.  Passive investors can’t set prices, because they don’t trade securities on a relative basis.  They hold all securities equally, in relative proportion to the total quantity in existence.  They only make relative trades in those rare instances in which market activity forces them to–e.g., in response to new share issuance or share destruction through buybacks.

An efficient market needs active investors to be the ones setting prices, because active investors are the ones doing the fundamental work necessary to know what the securities themselves are actually worth.  For this reason, we tend to be uncomfortable with the idea that a market could function properly if the majority of its investors decided to go passive. We worry that the active investors in such a market wouldn’t be able to control enough money and flow to effectively do their jobs.

Setting price in a market, however, isn’t about controlling a certain amount of money or flow.  It’s simply about placing an order.  If all other players in a market have opted to be passive and not place orders, then “price” for the entire market can effectively be set by the single individual investor who does place an order, no matter how small she may be in size.

Why Passive Investing Increases the Market’s Efficiency

The view that passive investing undermines market efficiency is intuitively appealing and therefore widespread.  To see what’s wrong with the view, we need only ask ourselves the following question: is it any easier for a well-trained active stock-picker to beat the market today than it was in the early 1980s?  By “well-trained”, I mean “well-trained” relative to the state of industry knowledge at the time.  The answer, obviously, is no.  It’s just as hard to beat the market today as it was back then–in fact, judging by the results alone, we might think that it’s actually harder.  Yet the share of the market that is allocated passively has increased dramatically in the intervening period, a fact that is entirely inconsistent with the claim that passive indexing undermines market efficiency.

As it turns out, a much stronger argument can be made in favor of the view that passive investing increases market efficiency.  Before I present the argument, let me first say a few things about the concept of “market efficiency.”

We define an efficient market as follows:

Efficient Market: For a given group of securities, a market is efficient with respect to that group if all securities in the group are priced to offer the same expected risk-adjusted future returns.  In a broader sense, a market is unqualifiedly efficient if all of the securities traded within it are priced to offer the same expected risk-adjusted future returns.

To use an example, the group of 500 securities that comprise the S&P 500 represents an “efficient market” to the extent that every security in that group–$AAPL, $FB, $MMM, $XOM, and so on–is priced to offer the same expected risk-adjusted future return as every other security in the group.  More broadly, the collection of assets that comprise the overall asset universe–all stocks, all bonds, all cash, all real estate, everywhere–represents an “efficient market” to the extent that every asset in existence is priced to offer the same expected risk-adjusted future return as every other asset in existence–admittedly, a very high hurdle.

Empirically, the test for an efficient market is straightforward: can an investor consistently outperform the average of a market by selectively picking individual securities in the market to preferentially own?  If the answer is yes, then, necessarily, the individual securities in the market are not priced to offer the same expected risk-adjusted future returns, and therefore the market is not efficient.  It’s important to emphasize the word “consistent” here, as returns have a random component.  An investor in an efficient market can always make a lucky pick.  What an investor in an efficient market cannot do is reliably make such a pick over and over again.

The way that a market becomes efficient is through arbitrage.  If, for example, a given individual security in a market is offering a more attractive risk-adjusted future return than all of the other securities, and if investors know this, then they will try to buy that security, selling the others as necessary to raise funds.  But not everyone can own the security in question, and therefore its price in the market will rise.  As its price rises, its expected risk-adjusted future return will fall, bringing that return into congruence with the rest of the market.

We can think of this process as happening in two specific stages: source mistake and corrective arbitrage.

  • Source Mistake: The source mistake stage occurs when investors make trades that are fundamentally incorrect, e.g., when investors willingly buy securities at prices substantially above reasonable fair value, or willingly sell securities at prices substantially below reasonable fair value.  An example: investors might grow psychologically anchored to the historical price level of a given stock, rigidly assessing the attractiveness of the stock’s current price relative to that level, without considering fundamental changes that might have recently taken place, such as important news that might have recently been released.
  • Corrective Arbitrage: The corrective arbitrage stage occurs when other investors exploit the consequences of source mistakes for profit.  To use the above example: a separate group of savvy investors might come to realize that investors in general are failing to adequately incorporate recent news into their bids and offers.  Those investors might then implement strategies to take advantage of the source mistake–e.g., they might put in place systematic plans to quickly buy up companies after they release apparent good news, selling them as their prices rise, and quickly short companies after they release apparent bad news, covering them as their prices fall.

The ability of a market to attain efficiency depends on (1) the quantity and severity of source mistakes that its investors are committing, and (2) the amount and quality of corrective arbitrage that its investors are practicing in response to those mistakes.  If source mistakes are prevalent and severe, and few people are practicing corrective arbitrage in response to them, then the market will be less efficient, easier for a given active investor to beat.  If, however, source mistakes are rare and insignificant, and a large portion of the market is already attempting to arbitrage them, then the market will be more efficient, harder for a given active investor to beat.

Both stages are influenced by the skill levels of participants in the market.  If you raise the skill level of the participants, you will reduce the quantity and severity of source mistakes that occur, and you will increase the amount and quality of the corrective arbitrage, making the market more efficient and harder to beat.  If you lower the skill level of the participants, you will do the opposite, making the market less efficient and easier to beat.

With respect to the potential impact of passive investing on market efficiency, the question that matters is this: does passive investing affect the “average” skill level of the active segment of the market?  In my view, the answer is yes.  It increases that skill level, and therefore makes the market more efficient.

How does passive investing increase the “average” skill level of the active segment of the market?  The answer: by removing lower-skilled participants–either by giving those participants a better option, i.e., the option of indexing, or by nudging the wealth they’re ineptly managing into that option, putting them out of business.

In a world without indexing, uninformed investors that want exposure to a given market–e.g., retail investors that want exposure to U.S. equities, or hedge fund managers that want exposure to Brazilian equities–have to go into the market themselves and make “picks.” Either that, or they have to “pick” managers to make “picks” for them. Their “picks”–whether of individual stocks or of individual managers–are likely to be uninformed and unskilled relative to the picks of those that have experience in the market and that are doing the fundamental work necessary to know where the value is.

The option of investing in a low-cost index is impactful in that it allows investors to gain their desired exposures without having to make what would otherwise be unskilled, uninformed picks.  It allows them to own the market that they want to own, without forcing them to introduce their lack of skill into the market’s active segment.  In this way, it increases the average skill level of the market’s active segment, making the market more efficient, more difficult to beat.

A similar point applies to the Darwinian pressure that indexing places on active fund management businesses.  The growth in indexing has to come from somewhere. Where is it most likely to come from?  The answer: from actively managed funds that are consistently underperforming.  If you believe in skill, then you will probably agree that those funds are more likely to lack skill than funds that are consistently outperforming. In removing them, pressuring them out of business, indexing inadvertently increases the average skill level of the active funds that remain, again making the market more difficult to beat.

It’s important to remember, here, that secondary market trading and investing is a zero-sum game for the participants.  For a given market participant to outperform, some other market participant has to underperform.  Obviously, for a market participant with a given level of skill, the ease with which that participant will outperform will be a function of the quantity of unskilled participants that are there for the skilled participant to exploit.  To the extent that the prevalence of indexing preferentially reduces that quantity, it makes outperformance more difficult.

To summarize the point, indexing preferentially removes inexperienced, uninformed investors from the market, giving them a superior option–superior for them, because if they were to get into the fray, they would likely underperform.  It also preferentially undermines the businesses of low-skill managers that fail to produce adequate results and that lose business to index competition.  In this way, indexing concentrates the active share of the market into a select group of highly-skilled managers.  As an active market participant, you are no longer able to do battle with low-skill participants, and are instead forced to do battle with the most skilled players in the market.  Think: the Bridgewaters, Third Points, Appaloosas, Bauposts, and GMOs of the world, just as examples.  If you think it will be easier to outperform in a market where these entities are your primary counterparties, as opposed to a market where their presence has been diluted by a flood of run-of-the-mill managers and retail investors that don’t know what they’re doing, you’re crazy.

Now, to be clear, the increased market efficiency that comes with the growth of indexing can only manifest itself in those specific areas of the market where indexing actually becomes popular–e.g., in the U.S. large cap equity space, where a staggering 34% of all shares are being held in index funds.  It’s not, however, going to manifest itself in broader areas where passive approaches remain unpopular.  Investors are quite willing to gain U.S. equity exposure passively through index funds such as $SPY and $VFIAX, but they don’t seem to be willing to extend the same passive discipline to the overall universe of assets. Doing so would require them to hold stocks, bonds, and cash from around the globe in proportion to the amounts outstanding.  Who wants to do that?  No one, and no one is going to want to do it in the future.

Going forward, then, we should expect lower-skilled players to remain active in the broader global asset allocation game–including through the controversial practice of market timing–creating opportunities for skilled investors to outperform at their expense. If there’s a place to stay aggressively active, in my view, it’s there–in global macro–definitely not in a space like large cap U.S. equities, where the passive segment is growing literally by the day.

The Grossman-Stiglitz Paradox and the Marketplace of Ideas Fallacy

In a famous paper published in 1980, Sanford Grossman and Joseph Stiglitz introduced what is called the Grossman-Stiglitz paradox.  This paradox holds that market prices cannot ever be perfectly efficient, for if they were, then investors would lack a financial incentive to do the work necessary to make prices efficient.

I want to clarify how the point that I’m making fits into this paradox.  To be clear, I’m not arguing that an entire market could ever go passive.  If an entire market were to go passive, there would be no transactions, and therefore no prices, and therefore no market. To have a market, someone has to actively transact–i.e., put out a bid and an offer.  That someone is going to have to at least believe that there is excess profit to be had in doing so.

What passive investing does is shrink the portion of the market that actively transacts, and therefore the portion of the market that sets prices.  But it doesn’t eliminate that portion, nor does it eliminate the profit motive that drives that portion.  It simply improves the functioning of that portion, removing the unnecessary contributions of low-skilled participants from the fray.

I anticipate that a number of economists will want to resist this claim.  They will want to argue that adding lots of participants to the messy milieu of stock-picking can increase market efficiency, even if the majority of the participants being added lack skill.  The problem with this view, however, is that it conceptualizes the financial market as a “marketplace of ideas”–a system where, in evolutionary fashion, different ideas are continually presented, with good ideas filtering their way to the top, and bad ideas filtering out.  In such a system, the introduction of diversity of any kind–even if it involves a large number of bad ideas and only a small number of good ones–has a positive effect on the final product, because the good ideas get latched onto, and the bad ideas disappear without harm.

Unfortunately, that’s not how “prices” are formed in a market.  Prices are formed through the individual transactional decisions of individual people.  There is no sense in which the good decisions in this lot rise to the top, nor is there any sense in which the bad decisions filter out.  All of the decisions impact the price, at all times.  If the average skill that underlies the decisions goes down, then so too will the quality of the ultimate product–the price.

Why Passive Indexing Makes Economies More Efficient

Imagine a world in which 500,000 individuals opt for careers in the field of active investment management.  In those careers, they continually compete with each other for excess profit, setting prices for the overall market in the process.  Some of the participants turn out to have skill, some turn out not to have skill–but each extracts a fee, compensation for the labor expended in the effort.  The benefit that the economy gets from the arrangement is a liquid market with accurate prices.

Eventually, indexing comes around to disrupt the industry. Of the 500,000 individuals that were previously managing funds, 499,500 go out of business, with their customers choosing the passive option instead.  The remaining 500–which are the absolute cream of the crop–continue to compete with each other for profit, setting prices for the overall market.

Ask yourself, what real advantage does the first arrangement have over the second? Why does an economy need to have 500,000 people arbitraging the price of $AAPL, $GOOG, $FB, and so on, when 500 of the market’s most highly-skilled active investors could do the job just as well, producing a price signal that is just as accurate, without requiring the help of the rest of the pack?  There is no advantage, only the disadvantage of the lost resources.

Free markets tend to naturally correct economic inefficiencies.  To have a very large number of highly-talented people duplicating each other’s arbitrage efforts is an inefficiency, particularly in secondary markets, where the actual physical capital that underlies the securities has already been formed and will not be affected by the market price.  Passive management represents its inevitable solution.

Conclusion: Can It Make Sense to Invest in an Active Manager?

We conclude the piece with the question: can it make sense to invest with an active manager?  If by “active manager” we’re talking about a financial advisor, then the answer is obviously yes.  A financial advisor can help otherwise financially uninformed clients avoid mistakes that have the potential to be far more costly than the fees charged, particularly if the fees charged are reasonable.  They can also improve clients’ state of mind, by allowing clients to offload stressful decisions onto a trusted expert.  But I’m directing the question more towards active fund managers–active stock-pickers that are trying to deliver better exposures to the market than an index fund can offer.  Can it make sense to invest with those managers?  In my view, the answer is also yes.

An analogy is helpful here.  Like secondary-market investing, law is a zero-sum game.  If you aggregate the individual performances of all civil attorneys, excluding cases in which multiple attorneys work together, the average win percentage of any given attorney will be 50%–in other words, average.  Does it follow that if you could eliminate fees by using an “average” attorney, you should do so?  No, not at all.  The legal universe as a whole cannot do better than the average attorney, and so if the universe of litigants as a whole can avoid fees in the conduct of its zero-sum business, then it should.  But you, yourself, considered as an individual litigant seeking a claim in your favor, are not the universe of litigants.  It’s perfectly conceivable that you–considered strictly as an individual case–might have the ability to find an attorney that is better than average.  If you have that ability, then settling for the average would be foolish, especially if you have a lot at stake.

With respect to a legal case, what you need to ask yourself is this: do you believe that you have the insight into attorney skill to pick an attorney that is better than the average attorney–and not only better, but better by an amount sufficient to justify the fees that you will have to pay for the service?  Maybe, maybe not.  It depends on how good you are at identifying legal skill, and on how much your preferred attorney wants to charge you. Depending on the price, betting on that attorney’s skill may be worth it.

The same is true in the arena of investing.  Do you believe that you have the insight necessary to identify an active manager whose skill exceeds that of the aggregate market? More importantly, does your manager’s excess skill level, and the excess returns that will come from it, justify the fees that she’s going to charge?  If the fees are north of 200 bps, then, in my view, with the exception of a select group of highly talented managers, the answer is no.  Modern markets are reasonably efficient, and difficult to consistently beat. There are investors with the skill to beat them, but only the select few can do it consistently, and even fewer by such a large amount.

At the same time, many managers charge much lower fees–some as low as 50 bps, which is only 35 bps more than the typical passive fund.  That differential–35 bps–is almost too small to worry about, even when stretched out over long periods of time.  In my view, there are a number of active managers and active strategies that can justify it.
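A rough compounding exercise shows why the 35 bps gap is small but not nothing, while 200 bps is a different animal.  The 6% gross return, 15 bps passive fee, and 30-year horizon below are hypothetical assumptions, used only for illustration:

```python
# Hypothetical illustration: terminal wealth of $10,000 over 30 years
# at an assumed 6% gross annual return, under different annual fee drags.

def terminal_wealth(fee, gross=0.06, years=30, principal=10_000.0):
    return principal * (1 + gross - fee) ** years

passive = terminal_wealth(0.0015)        # 15 bps index fund
cheap_active = terminal_wealth(0.0050)   # 50 bps active fund
pricey_active = terminal_wealth(0.0200)  # 200 bps active fund

# The 35 bps gap costs roughly 9% of terminal wealth over 30 years;
# the 185 bps gap costs roughly 40%.
print(round(cheap_active / passive, 3), round(pricey_active / passive, 3))
# → 0.905 0.589
```

Under these assumptions, a 50 bps manager only needs to add a handful of basis points of annual skill to earn the differential back; a 200 bps manager needs to add skill on a scale that very few managers have ever sustained.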

Posted in Uncategorized | Comments Off on Index Investing Makes Markets and Economies More Efficient

In Search of the Perfect Recession Indicator

The downturn in the energy sector and persistent economic weakness abroad has caused the investment community to become increasingly focused on the possibility of a U.S. recession.  In this piece, I’m going to examine a historically powerful indicator that would seem to rule out that possibility, at least for now.

The following chart (source: FRED) shows the seasonally-adjusted U.S. civilian unemployment rate (UE) from January 1948 to January 2016:


As the chart illustrates, the unemployment rate is a lagging indicator of recession.  By the time high unemployment takes hold in an economy, a recession has usually already begun.

In contrast with the absolute level, the trend in the unemployment rate–the direction that the rate is moving in–is a coincident indicator of recession, and can sometimes even be a leading indicator.  As the table below shows, in each of the eleven recessions that occurred since 1948, the trend in the unemployment rate turned higher months before the recession began.  The average lead for the period was 3.45 months.


Admittedly, the phrase “turning higher” is ambiguous.  We need to be more precise, and so we’re going to define the phrase in terms of trailing moving averages.  That is, we’re going to say that the unemployment rate trend has turned higher whenever its current value crosses above the moving average of its trailing values over some period, and that the unemployment rate trend has turned lower whenever its current value falls below the average of its trailing values over some period.
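That definition translates directly into code.  In the sketch below, the unemployment series is made up for illustration–a steady decline followed by a sharp upward turn–whereas the real exercise would use the monthly FRED series:

```python
# Sketch: flag a bearish turn whenever the unemployment rate crosses
# above its trailing 12-month moving average, and a bullish turn
# whenever it crosses below.  The sample series is hypothetical.

def trend_signals(rates, window=12):
    """Return a list of (month_index, 'bearish'|'bullish') crossovers."""
    signals = []
    above = None  # which side of the moving average we're on, once known
    for i in range(window, len(rates)):
        ma = sum(rates[i - window:i]) / window  # trailing average, excl. current
        now_above = rates[i] > ma
        if above is not None and now_above != above:
            signals.append((i, "bearish" if now_above else "bullish"))
        above = now_above
    return signals

# Made-up series: unemployment falls for two years, then turns up sharply.
ue = [5.0 - 0.05 * i for i in range(24)] + [4.0 + 0.2 * i for i in range(12)]
print(trend_signals(ue))
# → [(25, 'bearish')]
```

Note that the signal fires one month after the upward turn begins: the rate has to climb back above its own trailing average before the crossover registers, which is the price the indicator pays for filtering out month-to-month noise.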

In the following chart, we plot the unemployment rate alongside its trailing 12 month moving average from January 1948 to January 2016.  The red and green circles delineate important crossover points, with red crossovers delineating upward (bearish) turns, and green crossovers delineating downward (bullish) turns:


As you can see, historically, whenever the unemployment rate has crossed above the moving average, a recession has almost always followed shortly thereafter.  Similarly, for every recession that actually did occur in the period, the unemployment rate successfully foreshadowed the recession in advance by crossing above its moving average.

The following chart takes the indicator back farther, from April 1929 to April 1947:


In contrast with the earlier chart, the indicator here appears to be a bit late.  After capturing the onset of the Great Depression almost perfectly, the indicator misses the onset of the 1937 and 1945 recessions by a few months.  It’s not alone in that respect–the 1937 and 1945 recessions were missed by pretty much every other recession indicator on the books.

The Fed is well aware of the recession forecasting power of the trend in the unemployment rate.  New York Fed president William Dudley discussed the matter explicitly in a speech just last month:

“Looking at the post-war period, whenever the unemployment rate has increased by more than 0.3 to 0.4 percentage points, the economy has always ended up in a full-blown recession with the unemployment rate rising by at least 1.9 percentage points. This is an outcome to avoid, especially given that in an economic downturn the last to be hired are often the first to be fired. The goal is the maximum sustainable level of employment—in other words, the most job opportunities for the most people over the long run.”

As far as the U.S. economy is concerned, the indicator’s current verdict is clear: no recession.  We may enter a recession later this year, or next year, but we’re not in a recession right now.

Individual energy-exposed regions of the country, however, are in recession, and the indicator is successfully flagging that fact.  The following chart shows the unemployment rate for Houston (source: FRED):


Per the indicator, Houston’s economy is solidly in recession.  We know the reason why: the plunge in oil prices.

Dallas is tilting in a similar direction.  But it’s a more diversified economy, with less exposure to oil and gas production, so the tilt isn’t as strong (source: FRED):


If the larger U.S. economy is not in a recession, what is the investing takeaway?  The takeaway is that we should be constructive on risk, with a bias towards being long equities, given the reduced odds of a large market drop.  Granted, recessions aren’t the only drivers of large market drops, but they’re one of the few drivers that give clear signs of their presence before the drops happen, so that investors can get out of the way. Where they can be ruled out, the risk-reward proposition of being long equities improves dramatically.

Now, the rest of this piece will be devoted to a rigorous analysis of the unemployment rate trend as a market timing indicator.  The analysis probably won’t make sense to readers who haven’t yet read the prior piece on “Growth-Trend Timing”, so I would encourage them to stop here and give it a skim.  What I say going forward will then make more sense.

To begin, recall that GTT seeks to improve on the performance of a conventional trend-following market timing strategy by turning off the trend-following component of the strategy (i.e., going 100% long no matter what) during periods where the probability of recession is low.  In this way, GTT avoids substantial whipsaw losses, while incurring only a slightly increased downside risk.

Using the unemployment rate as an input, the specific trading rule for GTT would be:

(1) If the unemployment rate trend is downward, i.e., not indicating an oncoming recession, then go 100% long U.S. equities.

(2) If the unemployment rate trend is upward, indicating an oncoming recession, then defer to the price trend.  If the price trend is upward, then go 100% long U.S. equities.  If the price trend is downward, then go to cash.

To summarize, GTT will be 100% invested in the market unless the unemployment rate trend is upward at the same time that the price trend is downward.  Together, these indicators represent a double confirmation of danger that forces the strategy to take a safe position.
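
The rule can be sketched in a few lines of Python.  This is my own illustrative rendering, not production code; the signals are assumed to be evaluated once per month:

```python
# Minimal sketch of the GTT rule with the unemployment-rate input.
# The strategy goes to cash only on the double confirmation of danger:
# an upward UE trend coinciding with a downward price trend.
def gtt_position(ue_rate, ue_12mma, price_index, price_10mma):
    """Return 'equities' or 'cash' for the coming month."""
    recession_signal = ue_rate > ue_12mma    # UE trend upward
    downtrend = price_index < price_10mma    # price below its 10m MA
    if recession_signal and downtrend:
        return "cash"
    return "equities"

print(gtt_position(5.6, 5.1, 1850.0, 1950.0))  # both signals bad -> "cash"
print(gtt_position(4.7, 5.1, 1850.0, 1950.0))  # UE trend downward -> "equities"
print(gtt_position(5.6, 5.1, 2000.0, 1950.0))  # price trend healthy -> "equities"
```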

The following chart shows the strategy’s performance in U.S. equities from January 1930 to January 2016.  The unemployment rate trend is measured in terms of the position of the unemployment rate relative to its trailing 12 month moving average, where above signifies an upward trend, and below signifies a downward trend.  The price trend is measured in a similar way, based on the position of the market’s total return index relative to the trailing 10 month moving average of that index:


The blue line is the performance of the strategy, GTT.  The green line is the performance of a pure and simple moving average strategy, without GTT’s recession filter.  The dotted red line is the outperformance of GTT over the simple moving average strategy. The yellow line is a rolled portfolio of three month treasury bills. The gray line is buy and hold.  The black line is GTT’s “X/Y” portfolio–i.e., a portfolio with the same net equity and cash exposures as GTT, but achieved through a constant allocation over time, rather than through in-and-out timing moves (see the two prior pieces for a more complete definition).  The purple bars indicate periods where the unemployment rate trend is downward, ruling out recession.  During those periods, the moving average strategy embedded in GTT gets turned off, directing the strategy to take a long position no matter what.

As the chart illustrates, the strategy beats buy and hold (gray) as well as a simple moving average strategy (green) by over 150 basis points per year.  That’s enough to triple returns over the 87-year period, without losing any of the moving average strategy’s downside protection.

In the previous piece, we looked at the following six inputs to GTT:

  • Real Retail Sales Growth (yoy, RRSG)
  • Industrial Production Growth (yoy, IPG)
  • Real S&P 500 EPS Growth (yoy, TREPSG), modeled on a total return basis.
  • Employment Growth (yoy, JOBG)
  • Real Personal Income Growth (yoy, RPIG)
  • Housing Start Growth (yoy, HSG)

We can add a seventh input to the group: the unemployment rate trend (UE vs. 12 MMA). The following table shows GTT’s excess performance over a simple moving average strategy on each of the seven inputs, taken individually:


As the table shows, the unemployment rate trend beats all other inputs. To understand why it performs better, we need to more closely examine what GTT is trying to accomplish.

Recall that the large market downturns that drive the outperformance of trend-following strategies tend to happen in conjunction with recessions.  When a trend-following strategy makes a switch that is not associated with an ongoing or impending recession, it tends to incur whipsaw losses.  (Note: these losses were explained in thorough detail in the prior piece).

What GTT tries to do is use macroeconomic data to distinguish periods where a recession is likely from periods where a recession is unlikely.  In periods where a recession is unlikely, the strategy turns off its trend-following component, taking a long position in the market no matter what the price trend happens to be.  It’s then able to capture the large downturns that make trend-following strategies profitable, without incurring the frequent whipsaw losses that would otherwise detract from returns.

The ideal economic indicator to use in the strategy is one that fully covers the recessionary period, on both sides.  The following chart illustrates using the 2008 recession as an example:


We want the red area, where the recession signal is in and where the trend-following component is turned on, to fully cover the recessionary period, from both ends. If the signal comes in early, before the recession begins, or goes out late, after the recession has ended, the returns will not usually be negatively impacted.  The trend-following component of the strategy will take over during the period, and will ensure that the strategy profitably trades around the ensuing market moves.

What we categorically don’t want, however, is a situation where the red area fails to fully cover the recessionary period–in particular, a situation where the indicator is late to identify the recession.  If that happens, the strategy will not be able to exit the market on the declining trend, and will risk getting caught in the ensuing market downturn.  The following chart illustrates the problem using the 1937 recession as an example:


As you can see, the indicator flags the recession many months after it has already begun. The trend-following component therefore doesn’t get turned on until almost halfway through the recessionary period.  The risk is that during the preceding period–labeled the “danger zone”–the market will end up suffering a large downturn.  The strategy will then be stuck in a long position, unable to respond to the downward trend and avoid the losses.  Unfortunately for the strategy, that’s exactly what happened in the 1937 case.  The market took a deep dive in the early months of the recession, before the indicator was flagging.  The strategy was therefore locked into a long position, and suffered a large drawdown that a simple unfiltered trend-following strategy would have largely avoided.

We can frame the point more precisely in terms of two concepts often employed in the area of medical statistics: sensitivity and specificity.  These concepts are poorly-named and very easy to confuse with each other, so I’m going to carefully define them.

The sensitivity and specificity of an indicator are defined as follows:

  • Sensitivity: the percentage of actual positives that the indicator identifies as positive.
  • Specificity: the percentage of actual negatives that the indicator identifies as negative.

To use an example, suppose that there are 100 recessionary months in a given data set.  In 86 of those months, a recessionary indicator comes back positive, correctly indicating the recession.  The indicator’s sensitivity to recession would then be 86 / 100 = 86%.

Alternatively, suppose that there are 700 non-recessionary months in a given data set.  In 400 of those non-recessionary months, a recessionary indicator comes back negative, correctly indicating no recession. The indicator’s specificity to recession would then be 400 / 700 = 57%.
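
The two worked examples above can be computed explicitly.  A short Python sketch, with the counts taken from the text:

```python
# An indicator's monthly calls, compared against the actual recession record.
def sensitivity(true_positives, actual_positives):
    """Share of recessionary months the indicator correctly flagged."""
    return true_positives / actual_positives

def specificity(true_negatives, actual_negatives):
    """Share of non-recessionary months the indicator correctly cleared."""
    return true_negatives / actual_negatives

print(sensitivity(86, 100))   # 0.86  -> 86%
print(specificity(400, 700))  # 0.5714... -> roughly 57%
```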

More than anything else, what GTT needs is an indicator with a high sensitivity to recession–an indicator that rarely gives false negatives, and that will correctly indicate that a recession is happening whenever a recession is, in fact, happening.

Having a high specificity to recession, in contrast, isn’t as important to the strategy, because the strategy has the second layer of the price trend to protect it from unnecessary switches.  If the indicator sometimes overshoots with false positives, indicating a recession when there is none, the strategy won’t necessarily suffer, because if there’s no recession, then the price trend will likely be healthy.  The healthy price trend will keep the strategy from incorrectly exiting the market on the indicator’s mistake.

Of all the indicators in the group, the unemployment rate trend delivers the strongest performance for GTT because it has the highest recession sensitivity.  If there’s a recession going on, it will almost always tell us–better than any other single recession indicator.  In situations where no recession is happening, it may give false positives, but that’s not a problem, because unless the false positives coincide with a downward trend in the market price–an unlikely coincidence–then the strategy will stay long, avoiding the implied whipsaw.

For comparison, the following tables show the sensitivity and specificity of the different indicators across different time periods:




As the tables confirm, the unemployment rate has a very strong recession sensitivity, much stronger than any other indicator.  That’s why it produces the strongest performance.

Now, we can still get good results from indicators that have weaker sensitivities.  We just have to aggregate them together, treating a positive indication from any of them as a positive indication for the aggregate signal.  That’s what we did in the previous piece.  We put real retail sales growth and industrial production growth together, housing start growth and real personal income growth together, and so on, increasing the sensitivity of the aggregate signal at the expense of its specificity.
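
In code, that aggregation is just a logical OR across the member indicators.  A hypothetical sketch:

```python
# "1 out of N" aggregation: the aggregate flags recession if any member
# indicator does.  This raises sensitivity (fewer missed recessions) at
# the cost of specificity (more false positives).
def aggregate_signal(indicator_flags):
    """indicator_flags: list of booleans, one per member indicator."""
    return any(indicator_flags)

# Hypothetical month: IPG flags recession, RRSG does not.
print(aggregate_signal([True, False]))   # True  -> treat as recessionary
print(aggregate_signal([False, False]))  # False -> no recession signal
```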

Right now, only two of the seven indicators are flagging recession: industrial production growth and total return EPS growth.  We know why those indicators are flagging recession–they’re getting a close-up view of the slowdown in the domestic energy sector and the unrelated slowdown in the larger global economy.  Will they prove to be right?  In my view, no.  Energy production is a small part of the US economy, even when multipliers are considered.  Similarly, the US economy has a relatively low exposure to the global economy, even though a significant portion of the companies in the S&P 500 are levered to it.

Even if we decide to go with industrial production growth (or one of its ISM siblings) as the preferred indicator, recent trends in that indicator are making the recession call look shakier.  In the most recent data point, the indicator’s growth rate has turned up, which is not what we would expect to be seeing right now if the indicator were right and the other indicators were wrong:


Now, the fact that a U.S. recession is unlikely doesn’t mean that the market is any kind of buying opportunity.  Valuation can hold a market back on the upside, and the market’s current valuation is quite unattractive.  At a price of 1917, the S&P 500’s trailing operating P/E ratio is 18.7.  Its trailing GAAP P/E ratio is 21.5.  Those numbers are being achieved on peaking profit margins–leaving two faultlines for the market to crack on, rather than just one.  Using non-cyclical valuation measures, which reflect both of those vulnerabilities, the numbers get worse.

My view is that as time passes, the market will continue to acclimatize to the two issues that it’s been most worried about over the last year: (1) economic weakness and potential instability in China and (2) the credit implications of the energy downturn.  A similar acclimatization happened with the Euro crisis.  It always seems to happen with these types of issues.  The process works like this.  New “problems” emerge, catching investors off-guard.  Many investors come to believe that this is it, the start of the “big” move lower. The market undergoes a series of gyrations as it wrestles with the problems. Eventually, market participants get used to them, accustomed to their presence, like a swimmer might get accustomed to cold water.  The sensitivity, fear and reactivity gradually dissipate. Unless the problems continue to deteriorate, investors gravitate back into the market, even as the problems are left “unsolved.”

Right now, there’s a consensus that an eventual devaluation of the yuan, with its attendant macroeconomic implications, is itself a “really bad thing”, or at least a consequence of a “really bad thing” that, if it should come to pass, will produce a large selloff in U.S. equities.  But there’s nothing privileged or compelling about that consensus, no reason why it should be expected to remain “the” consensus over time.  If we keep worrying about devaluations, and we don’t get them, or we do get them, and nothing bad happens, we will eventually grow less concerned about the prospect, and will get pulled back into the market as it grinds higher without us.  In actuality, that seems to be what’s already happening.

Valuation-conscious investors that are skeptical of the market’s potential to deliver much in the way of long-term returns–and I would include myself in that category–do have other options.  As I discussed in a piece from last September, we can take advantage of elevated levels of volatility and sell puts or covered calls on a broad index such as the S&P 500 or the Russell 2000.  By foregoing an upside that we do not believe to be attractive to begin with, we can significantly pad our losses in a potential downturn, while earning a decent return if the market goes nowhere or up (the more likely scenario, in my view).

To check in on the specific trade that I proposed, on September 6th, 2015, with $SPY at 192.59, the bid on the 165 September 2016 $SPY put was 8.79.  Today, $SPY is at essentially the same price, but the value of the put has decayed substantially.  The ask is now 4.92.  On a mark-to-market basis, an investor that put $1,000,000 into the trade earned roughly $4 per share–$24,000, or roughly 6% annualized, 6% better than the market, which produced nothing.

For the covered call version of the trade, the bid on the 165 September 2016 call was 33.55.  As of Friday, the ask is now 30.36.  On a mark-to-market basis, then, the investor has earned roughly $3.20 in the trade.  The investor also pocketed two $SPY dividends, worth roughly $2.23.   In total, that’s $5.43 per share, or roughly 8% annualized.  If the market continues to churn around 1900, the investor will likely avoid assignment and get to stay in the trade, if not through both of the upcoming dividends, then at least through the one to be paid in March.
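
The covered-call arithmetic can be checked step by step.  A small sketch using the prices quoted above (the variable names are mine):

```python
# Mark-to-market P&L on the covered-call trade, per share.
call_sold_at = 33.55   # bid on the 165 Sep 2016 call, September 6th, 2015
call_now = 30.36       # current ask (the cost to buy the call back)
dividends = 2.23       # two $SPY dividends collected along the way

pnl_per_share = (call_sold_at - call_now) + dividends
print(round(pnl_per_share, 2))  # 5.42, matching the text's ~$5.43
# (the text rounds 33.55 - 30.36 = 3.19 up to $3.20 before adding dividends)
```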

To summarize, right now, the data is telling us that a large, recessionary downturn is unlikely.  So we want to be long.  At the same time, the heightened state of valuations and the increasing age of the current cycle suggest that strong returns from here are unlikely.   In that kind of environment, it’s attractive to sell downside volatility.  Of course, in selling downside volatility, we lose the ability to capitalize on short-term trading opportunities. Instead of selling puts last September, for example, we could have bought the market, sold it at the end of the year at the highs, and then bought it back now, ready to repeat again. But that’s a difficult game to play, and an even more difficult game to win at.  For most of us, a better approach is to identify the levels that we want to own the market at, and get paid to wait for them.

While we’re on the topic of GTT and recession-timing, I want to address a concern that a number of readers have expressed about GTT’s backtests.  That concern pertains to the impact of data revisions.  GTT may work well with the revised macroeconomic data contained in FRED, but real-time investors don’t have access to that data–all they have access to is unrevised data.  But does the strategy work on unrevised data?

Fortunately, it’s possible (though cumbersome) to access unrevised data through FRED. Starting with the unemployment rate, the following chart shows the last-issue revised unemployment rate alongside the first-issue unrevised unemployment rate from March 1961 to present:


As you can see, there’s essentially no difference between the two rates.  They overlap almost perfectly, confirming that the revisions are insignificant.

Generally, in GTT, the impact of revisions is reduced by the fact that the strategy pivots off of trends and year-over-year growth rates, rather than absolute levels and monthly growth rates, where small changes would tend to have a larger effect.

The following chart shows the performance of GTT from March 1961 to present using first-issue unrevised unemployment rate data (orange) and last-issue revised unemployment rate data (blue).  Note that unrevised data prior to March 1961 is not available, which is why I’ve chosen that date as the starting point:


Interestingly, in the given data set, the strategy actually works better on unrevised data. Of course, that’s likely to be a random occurrence driven by luck, as there’s no reason for unrevised data to produce a superior performance.

The following chart shows the performance of GTT using available unrevised and revised data for industrial production growth back to 1928:


In this case, the strategy does better under the revised data, even though both versions outperform the market and a simple moving average strategy.  The difference in performance is worth about 40 basis points annually, which is admittedly significant.

One driver of the difference between the unrevised and revised performance for the industrial production case is the fact that the unrevised data produced a big miss in late 2011, wrongly going negative and indicating recession when the economy was fine.  Recently, a number of bearish commentators have cited the accuracy of the industrial production growth indicator as a reason for caution, pointing out that the indicator has never produced sustained negative year-over-year growth outside of recession.  That may be true for revised data, but it isn’t true for unrevised data, which is all we have to go on right now.  Industrial production growth wrongly called a recession in 2011, only to get revised upwards several months later.

The following chart shows the performance of GTT using available unrevised and revised data for real retail sales growth:


The unrevised version underperforms by roughly 20 basis points annually.

The following chart shows the performance of GTT using available unrevised and revised data for job growth:


For job growth, the two versions perform about the same.

Combining indicators shrinks the impact of inaccuracies, and reduces the difference between the unrevised and revised cases.  The following chart illustrates, combining industrial production and job growth into a single “1 out of 2” indicator:


Unfortunately, unrevised data is unavailable for EPS (the revisions would address changes to SEC 10-Ks and 10-Qs), real personal income growth, and housing start growth.  But the tests should provide enough evidence to allay the concerns.  The first-issue data, though likely to be revised in small ways, captures the gist of what is happening in the economy, and can be trusted in market timing models.

In a future piece, I’m going to examine GTT’s performance in local currency foreign equities.  GTT easily passes out-of-sample testing in credit securities, different sectors and industries, different index constructions (where, for example, the checking days of the month are chosen randomly), and individual securities (which simple unfiltered trend-following strategies do not work in).  However, the results in foreign securities are mixed.

If we use U.S. economic data as a filter to time foreign securities, the performance turns out to be excellent.  But if we use economic data from the foreign countries themselves, then the strategy ends up underperforming a simple unfiltered trend-following strategy.  Among other things, this tells us something that we could probably have already deduced from observation: the health of our economy and our equity markets is more relevant to the performance of foreign equity markets than the health of their own economies.  This is especially true with respect to large downward moves–the well-known global “crises” that drag all markets down in unison, and that make trend-following a historically profitable strategy.

Posted in Uncategorized | Comments Off on In Search of the Perfect Recession Indicator

Growth and Trend: A Simple, Powerful Technique for Timing the Stock Market

Suppose that you had the magical ability to foresee turns in the business cycle before they happened.  As an investor, what would you do with that ability?  Presumably, you would use it to time the stock market.  You would sell equities in advance of recessions, and buy them back in advance of recoveries.

The following chart shows the hypothetical historical performance of an investment strategy that times the market on perfect knowledge of future recession dates.  The strategy, called “Perfect Recession Timing”, switches from equities (the S&P 500) into cash (treasury bills) exactly one month before each recession begins, and from cash back into equities exactly one month before each recession ends (first chart: linear scale; second chart: logarithmic scale):


As you can see, Perfect Recession Timing strongly outperforms the market.  It generates a total return of 12.9% per year, 170 bps higher than the market’s 11.2%.  It experiences annualized volatility of 12.8%, 170 bps less than the market’s 14.5%.  It suffers a maximum drawdown of -27.2%, roughly half of the market’s -51.0%.

In this piece, I’m going to introduce a market timing strategy that will seek to match the performance of Perfect Recession Timing, without relying on knowledge of future recession dates.  That strategy, which I’m going to call “Growth-Trend Timing”, works by adding a growth filter to the well-known trend-following strategies tested in the prior piece.  The chart below shows the performance of Growth-Trend Timing in U.S. equities (blue line) alongside the performance of Perfect Recession Timing (red line):


The dotted blue line is Growth-Trend Timing’s outperformance relative to a strategy that buys and holds the market.  In the places where the line ratchets higher, the strategy is exiting the market and re-entering at lower prices, locking in outperformance.  Notice that the line ratchets higher in almost perfect synchrony with the dotted red line, the outperformance of Perfect Recession Timing.  That’s exactly the intent–for Growth-Trend Timing to successfully do what Perfect Recession Timing does, using information that is fully available to investors in the present moment, as opposed to information that will only be available to them in the future, in hindsight.

The piece will consist of three parts:  

  • In the first part, I’m going to construct a series of random models of security prices. I’m going to use the models to rigorously articulate the geometric concepts that determine the performance of trend-following market timing strategies.  In understanding the concepts in this section, we will understand what trend-following strategies have to do in order to be successful.  We will then be able to devise specific strategies to optimize their performance.
  • In the second part, I’m going to use insights from the first part to explain why trend-following market timing strategies perform well on aggregate indices (e.g., S&P 500, FTSE, Nikkei, etc.), but not on individual stocks (e.g., Disney, BP, Toyota, etc.). Recall that we encountered this puzzling result in the prior piece, and left it unresolved.
  • In the third part, I’m going to use insights gained from both the first and second parts to build the new strategy: Growth-Trend Timing.  I’m then going to do some simple out-of-sample tests on the new strategy, to illustrate the potential.  More rigorous testing will follow in a subsequent piece.

Before I begin, I’m going to make an important clarification on the topic of “momentum.”

Momentum: Two Versions

The market timing strategies that we analyzed in the prior piece (please read it if you haven’t already) are often described as strategies that profit off of the phenomenon of “momentum.”  To avoid confusion, we need to distinguish between two different empirical observations related to that phenomenon:

  • The first is the observation that the trailing annual returns of a security predict its likely returns in the next month.  High trailing annual returns suggest high returns in the next month, low trailing annual returns suggest low returns in the next month. This phenomenon underlies the power of the Fama-French-Asness momentum factor, which sorts the market each month on the basis of prior annual returns.
  • The second is the observation that when a security exhibits a negative price trend, the security is more likely to suffer a substantial drawdown over the coming periods than when it exhibits a positive trend.  Here, a negative trend is defined as a negative trailing return on some horizon (i.e., negative momentum), or a price that’s below a trailing moving average of some specified period.  In less refined terms, the observation holds that large losses–“crashes”–are more likely to occur after an aggregate index’s price trend has already turned downward.

These two observations are related to each other, but they are not identical, and we should not refer to them interchangeably.  Unlike the first observation, the second observation does not claim that the degree of negativity in the trend predicts anything about the future return.  It doesn’t say, for example, that high degrees of negativity in the trend imply high degrees of negativity in the subsequent return, or that high degrees of negativity in the trend increase the probability of subsequent negativity.  It simply notes that negativity in the trend–of any degree–is a red flag that substantially increases the likelihood of a large subsequent downward move.

Though the second observation is analytically sloppier than the first, it’s more useful to a market timer.  The ultimate goal of market timing is to produce equity-like returns with bond-like volatility.  In practice, the only way to do that is to sidestep the large drawdowns that equities periodically produce.  We cannot reliably sidestep drawdowns unless we know when they are likely to occur.  When are they likely to occur?  The second observation gives the answer: after a negative trend has emerged.  So if you see a negative trend, get out.

The backtests conducted in the prior piece demonstrated that strategies that exit risk assets upon signs of a negative trend tend to outperform buy and hold strategies.  Their successful avoidance of large drawdowns more than makes up for the relative losses that they incur by switching into lower-return assets.  What we saw in the backtest, however, is that this result only holds for aggregate indices.  When the strategies are used to time individual securities, the opposite result is observed–the strategies strongly underperform buy and hold, to an extent that far exceeds the level of underperformance that random timing with the same exposures would be expected to produce.

How can an approach work on aggregate indices, and then not work on the individual securities that make up those indices?  That was the puzzle that we left unsolved in the prior piece, a puzzle that we’re going to try to solve in the current piece.  The analysis will be tedious in certain places, but well worth the effort in terms of the quality of market understanding that we’re going to gain as a result.

Simple Market Timing: Stop Loss Strategy

In this section, I’m going to use the example of a stop loss strategy to illustrate the concept of “gap losses”, which are typically the main sources of loss for a market timing strategy.  

To begin, consider the following chart, which shows the price index of a hypothetical security that oscillates as a sine wave.


(Note: The prices in the above index, and all prices in this piece, are quoted and charted on a total return basis, with the accrual of dividends and interest payments already incorporated into the prices.)

The precise equation for the price index, quoted as a function of time t, is:

(1) Index(t) = Base + Amplitude * ( Sin (2 * π / Period * t ) )

The base, which specifies the midpoint of the index’s vertical oscillations, is set to 50.  The amplitude, which specifies how far in each vertical direction the index oscillates, is set to 20.  The period, which specifies how long it takes for the index to complete a full oscillation, is set to 40 days.  Note that the period, 40 days, is also the distance between the peaks.
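
Equation (1) is easy to reproduce.  A short Python sketch, using the parameter values just given:

```python
import math

# Equation (1): the hypothetical sine-wave price index from the text.
BASE = 50.0        # midpoint of the vertical oscillations
AMPLITUDE = 20.0   # swing in each vertical direction
PERIOD = 40.0      # days per full oscillation (also the peak-to-peak distance)

def index(t):
    """Total-return price index at day t."""
    return BASE + AMPLITUDE * math.sin(2 * math.pi / PERIOD * t)

print(round(index(0), 6))   # 50.0 (midpoint)
print(round(index(10), 6))  # 70.0 (peak, a quarter period in)
print(round(index(30), 6))  # 30.0 (trough, three quarters in)
```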

Now, I want to participate in the security’s upside, without exposing myself to its downside.  So I’m going to arbitrarily pick a “stop” price, and trade the security using the following “stop loss” rule:

(1) If the price of the security is greater than or equal to the stop, then buy or stay long.

(2) If the price of the security is less than the stop, then sell or stay out.

Notice that the rule is bidirectional–it forces me out of the security when the security is falling, but it also forces me back into the security when the security is rising.  In doing so, it not only protects me from the security’s downside below the stop, it also ensures that I participate in any upside above the stop that the security achieves.  That’s perfect–exactly what I want as a trader.

To simplify the analysis, we assume that we’re operating in a market that satisfies the following two conditions:

Zero Bid-Ask Spread: The difference between the highest bid and the lowest ask is always infinitesimally small, and therefore negligible.  Trading fees are also negligible.

Continuous Prices:  Every time a security’s price changes from value A to value B, it passes through all values in between A and B.  If traders already have orders in, or if they’re quick enough to place orders, they can execute trades at any of the in-between values. 

The following chart shows the performance of the strategy on the above assumptions.  For simplicity, we trade only one share.


The blue line is the value of the strategy. The orange line is the stop, which I’ve arbitrarily set at a price of 48.5.  The dotted green line is the strategy’s outperformance relative to a strategy that simply buys and holds the security.  The outperformance is measured against the right y-axis.

As you can see, when the price rises above the stop, the strategy buys in at the stop, 48.5. For as long as the price remains above that level, the strategy stays invested in the security, with a value equal to the security’s price.  When the price falls below the stop, the strategy sells out at the stop, 48.5.  For as long as the price remains below that level, the strategy stays out of it, with a value steady at 48.5, the sale price.

Now, you’re probably asking yourself, “what’s the point of this stupid strategy?” Well, let’s suppose that the security eventually breaks out of its range and makes a sustained move higher.  Let’s suppose that it does something like this:


How will the strategy perform?  The answer: as the price rises above the stop, the strategy will go long the security and stay long, capturing all of the security’s subsequent upside. We show that result below:


Now, let’s suppose that the opposite happens.  Instead of breaking out and growing exponentially, the security breaks down and decays to zero, like this:


How will the strategy perform?  The answer: as the price falls below the stop, the strategy will sell out of the security and stay out of it, avoiding all of the security’s downside below the stop.


We can express the strategy’s performance in a simple equation:

(2) Strategy(t) = Max(Security Price(t), Stop)

Equation (2) tells us that the strategy’s value at any time equals the greater of either the security’s price at that time, or the stop.  Since we can place the stop wherever we want, we can use the stop loss strategy to determine, for ourselves, what our downside will be when we invest in the security.  Below the stop, we will lose nothing; above it, we will gain whatever the security gains.

Stepping back, we appear to have discovered something truly remarkable, a timing strategy that can allow us to participate in all of a security’s upside, without having to participate in any of its downside.  Can that be right?  Of course not.  Markets do not offer risk-free rewards, and therefore there must be a tradeoff somewhere that we’re missing, some way that the stop loss strategy exposes us to losses.  It turns out that there’s a significant tradeoff in the strategy, a mechanism through which the strategy can cause us to suffer large losses over time.  We can’t see that mechanism because it’s being obscured by our assumption of “continuous” prices.

Ultimately, there’s no such thing as a “continuous” market, a market where every price change necessarily entails a movement through all in-between prices.  Price changes frequently involve gaps–discontinuous jumps or drops from one price to another.  Those gaps impose losses on the strategy–called “gap losses.”

To give an example, if new information is introduced to suggest that a stock priced at 50 will soon go bankrupt, the bid on the stock is not going to pass through 49.99… 49.98… 49.97 and so on, giving each trader an opportunity to sell at those prices if she wants to. Instead, the bid is going to instantaneously drop to whatever level the stock finds its first interested buyer at, which may be 49.99, or 20.37, or 50 cents, or even zero (total illiquidity).  Importantly, if the price instantaneously drops to a level below the stop, the strategy isn’t going to be able to sell exactly at the stop.  The best it will be able to do is sell at the first price that the security gaps down to.  In the process, it will incur a “gap loss”–a loss equal to the “gap” between that price and the stop.

The worst-case gap losses inflicted on a market timing strategy are influenced, in part, by the period of time between the checks that it makes.  The strategy has to periodically check on the price, to see if the price is above or below the stop.  If the period of time between each check is long, then valid trades will end up taking place later in time, after prices have moved farther away from the stop.  The result will be larger gap losses.

Given the importance of the period between the checks, we might think that a solution to the problem of gap losses would be to have the strategy check prices continuously, at all times. But even on continuous checking, gap losses would still occur.  There are two reasons why. First, there’s a built-in discontinuity between the market’s daily close and subsequent re-opening.  No strategy can escape from that discontinuity, and therefore no strategy can avoid the gap losses that it imposes.  Second, discontinuous moves can occur in intraday trading–for example, when new information is instantaneously introduced into the market, or when large buyers and sellers commence execution of pre-planned trading schemes, spontaneously removing or inserting large bids and asks.

In the example above, the strategy checks the price of the security at the completion of each full day (measured at the close).  The problem, however, is that the stop–48.5–is not a value that the index ever lands on at the completion of a full day.  Recall the specific equation for the index:

(3) Index(t) = 50 + 20 * Sin ( 2 * π / 40 * t)

Per the equation, the closest value above 48.5 that the index lands on at the completion of a full day is 50.0, which it reaches on days 0, 20, 40, 60, 80, 100, 120, and so on.  The closest value below 48.5 that it lands on is 46.9, which it reaches on days 21, 39, 61, 79, 101, 119, and so on.

It follows that whenever the index price rises above the stop of 48.5, the strategy sees the event when the price is already at 50.0.  So it buys into the security at the available price: 50.0.  Whenever the index falls below the stop of 48.5, the strategy sees the event when the price is already at 46.9.  So it sells out of the security at the available price: 46.9.  Every time the price interacts with the stop, then, a buy-high-sell-low routine ensues. The strategy buys at 50.0, sells at 46.9, buys again at 50.0, sells again at 46.9, and so on, losing the difference, roughly 3 points, on each “round-trip”–each combination of a sell followed by a buy.  That difference, the gap loss, represents the primary source of downside for the strategy.
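To verify the arithmetic, here is a rough simulation of the daily-checked stop loss on this index, under the zero-spread assumption (the function and variable names are my own):

```python
import math

def index_price(t):
    # Equation (1): base 50, amplitude 20, period 40 days
    return 50 + 20 * math.sin(2 * math.pi / 40 * t)

def run_stop_loss(days, stop=48.5):
    """Stop loss with daily checking: long while the close is at or above
    the stop, out while it is below.  Trades fill at the observed close,
    so each round-trip loses the gap between the fill and the stop."""
    invested = index_price(0) >= stop
    value = index_price(0)
    buys, sells = [], []
    for t in range(1, days + 1):
        price = index_price(t)
        if invested:
            value += price - index_price(t - 1)   # one share tracks the index
            if price < stop:
                invested = False                  # forced out below the stop
                sells.append(price)
        elif price >= stop:
            invested = True                       # forced back in above the stop
            buys.append(price)                    # value is unchanged at purchase
    return value, buys, sells

value, buys, sells = run_stop_loss(200)
# Buys fill at 50.0 and sells at roughly 46.9: a ~3.1 point gap loss per round-trip
```

Over 200 days the strategy completes five round-trips, buying at 50.0 and selling at roughly 46.9 each time, and finishes well below its starting value of 50: exactly the buy-high-sell-low routine described above.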

Returning to the charts, the following chart illustrates the performance of the stop loss strategy when the false assumption of price continuity is abandoned and when gap losses are appropriately reflected:


The dotted bright green line is the buy price.  The green shaded circles are the actual buys. The dotted red line is the sell price.  The red shaded circles are the actual sells.  As you can see, with each round-trip transaction, the strategy incurs a loss relative to a buy and hold strategy equal to the gap: roughly 3 points, or 6%.

The following chart makes the phenomenon more clear.  We notice the accumulation of gap losses over time by looking at the strategy’s peaks.  In each cycle, the strategy’s peaks are reduced by an amount equal to the gap:


It’s important to clarify that the gap loss is not an absolute loss, but rather a loss relative to what the strategy would have produced in the “continuous” case, under the assumption of continuous prices and continuous checking.  Since the stop loss strategy would have produced a zero return in the “continuous” case–selling at 48.5, buying back at 48.5, selling at 48.5, buying back at 48.5, and so on–the actual return, with gap losses included, ends up being negative.

As the price interacts more frequently with the stop, more transactions occur, and therefore the strategy’s cumulative gap losses increase.  We might therefore think that it would be good for the strategy if the price were to interact with the stop as infrequently as possible.  While there’s a sense in which that’s true, the moments where the price interacts with the stop are the very moments where the strategy fulfills its purpose–to protect us from the security’s downside.  If we didn’t expect the price to ever interact with the stop, or if we expected interactions to occur only very rarely, we wouldn’t have a reason to bother implementing the strategy.

We arrive, then, at the strategy’s fundamental tradeoff.  In exchange for attempts to protect investors from a security’s downside, the strategy takes on a different kind of downside–the downside of gap losses.  When the gains that the strategy generates elsewhere fail to offset those losses, the strategy produces a negative return.  In the extreme, the strategy can whittle an investor’s capital down to almost nothing, just as buying and holding a security might do in a worst case loss scenario.

In situations where the protection from downside proves to be unnecessary–for example, because the downside is small and self-reversing–the strategy will perform poorly relative to buy and hold.  We see that in the following chart:


In exchange for protection from downside below 48.5–downside that proved to be minor and self-reversing–the strategy incurred 12 gap losses.  Those losses reduced the strategy’s total return by more than half and saddled it with a maximum drawdown that ended up exceeding the maximum drawdown of buy and hold.

Sometimes, however, protection from downside can prove to be valuable–specifically, when the downside is large and not self-reversing.  In such situations, the strategy will perform well relative to buy and hold.  We see that in the following chart:


As before, in exchange for protection from downside in the security, the strategy engaged in a substantial number of unnecessary exits and subsequent reentries.  But one of those exits, the final one, proved to have been well worth the cumulative cost of the gap losses, because it protected us from a large downward move that did not subsequently reverse itself, and that instead took the stock to zero.

To correctly account for the impact of gap losses, we can re-write equation (2) as follows:

(4) Strategy(t) = Max(Security Price(t), Stop) – Cumulative Gap Losses(t)

What equation (4) is saying is that the strategy’s value at any time equals the greater of either the security’s price or the stop, minus the cumulative gap losses incurred up to that time.  Those losses can be re-written as the total number of round-trip transactions up to that time multiplied by the average gap loss per round-trip transaction.  The equation then becomes:

(5) Strategy(t) = Max(Security Price(t), Stop) – # of Round-Trip Transactions(t) * Average Gap Loss Per Round-Trip Transaction.

The equation is not exactly correct, but it expresses the concept correctly.  The strategy is exposed to the stock’s upside, it’s protected from the majority of the stock’s downside below the stop, and it pays for that protection by incurring gap losses on each transaction, losses which subtract from the overall return.

Now, the other assumption we made–that the difference between the bid and the ask was infinitesimally small–is also technically incorrect.  There’s a non-zero spread between the bid and the ask, and each time the strategy completes a round-trip market transaction, it incurs that spread as a loss.  Adding the associated cost to the equation, we get a more complete equation for a bidirectional stop loss strategy:

(6) Strategy(t) = Max(Security Price(t), Stop) – # of Round-Trip Transactions(t) * (Average Gap Loss Per Round-Trip Transaction + Average Bid-Ask Spread).

Again, not exactly correct, but very close.

The cumulative cost of traversing the bid-ask spread can be quite significant, particularly when the strategy checks the price frequently (e.g., daily) and engages in a large number of resultant transactions.  But, in general, the cumulative cost is not as impactful as the cumulative cost of gap losses.  And so even if bid-ask spreads could be tightened to a point of irrelevancy, as appears to have happened in the modern era of sophisticated market making, a stop loss strategy that engaged in frequent, unnecessary trades would still perform poorly.

To summarize:

  • A bidirectional stop loss strategy allows an investor to participate in a security’s upside without having to participate in the security’s downside below a certain arbitrarily defined level–the stop.
  • Because market prices are discontinuous, the transactions in a stop loss strategy inflict gap losses, which are losses relative to the return that the strategy would have produced under the assumption of perfectly continuous prices.  Gap losses represent the primary source of downside for a stop loss strategy.
  • Losses associated with traversing the bid-ask spread are also incurred on each round-trip transaction.  In most cases, their impacts on performance are not as pronounced as the impacts of gap losses.

A Trailing Stop Loss Strategy: Otherwise Known As…

In this section, I’m going to introduce the concept of a trailing stop loss strategy.  I’m going to show how the different trend-following market timing strategies that we examined in the prior piece are just different ways of implementing that concept.  

The stop loss strategy that we introduced in the previous section is able to protect us from downside, but it isn’t able to generate sustained profit.  The best that it can hope to do is sell at the same price that it buys in at, circumventing losses, but never actually achieving any durable gains.


The circumvention of losses improves the risk-reward proposition of investing in the security, and is therefore a valid contribution.  But we want more.  We want total return outperformance over the index.

For a stop loss strategy to give us that, the stop cannot stay still, stuck at the same price at all times.  Rather, it needs to be able to move with the price.  If the price is above the stop and rises, the stop needs to be able to rise as well, so that any subsequent sale occurs at higher prices.  If the price is below the stop and falls, the stop needs to be able to fall as well, so that any subsequent purchase occurs at lower prices.  If the stop is able to move in this way, trailing behind the price, the strategy will lock in any profits associated with favorable price movements.  It will sell high and buy back low, converting the security’s oscillations into relative gains on the index.

The easiest way to implement a trailing stop loss strategy is to set the stop each day to a value equal to yesterday’s closing price.  So, if yesterday’s closing price was 50, we set the stop for today–in both directions–to be 50.  If the index is greater than or equal to 50 at the close today, we buy in or stay long.  If the index is less than 50, we sell or stay out.  We do the same tomorrow and every day thereafter, setting the stop equal to whatever the closing price was for the prior day.  The following chart shows what our performance becomes:


Bingo!  The performance ends up being fantastic.  In each cycle, the price falls below the trailing stop near the index peak, around 70, triggering a sell.  The price rises above the trailing stop near the index trough, around 30, triggering a buy.  As the sine wave moves through its oscillations, the strategy sells at 70, buys at 30, sells at 70, buys at 30, over and over again, ratcheting up a 133% gain on each completed cycle.  After 100 days, the value of the strategy ends up growing to almost 7 times the value of the index, which goes nowhere.
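The same simulation approach extends to the trailing stop.  The sketch below (zero spreads assumed, full value reinvested on each buy, names my own) lets the stop trail the prior close:

```python
import math

def index_price(t):
    # Equation (1): base 50, amplitude 20, period 40 days
    return 50 + 20 * math.sin(2 * math.pi / 40 * t)

def run_trailing_stop(days, trail=1):
    """Trailing stop with daily checking: long when today's close is at
    or above the close `trail` days ago, out otherwise.  The strategy's
    full value is reinvested on each buy.  (The sine wave extends
    naturally to t < 0, so the early stops are well-defined.)"""
    value, invested = index_price(0), True      # the index starts on an up leg
    buys, sells = [], []
    for t in range(1, days + 1):
        price, stop = index_price(t), index_price(t - trail)
        if invested:
            value *= price / index_price(t - 1)  # value tracks the index
            if price < stop:
                invested = False                 # sell at today's close
                sells.append(price)
        elif price >= stop:
            invested = True                      # buy at today's close
            buys.append(price)
    return value, buys, sells

value, buys, sells = run_trailing_stop(100)
# Sells fill just past each peak (~69.75); buys just past each trough (~30.25)
```

On a 100 day run, the strategy sells just past each peak and buys back just past each trough, multiplying its value several times over while the index itself goes nowhere.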

In the following chart, we increase the trailing period to 7 days, setting the stop each day to the security’s price seven days ago:


The performance ends up being good, but not as good.  The stop lags the index by a greater amount, and therefore the index ends up falling by a greater amount on its down leg before moving down through the stop and triggering a sell.  Similarly, the index ends up rising by a greater amount on its up leg before moving up through the stop and triggering a buy.  The strategy isn’t able to sell as high or buy back as low as in the 1 day case, but it still does well.

The following is a general rule for a trailing stop loss strategy:

  • If the trailing period between the stop and the index is increased, the stop will end up lagging the index by a greater amount, capturing a smaller portion of the up leg and down leg of the index’s oscillation, and generating less outperformance over the index.
  • If the trailing period between the stop and the index is reduced, the stop will end up hugging the index more closely, capturing a greater portion of the up leg and down leg of the index’s oscillation, and generating more outperformance over the index.

Given this rule, we might think that the way to optimize the strategy is to always use the shortest trailing period possible–one day, one minute, one second, however short we can get it, so that the stop hugs the index to the maximum extent possible, capturing as much of the index’s upward and downward “turns” as it can.  This, of course, is true for an idealized price index that moves as a perfect, squeaky clean sine wave.  But as we will later see, using a short trailing period to time a real price index–one that contains the messiness of random short-term volatility–will increase the number of unnecessary interactions between the index and the stop, and therefore introduce new gap losses that will tend to offset the timing benefits.

Now, let’s change the trailing period to 20 days.  The following chart shows the performance:


The index and the stop end up being a perfect 180 degrees out of phase with each other, with the index crossing the stop every 20 days at a price of 50.  We might therefore think that the strategy will generate a zero return–buying at 50, selling at 50, buying at 50, selling at 50, buying at 50, selling at 50, and so on ad infinitum.  But what are we forgetting?  Gap losses.  As in the original stop loss case, they will pull the strategy into a negative return.

The trading rule that defines the strategy has the strategy buy in or stay long when the price is greater than or equal to the trailing stop, and sell out or stay in cash when the price is less than the trailing stop.  On the up leg, the price and the stop cross at 50, triggering a buy at 50.  On the down leg, however, the first level where the strategy realizes that the price is less than the stop, 50, is not 50.  Nor is it 49.99, or some number close by.  It’s 46.9.  The strategy therefore sells at 46.9.  On each oscillation, it repeats: buying at 50, selling at 46.9, buying at 50, selling at 46.9, accumulating a loss equal to the gap on each completed round-trip.  That’s why you see the strategy’s value (blue line) fall over time, even though the index (black line) and the stop (orange line) cross each other at the exact same point (50) in every cycle.

Now, to be clear, the same magnitude of gap losses was present earlier, when we set the stop’s trail at 1 day and 7 days.  The difference is that we couldn’t see them, because they were offset by the large gains that the strategy was generating through its trading.  On a 20 day trail, there is zero gain from the strategy’s trading–the index and the stop cross at the same value every time, 50–and so the gap losses show up clearly as net losses for the strategy.  Always remember: gap losses are not absolute losses, but losses relative to what a strategy would have produced on the assumption of continuous prices and continuous checking.
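The three trail settings we’ve now walked through (1 day, 7 days, 20 days) can be checked with a rough numerical sketch (zero spreads assumed, full value reinvested on each buy, names my own):

```python
import math

def index_price(t):
    # Equation (1): base 50, amplitude 20, period 40 days
    return 50 + 20 * math.sin(2 * math.pi / 40 * t)

def final_value(days, trail):
    """Final value of a daily-checked trailing stop strategy with the
    stop set to the close `trail` days ago (full value reinvested on
    each buy; the sine extends naturally to t < 0)."""
    value, invested = index_price(0), True
    for t in range(1, days + 1):
        price, stop = index_price(t), index_price(t - trail)
        if invested:
            value *= price / index_price(t - 1)
            if price < stop:
                invested = False
        elif price >= stop:
            invested = True
    return value

# Shorter trails hug the index and capture more of each oscillation;
# the 20 day trail harvests nothing but gap losses.
results = {trail: final_value(400, trail) for trail in (1, 7, 20)}
```

The 1 day trail finishes with the largest value, the 7 day trail with a smaller one, and the 20 day trail finishes below the index’s unchanged level of 50, with nothing to show for its trades but accumulated gap losses.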

Now, ask yourself: what’s another name for the trailing stop loss strategy that we’ve introduced here?  The answer: a momentum strategy.  The precise timing rule is:

(1) If the price is greater than or equal to the price N days ago, then buy or stay long.

(2) If the price is less than the price N days ago, then sell or stay out.

This rule is functionally identical to the timing rule of a moving average strategy, which uses averaging to smooth out single-point noise in the stop:

(1) If the price is greater than or equal to the average of the last N days’ prices, then buy or stay long.

(2) If the price is less than the average of the last N days’ prices, then sell or stay out.
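To see the functional kinship concretely, here is a sketch that computes both signals, with N = 10 and daily checking, on the sine-wave index from earlier (variable names are my own):

```python
import math

def index_price(t):
    # Equation (1): base 50, amplitude 20, period 40 days
    return 50 + 20 * math.sin(2 * math.pi / 40 * t)

N = 10
prices = [index_price(t) for t in range(201)]

# Momentum rule: long when the price >= the price N days ago
momentum_long = [prices[t] >= prices[t - N] for t in range(N, 201)]

# Moving average rule: long when the price >= the average of the last N prices
ma_long = [prices[t] >= sum(prices[t - N + 1 : t + 1]) / N
           for t in range(N, 201)]

# Fraction of days on which the two rules agree
agreement = sum(m == a for m, a in zip(momentum_long, ma_long)) / len(ma_long)
```

On this clean index, the two rules disagree only for a few days around each crossover, where the averaging in the moving average shifts the crossing point slightly; everywhere else they give the same signal.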

The takeaway, then, is that the momentum and moving average strategies that we examined in the prior piece are nothing more than specific ways of implementing the concept of a trailing stop loss.  Everything that we’ve just learned about that concept extends directly to their operations.

Now, to simplify, we’re going to ignore the momentum strategy from here forward, and focus strictly on the moving average strategy.  We will analyze the difference between the two strategies–which is insignificant–at a later point in the piece.

To summarize:

  • We can use a stop loss strategy to extract investment outperformance from an index’s oscillations by setting the stop to trail the index.
  • When the stop of a trailing stop loss strategy is set to trail very closely behind the index, the strategy will capture a greater portion of the upward and downward moves of the index’s oscillations.  All else equal, the result will be larger trading gains.  But all else is not always equal.  The larger trading gains will come at a significant cost, which we have not yet described in detail, but will discuss shortly.
  • The momentum and moving average strategies that we examined in the prior piece are nothing more than specific ways of implementing the concept of a trailing stop loss. Everything that we’ve learned about that concept extends directly over to their operations.

Determinants of Performance: Captured Downturns and Whipsaws

In this section, I’m going to explain how the moving average interacts with the price to produce two types of trades for the strategy: a profitable type, called a “captured downturn”, and an unprofitable type, called a “whipsaw.”  I’m going to introduce a precise rule that we can use to determine whether a given oscillation will lead to a captured downturn or a whipsaw.  

Up to now, we’ve been modeling the price of a security as a single sine wave with no vertical trend.  That’s obviously a limited simplification.  To take the analysis further, we need a more accurate model.

To build such a model, we start with a security’s primary fundamental: its potential payout stream, which, for an equity security, is its earnings stream.  Because we’re working on a total return basis, we assume that the entirety of the stream is retained internally. The result is a stream that grows exponentially over time.  We set the stream to start at 100, and to grow at 6% per year:


To translate the earnings into a price, we apply a valuation measure: a price-to-earnings (P/E) ratio, which we derive from an earnings yield, the inverse of a P/E ratio.  To model cyclicality in the price, we set the earnings yield to oscillate in sinusoidal form with a base or mean of 6.25% (inverse: P/E of 16), and a maximum cyclical deviation of 25% in each direction.   We set the period of the oscillation to be 7 years, mimicking a typical distance between business cycle peaks.  Prices are quoted on a monthly basis, as of the close:


The product of the security’s earnings and price-to-earnings ratio is just the security’s price index.  That index is charted below:


Admittedly, the model is not a fully accurate approximation of real security prices, but it’s adequate to illustrate the concepts that I’m now going to present.  The presentation may at times seem overdone, in terms of emphasizing the obvious, but the targeted insights are absolutely crucial to understanding the functionality of the strategy, so the emphasis is justified.
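For reference, the model just described can be reproduced in a few lines (a sketch; the function name is my own, and months are indexed from zero):

```python
import math

def model_price(month):
    """Price index = earnings * P/E, with earnings growing 6%/year from
    100 and the earnings yield oscillating sinusoidally around 6.25%
    with a 25% maximum deviation and a 7 year (84 month) period."""
    earnings = 100 * 1.06 ** (month / 12)
    earnings_yield = 0.0625 * (1 + 0.25 * math.sin(2 * math.pi * month / 84))
    return earnings / earnings_yield     # price = earnings * (1 / yield)

# At month 0 the index is priced at earnings of 100 times a P/E of 16 = 1600
```

Over each 7 year cycle, the P/E swings between 12.8 (yield 7.8125%) and roughly 21.3 (yield 4.6875%), while the earnings stream underneath grows exponentially.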

In the chart below, we show the above price index with a 10 month moving average line trailing behind it in orange, and a 60 month moving average line trailing behind it in purple:


We notice two things.  First, the 10 month moving average line trails closer to the price than the 60 month moving average line.  Second, the 10 month moving average line responds to price changes more quickly than the 60 month moving average line.

To understand why the 10 month moving average trails closer and responds to changes more quickly than the 60 month moving average, all we need to do is consider what happens in a moving average over time.  As each month passes, the last monthly price in the average falls out, and a new monthly price, equal to the price in the most recent month (denoted with a * below), is introduced in.

The following chart shows this process for the 10 month moving average:

As each month passes, the price in box 10 (from 10 months ago) is thrown out.  All of the prices shift one box to the left, and the price in the * box goes into box 1.

The following chart shows the same process for the 60 month moving average.  Note that the illustration is abbreviated–we don’t show all 60 numbers, but abbreviate with the “…” insertion:


A key point to remember here is that the prices in the index trend higher over time.  More recent prices therefore tend to be higher in value than more distant prices.  The price from 10 months ago, for example, tends to be higher than the prices from 11, 12, and 13 months ago, and especially the prices from 57, 58, and 59 months ago.  Because the 60 month moving average has more of those older prices inside its “average” than the 10 month moving average, its value tends to trail (i.e., be less than) the current price by a greater amount.

The 60 month moving average also has a larger quantity of numbers inside its “average” than the 10 month moving average–60 versus 10.  For that reason, the net impact on the average of tossing a single old number out, and bringing a single new number in–an effect that occurs once each month–tends to be less for the 60 month moving average than for the 10 month moving average.  That’s why the 60 month moving average responds more slowly to changes in the price.  The changes are less impactful to its average, given the larger number of terms contained in that average.

These two observations represent two fundamental insights about the relationship between the period (length) of a moving average and its behavior.  That relationship is summarized in the bullets and table below:

  • As the period (length) of a moving average is reduced, the moving average tends to trail closer to the price, and to respond faster to changes in the price.
  • As the period (length) of a moving average is increased, the moving average tends to trail farther away from the price, and to respond more slowly to changes in the price.
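The first of these bullets can be confirmed numerically on the trending model index introduced earlier (a sketch; function names are my own, and the averaging window is measured over six full 84 month cycles so that the oscillation washes out):

```python
import math

def model_price(month):
    # Earnings grow 6%/year from 100; the earnings yield oscillates
    # around 6.25% with a 25% maximum deviation and an 84 month period.
    earnings = 100 * 1.06 ** (month / 12)
    earnings_yield = 0.0625 * (1 + 0.25 * math.sin(2 * math.pi * month / 84))
    return earnings / earnings_yield

prices = [model_price(m) for m in range(600)]

def moving_average(series, t, n):
    """Average of the last n monthly closes, ending at month t."""
    return sum(series[t - n + 1 : t + 1]) / n

# Average distance between the price and each moving average
months = range(60, 564)   # six full cycles, after both averages are defined
gap10 = sum(prices[t] - moving_average(prices, t, 10) for t in months) / len(months)
gap60 = sum(prices[t] - moving_average(prices, t, 60) for t in months) / len(months)
```

Because the index trends upward, both averages sit below the price on average, and the 60 month average trails it by a substantially larger amount than the 10 month average.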

The following chart illustrates the insights for the 10 and 60 month cases:


With these two insights in hand, we’re now ready to analyze the strategy’s performance. The following chart shows the performance of the 10 month moving average strategy on our new price index.  The value of the strategy is shown in blue, and the outperformance over buy and hold is shown dotted in green (right y-axis):


The question we want to ask is: how does the strategy generate gains on the index? Importantly, it can only generate gains on the index when it’s out of the index–when it’s invested in the index, its return necessarily equals the index’s return.  Obviously, the only way to generate gains on the index while out of the index is to sell the index and buy back at lower prices.  That’s what the strategy tries to do.

The following chart shows the sales and buys, circled in red and green respectively:


As you can see, the strategy succeeds in its mission: it sells high and buys back low.  For a strategy to be able to do that, something very specific has to happen after the sales.  The price needs to move down below the sale price, and then, crucially, before it turns back up, it needs to spend enough time below that price to bring the moving average line down with it.  Then, when it turns back up and crosses the moving average line, it will cross at a lower point, causing the strategy to buy back in at a lower price than it sold at.  The following chart illustrates with annotations:


Now, in the above drawing, we’ve put the sells and buys exactly at the points where the price crosses over the moving average, which is to say that we’ve shown the trading outcome that would ensue if prices were perfectly continuous, and if our strategy were continuously checking them.  But prices are not perfectly continuous, and our strategy is only checking them on a monthly basis.  It follows that the sells and buys are not going to happen exactly at the crossover points–there will be gaps, which will create losses relative to the continuous case.  For a profit to occur on a round-trip trade, then, not only will the moving average need to get pulled down below the sale price, it will need to get pulled down by an amount that is large enough to offset the gap losses that will be incurred.

As we saw earlier, in the current case, the 10 month moving average responds relatively quickly to the changes in the price, so when the price falls below the sale price, the moving average comes down with it.  When the price subsequently turns up, the moving average is at a much lower point.  The subsequent crossover therefore occurs at a much lower point, a point low enough to offset inevitable gap losses and render the trade profitable.

Now, it’s not guaranteed that things will always happen in this way.  In particular, as we saw earlier, if we increase the moving average period, the moving average will respond more slowly to changes in the price.  To come down below the sale price, it will need the price to spend more time at lower values after the sale.  The price may well turn up before that happens.  If it does, then the strategy will not succeed in buying at a lower price.

To see the struggle play out in an example, let’s look more closely at the case where the 60 month moving average is used.  The following chart shows the performance:


As you can see, the strategy ends up underperforming.  There are two aspects to the underperformance.

  • First, because the moving average trails the price by such a large amount, the price ends up crossing the moving average on the down leg long after the peak is in, at prices that are actually very close to the upcoming trough.  Only a small portion of the downturn is therefore captured for potential profit.
  • Second, because of the long moving average period, which implies a slow response, the moving average does not come down sufficiently after the sales occur.  Therefore, when the price turns back up, the subsequent crossover does not occur at a price that is low enough to offset the gap losses incurred in the eventual trade that takes place.

On this second point, if you look closely, you will see that the moving average actually continues to rise after the sales.  The subsequent crossovers and buys, then, are triggered at higher prices, even before gap losses are taken into consideration.  The following chart illustrates with annotations:


The reason the moving average continues to rise is that it’s throwing out very low prices from five years ago (60 months), and replacing them with newer, higher prices from today. Even though the newer prices have fallen substantially from their recent peak, they are still much higher than the older prices that they are replacing.  So the moving average continues to drift upward.  When the price turns back up, it ends up crossing the moving average at a higher price than where the sale happened, completing an unprofitable trade (sell high, buy back higher), even before the gap losses are added in.

In light of these observations, we can categorize the strategy’s trades into two types: captured downturns and whipsaws.

  • In a captured downturn, the price falls below the moving average, triggering a sale. The price then spends enough time at values below the sale price to pull the moving average down below the sale price.  When the price turns back up, it crosses the moving average at a price below the sale price, triggering a buy at that price.  Crucially, the implied profit in the trade exceeds the gap losses and any other costs incurred, including the cost of traversing the bid-ask spread.  The result ends up being a net gain for the strategy relative to the index.  This gain comes in addition to the risk-reduction benefit of avoiding the drawdown itself.
  • In a whipsaw, the price falls below the moving average, triggering a sale.  It then turns back up above the moving average too quickly, without having spent enough time at lower prices to pull the moving average down sufficiently below the sale price.  A subsequent buy is then triggered at a price that is not low enough to offset gap losses (and other costs) incurred in the transaction.  The result ends up being a net loss for the strategy relative to the index.  It’s important to once again recognize, here, that the dominant component of the loss in a typical whipsaw is the gap.  In a perfectly continuous market, where gap losses did not exist, whipsaws would typically cause the strategy to get out and get back in at close to the same prices.

Using these two trade categories, we can write the following equation for the performance of the strategy:

(7) Strategy(t) = Index(t) + Cumulative Gains from Captured Downturns(t) – Cumulative Losses from Whipsaws(t)

What equation (7) is saying is that the returns of the strategy at any given time equal the value of the index (buy and hold) at that time, plus the sum total of all gains from captured downturns up to that time, minus the sum total of all losses from whipsaws up to that time.  Note that we’ve neglected potential interest earned while out of the security, which will slightly boost the strategy’s return.
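The bookkeeping in equation (7) can also be sketched in multiplicative (return-factor) form.  The trade list, cost figure, and function name below are hypothetical, chosen only to illustrate the accounting:

```python
def tally_trades(trades, cost=0.006):
    """trades: a list of (sell_price, buyback_price) round trips.
    Each trade's relative multiplier versus the index is its sale
    price over its buyback price, net of transaction costs."""
    gains, losses = 1.0, 1.0
    for sell, buy in trades:
        mult = (sell / buy) * (1 - cost)
        if mult > 1.0:
            gains *= mult          # captured downturn
        else:
            losses *= 1.0 / mult   # whipsaw (stored as a loss factor)
    return gains, losses

# One deep captured downturn and one shallow whipsaw (hypothetical prices):
gains, losses = tally_trades([(1.00, 0.80), (1.00, 1.02)])
```

In this form, the strategy’s value at time t is approximately the index’s value times the cumulative gain factor, divided by the cumulative loss factor.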

Now, there’s a simple rule of thumb that we can use to determine whether the strategy will produce a captured downturn or a whipsaw in response to a given oscillation.  We simply compare the period of the moving average to the period of the oscillation.  If the moving average period is substantially smaller than the oscillation period, the strategy will produce a captured downturn and a profit relative to the index, with the profit getting larger as the moving average period gets smaller.  If the moving average period is in the same general range as the oscillation period–or worse, greater than the oscillation period–then the strategy will produce a whipsaw and a loss relative to the index.

Here’s the rule in big letters (note: “<<” means significantly less than):


To test the rule in action, the following chart shows the strategy’s outperformance on the above price index using moving average periods of 70, 60, 50, 40, 30, 20, 10, 5, and 1 month(s):


As you can see, a moving average period of 1 month produces the greatest outperformance. As the moving average period is increased from 1, the outperformance is reduced.  As the moving average period is increased into the same general range as the period of the price oscillation, 84 months (7 years), the strategy begins to underperform.
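A minimal simulation conveys the same pattern.  The sketch below prices the index as 6% annual earnings growth times an inverse sine wave P/E, checks monthly, and reports the strategy’s trailing return ratio versus buy-and-hold.  The function name and the 25% amplitude are our own illustrative assumptions, not the exact parameters behind the charts:

```python
import math

def trailing_ratio(ma_period, osc_period=84, months=480, growth=0.06):
    """Strategy value divided by index value after running a monthly
    moving average crossover strategy on a clean growth-plus-sine index."""
    prices = []
    for t in range(months):
        trend = (1 + growth) ** (t / 12.0)            # 6%/yr earnings growth
        pe = 1 - 0.25 * math.sin(2 * math.pi * t / osc_period)
        prices.append(trend * pe)
    strat, index, invested = 1.0, 1.0, True
    for t in range(1, months):
        r = prices[t] / prices[t - 1]
        index *= r
        if invested:
            strat *= r                                # in the market this month
        window = prices[max(0, t - ma_period): t]     # trailing prior closes
        invested = prices[t] > sum(window) / len(window)
    return strat / index

for p in (1, 10, 60):
    print(p, round(trailing_ratio(p), 2))
```

Consistent with the rule, periods far below the 84-month oscillation produce ratios above 1, and the advantage shrinks as the moving average period approaches the oscillation period.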

The following chart shows what happens if we set the oscillation period of the index to equal the strategy’s moving average period (with both set to a value of 10 months):


The performance devolves into a cycle of repeating whipsaws, with ~20% losses on each iteration.  Shockingly, the strategy ends up finishing the period at a value less than 1% of the value of the index.  This result highlights the significant risk of using a trend-following strategy–large amounts of money can be lost in the whipsaws, especially as they compound on each other over time.
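The arithmetic behind that collapse is straightforward compounding.  A roughly 20% relative loss per whipsaw only needs to repeat about 21 times to leave the strategy below 1% of the index:

```python
ratio, n = 1.0, 0
while ratio >= 0.01:
    ratio *= 0.80   # a ~20% relative loss on each whipsaw
    n += 1
print(n)   # 21
```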

Recall that for each of the markets backtested in the prior piece, I presented tables showing the strategy’s entry dates (buy) and exit dates (sell).  The example for U.S. equities is shown below (February 1928 – November 2015):


We can use the tables to categorize each round-trip trade as a captured downturn or a whipsaw.  The simple rule is:

  • Green Box = Captured Downturn
  • Red Box = Whipsaw

An examination of the tables reveals that in equities, whipsaws tend to be more frequent than captured downturns, typically by a factor of 3 or 4.  But, on a per unit basis, the gains of captured downturns tend to be larger than the losses of whipsaws, by an amount sufficient to overcome the increased frequency of whipsaws, at least when the strategy is working well.

Recall that in the daily momentum backtest, we imposed a relatively large 0.6% slip (loss) on each round-trip transaction.  As we explained in a related piece on daily momentum, we used that slip in order to correctly model the average cost of traversing the bid-ask spread during the tested period, 1928 – 2015.  To use any other slip would be to allow the strategy to transact at prices that did not actually exist at the time, and we obviously can’t do that in good faith.

Now, if you look at the table, you will see that the average whipsaw loss across the period was roughly 6.0%.  Of that loss, 0.6% is obviously due to the cost of traversing the bid-ask spread.  We can reasonably attribute the other 5.4% to gap losses.  So, even using a conservative estimate of the cost of traversing the bid-ask spread, the typical gap loss ends up being roughly 9 times as large as the spread cost.  You can see, then, why we’ve been emphasizing the point that gap losses are the more important type of loss to focus on.
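The attribution is simple subtraction, worth making explicit with the figures above:

```python
avg_whipsaw_loss = 0.060   # average whipsaw loss from the tables
spread_cost = 0.006        # modeled cost of traversing the bid-ask spread
gap_loss = avg_whipsaw_loss - spread_cost
print(round(gap_loss, 3), round(gap_loss / spread_cost, 1))   # 0.054 9.0
```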

To finish off the section, let’s take a close-up look at an actual captured downturn and whipsaw from a famous period in U.S. market history–the Great Depression.  The following chart shows the 10 month moving average strategy’s performance in U.S. equities from February 1928 to December 1934, with the entry-exit table shown on the right:


The strategy sells out in April 1930, as the uglier phase of the downturn begins.  It buys back in August 1932, as the market screams higher off of the ultimate Great Depression low.  Notice how large the gap loss ends up being on the August 1932 purchase.  The index crosses the moving average at around 0.52 in the early part of the month (1.0 equals the price in February 1928), but the purchase on the close happens at 0.63, roughly 20% higher.  The gap loss was large because the market had effectively gone vertical at the time. Any amount of delay between the crossover and the subsequent purchase would have been costly, imposing a large gap loss.

The strategy exits again in February 1933 as the price tumbles below the moving average. That exit proves to be a huge mistake, as the market rockets back up above the moving average over the next two months.  Recall that March 1933 was the month in which FDR took office and began instituting aggressive measures to save the banking system (a bank holiday, gold confiscation, etc.).  After an initial scare, the market responded very positively.  As before, notice the large size of the gap loss on both the February sale and the March purchase.  If the strategy had somehow been able to sell and then buy back exactly at the points where the price theoretically crossed the moving average, there would hardly have been any loss at all for the strategy.  But the strategy can’t buy at those prices–the market is not continuous.

To summarize:

  • On shorter moving average periods, the moving average line trails the price index by a smaller distance, and responds more quickly to its movements.
  • On longer moving average periods, the moving average line trails the price index by a larger distance, and responds more slowly to its movements.
  • For the moving average strategy to generate a relative gain on the index, the price must fall below the moving average, triggering a sale.  The price must then spend enough time below the sale price to bring the moving average down with it, so that when the price subsequently turns back up, it crosses the moving average at a lower price than the sale price.  When that happens by an amount sufficient to offset gap losses (and other costs associated with the transaction), we say that a captured downturn has occurred.  Captured downturns are the strategy’s source of profit relative to the index.
  • When the price turns back up above the moving average too quickly after a sale, triggering a buy without having pulled the moving average line down by a sufficient amount to offset the gap losses (and other costs incurred), we call that a whipsaw. Whipsaws are the strategy’s source of loss relative to the index.
  • Captured downturns occur when the strategy’s moving average period is substantially less than the period of the oscillation that the strategy is attempting to time.  Whipsaws occur whenever that’s not the case.
  • The strategy’s net performance relative to the index is determined by the balance between the effects of captured downturns and the effects of whipsaws.

Tradeoffs: Changing the Moving Average Period and the Checking Period

In this section, I’m going to add an additional component to our “growing sine wave” model of prices, a component that will make the model into a genuinely accurate approximation of real security prices.  I’m then going to use the improved model to explain the tradeoffs associated with changing (1) the moving average period and (2) the checking period.  

In the previous section, we modeled security prices as a combination of growth and cyclicality: specifically, an earnings stream growing at 6% per year multiplied by a P/E ratio oscillating as an inverse sine wave with a period of 7 years.  The resultant structure, though useful to illustrate concepts, is an unrealistic proxy for real prices–too clean, too smooth, too easy for the strategy to profitably trade.

To make the model more realistic, we need to add short-term price movements to the security that are unrelated to its growth or long-term cyclicality.  There are a number of ways to do that, but right now, we’re going to do it by adding random short-term statistical deviations to both the growth rate and the inverse sine wave.  Given the associated intuition, we will refer to those deviations as “volatility”, even though the term “volatility” has a precise mathematical meaning that may not always accurately apply.
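One minimal way to sketch that construction in code, using illustrative parameters and a simple noise model of our own choosing (compounding lognormal monthly shocks on the growth trend), is the following:

```python
import math
import random

def make_index(months=480, growth=0.06, cyc=0.25, vol=0.02, seed=7):
    """Growth trend with compounding random monthly shocks, times an
    inverse-sine P/E cycle with a 7-year (84-month) period."""
    random.seed(seed)
    level, prices = 1.0, []
    for t in range(months):
        level *= (1 + growth) ** (1 / 12.0) * math.exp(random.gauss(0, vol))
        pe = 1 - cyc * math.sin(2 * math.pi * t / 84)
        prices.append(level * pe)
    return prices

index = make_index()
```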


The resultant price index, shown below, ends up looking much more like the price index of an actual security.  Note that the index was randomly generated:


Now would probably be a good time to briefly illustrate the reason why we prefer the moving average strategy to the momentum strategy, even though the performances of the two strategies are statistically indistinguishable.  In the following chart, we show the index trailed by a 15 month momentum line (hot purple) and a 30 month moving average line (orange).  The periods have been chosen to create an overlap:


As you can see, the momentum (MOM) line is just the price index shifted to the right by 15 months.  It differs from the moving average (MA) line in that it retains all of the index’s volatility.  The moving average line, in contrast, smooths that volatility away by averaging.
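The difference between the two lines is easy to state in code.  In this simplified sketch (our own function names and demo series), the momentum line is just the index’s own price shifted right by n periods, while the moving average line averages the trailing n prices:

```python
import math
import random

def momentum_line(prices, n):
    """MOM line: the index's own price from n periods ago."""
    return [prices[max(0, t - n)] for t in range(len(prices))]

def moving_average_line(prices, n):
    """MA line: the average of the trailing n prices, which smooths
    away the short-term volatility that the MOM line retains."""
    out = []
    for t in range(len(prices)):
        window = prices[max(0, t - n + 1): t + 1]
        out.append(sum(window) / len(window))
    return out

# Demo on a random-walk price series (illustrative):
random.seed(0)
level, prices = 1.0, []
for _ in range(240):
    level *= math.exp(random.gauss(0, 0.05))
    prices.append(level)

mom = momentum_line(prices, 15)
ma = moving_average_line(prices, 30)

def avg_step(line):
    """Average absolute month-to-month change, a crude choppiness gauge."""
    return sum(abs(line[t] - line[t - 1])
               for t in range(1, len(line))) / (len(line) - 1)
```

On this kind of series, the MA line’s month-to-month steps come out much smaller than the MOM line’s, which is the smoothing visible in the chart.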

Now, on an ex ante basis, there’s no reason to expect the index’s volatility, when carried over into the momentum line, to add value to the timing process.  Certainly, there will be individual cases where the specific movements in the line, by chance, help the strategy make better trades.  But, statistically, there will be just as many cases where those movements, by chance, will cause the strategy to make worse trades.   This expectation is confirmed in actual testing.  Across a large universe of markets and individual stocks, we find no statistically significant difference between the performance results of the two strategies.

For convenience, we’ve chosen to focus on the strategy that has the simpler, cleaner look–the moving average strategy.  But we could just as easily have chosen the momentum strategy.  Had we done so, the same insights and conclusions would have followed.  Those insights and conclusions apply to both strategies without distinction.


Now, there are two “knobs” that we can turn to influence the strategy’s performance.  The first “knob” is the moving average period, which we’ve already examined to some extent, but only on a highly simplified model of prices.  The second “knob” is the period between the checks, i.e., the checking period, whose effect we have yet to examine in detail.  In what follows, we’re going to examine both, starting with the moving average period.  Our goal will be to optimize the performance of the strategy–“tweak” it to generate the highest possible relative gain on the index.

We start by setting the moving average period to a value of 30 months.  The following chart shows the strategy’s performance:


As you can see, the strategy slightly outperforms.  It doesn’t make sense for us to use 30 months, however, because we know, per our earlier rule, that shorter moving average periods will capture more of the downturns and generate greater outperformance:

So we shorten the moving average period to 20 months, expecting a better return.  The following chart shows the performance:


As we expected, the outperformance increases.  Using a 30 month moving average period in the prior case, the outperformance came in at 1.13 (dotted green line, right y-axis).  In reducing the period to 20 months, we’ve increased the outperformance to 1.20.

Of course, there’s no reason to stop at 20 months.  We might as well shorten the moving average period to 10 months, to improve the performance even more.  The following chart shows the strategy’s performance using a 10 month moving average period:


Uh-oh!  To our surprise, the performance gets worse, not better, contrary to the rule.

What’s going on?

Before we investigate the reasons for the deterioration in the performance, let’s shorten the moving average period further.  The following two charts show the performances for moving average periods of 5 months and 1 month respectively:


As you can see, the performance continues to deteriorate.  Again, this result is not what we expected.  In the prior section, when we shortened the moving average period, the performance got better. The strategy captured a greater portion of the cyclical downturns, converting them into larger gains on the index.  Now, when we shorten the moving average period, the performance gets worse.  What’s happening?

Here’s the answer.  As we saw earlier, when we shorten the moving average period, we cause the moving average to trail (hug) more closely to the price.  In the previous section, the price was a long, clean, cyclical sine wave, with no short-term volatility that might otherwise create whipsaws, so the shortening improved the strategy’s performance.  But now, we’ve added substantial short-term volatility to the price–random deviations that are impossible for the strategy to successfully time.  At longer moving average periods–30 months, 20 months, etc.–the moving average trails the price by a large amount, and therefore never comes into contact with that volatility.  At shorter moving average periods, however, the moving average is pulled in closer to the price, where it comes into contact with the volatility.  It then suffers whipsaws that it would not otherwise suffer, incurring gap losses that it would not otherwise incur.

Of course, it’s still true that shortening the moving average period increases the portion of the cyclical downturns that the strategy captures, so there’s still that benefit.  But the cumulative harm of the additional whipsaws introduced by the shortening substantially outweighs that benefit, leaving the strategy substantially worse off on net.

The following two charts visually explain the effect of shortening the moving average period from 30 months to 5 months:


If the “oscillations” associated with random short-term price deviations could be described as having a period (in the way that a sine wave would have a period), the period would be very short, because the oscillations tend to “cycle” back and forth very quickly.  Given our “MA Period << Oscillation Period” rule, then, it’s extremely difficult for a timing strategy to profitably time the oscillations.  In practice, the oscillations almost always end up producing whipsaws.

Ultimately, the only way for the strategy to avoid the implied harm of random short-term price deviations is to avoid touching them.  Strategies that use longer moving average periods are more able to do that than strategies that use shorter ones, which is why strategies that use longer moving average periods often outperform, even though they aren’t as effective at converting downturns into gains.

The following table describes the fundamental tradeoff associated with changing the moving average period.  Green is good for performance, red is bad:


As the table illustrates, when we shorten the moving average period, we increase both the good stuff (captured downturns) and the bad stuff (whipsaws).  When we lengthen the moving average period, we reduce both the good stuff (captured downturns) and the bad stuff (whipsaws).

Ultimately, optimizing the strategy is about finding the moving average period that brings the moving average as close as possible to the price, so that it maximally captures tradeable cyclical downturns, but without causing it to get so close to the price that it comes into contact with untradeable short-term price volatility.  We can imagine the task as being akin to the task of trying to pull a rose out of a rose bush.  We have to reach into the bush to pull out the rose, but we don’t want to reach so deep that we end up getting punctured by thorns.


Now, the index that we built is quoted on a monthly basis, at the close.  If we wanted to, we could change the period between the quotes from one month to one day–or one hour, or one minute, or one second, or less.  Doing that would allow us to reduce the moving average period further, so that we capture cyclical downturns more fully than we may have otherwise been capturing them.  But it would also bring us into contact with short-term volatility that we were previously unaware of and unexposed to, volatility that will increase our whipsaw losses, potentially dramatically.

We’re now ready to examine the strategy’s second “knob”, the period of time that passes between the checks, called the “checking period.”  In the current case, we set the checking period at one month.  But we could just as easily have set it at one year, five days, 12 hours, 30 minutes, 1 second, and so on–the choice is ours, provided that we have access to price quotes on those time scales.

The effects of changing the checking period are straightforward.  Increasing the checking period, so that the strategy checks less frequently, has the effect of reducing the quantity of price information that the strategy has access to.  The impact of that reduction boils down to the impact of having the strategy see or not see the information:

  • If the information is useful information, the type that the strategy stands to benefit from transacting on, then not seeing the information will hinder the strategy’s performance. Specifically, it will increase the gap losses associated with each transaction.  Prices will end up drifting farther away from the moving average before the prescribed trades take place.
  • If the information is useless information, the type that the strategy does not stand to benefit from transacting on, then not seeing the information will improve the strategy’s performance.  The strategy will end up ignoring short-term price oscillations that would otherwise entangle it in whipsaws.

The converse is true for reducing the checking period, i.e., conducting checks more frequently.  Reducing the checking period has the effect of increasing the quantity of price information that the strategy has access to.  If the information is useful, then the strategy will trade on it more quickly, reducing the extent to which the price “gets away”, and therefore reducing gap losses.  If the information is useless, then it will push the strategy into additional whipsaws that will detract from performance.
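To make the knob concrete, a checking period can be bolted onto a crossover sketch as simple subsampling: the strategy only evaluates its signal every `check_every` periods and trades at that period’s close.  This is a schematic illustration, with parameter names of our own choosing:

```python
def run_strategy(prices, ma_period, check_every=1):
    """Moving average crossover timing in which the signal is only
    evaluated every `check_every` periods; between checks, the
    current position is simply held."""
    strat, index, invested = 1.0, 1.0, True
    for t in range(1, len(prices)):
        r = prices[t] / prices[t - 1]
        index *= r
        if invested:
            strat *= r
        if t % check_every == 0:                     # a "check"
            window = prices[max(0, t - ma_period): t]
            invested = prices[t] > sum(window) / len(window)
    return strat / index
```

Lengthening `check_every` hides information: chop between checks can no longer trigger whipsaws, but prescribed trades execute later, which enlarges gap losses.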

The following table illustrates the tradeoffs associated with changing the checking period.  As before, green is good for performance, red is bad:


The following two charts show the performances of the strategy in the total U.S. equity market index from January of 1945 to January of 1948.  The first chart runs the strategy on the daily price index (checking period of 1 day) using a 100 day moving average (~5  months).  Important captured downturns are circled in green, and important whipsaws are circled in red:


The second chart runs the strategy on the monthly price index (checking period of 1 month) using a 5 month moving average.


Notice the string of repeated whipsaws that occur in the left part of the daily chart, around the middle of 1945 and in the early months of 1946.  The monthly strategy completely ignores those whipsaws.  As a result, across the entire period, it ends up suffering only 2 whipsaws.  The daily strategy, in contrast, suffers 14 whipsaws.  Crucially, however, those whipsaws come with average gap losses that are much smaller than the average gap losses incurred on the whipsaws in the monthly strategy.  In the end, the two approaches produce a result that is very similar, with the monthly strategy performing slightly better.

Importantly, on a daily checking period, the cumulative impact of losses associated with traversing the bid-ask spread becomes significant, almost as significant as the impact of gap losses, which, of course, are smaller on a daily checking period.  That’s one reason why the monthly strategy may be preferable to the daily strategy.  Unlike in the daily strategy, in the monthly strategy we can accurately model large historical bid-ask spreads without destroying performance.

We see this in the following two charts, which show the performance of the daily strategy from February 1928 to July 2015 using (1) zero bid-ask spread losses and (2) bid-ask spread losses equal to the historical average of roughly 0.6%:


The impact on the strategy ends up being worth 1.8%.  That contrasts with an impact of 0.5% for the monthly strategy using the equivalent 10 month moving average, the preferred setting of the strategy’s inventor, Mebane Faber.  We show the results of the monthly strategy for 0% bid-ask spread losses and 0.6% bid-ask spread losses in the two charts below, respectively:


From looking at the charts, the daily version of the strategy appears to be superior.  But it’s hard to be confident in the results of the daily version, because in the last several decades, its performance has significantly deteriorated relative to the performance seen in prior eras of history.  The daily version generated frequent whipsaws and no net gains in the recent downturns of 2000 and 2008, in contrast with the substantial gains that it generated in the more distant downturns of 1929, 1937, 1970 and 1974.  The deterioration may be related to the deterioration of the one day momentum strategy (see a discussion here), whose success partially feeds into the success of all trend-following strategies that conduct checks of daily prices.

To summarize:

  • For the moving average period:


  • For the checking period:


Aggregate Indices vs. Individual Securities: Explaining the Divergent Results

In this section, I’m going to use insights gained in previous sections to explain why the trend-following strategies that we tested in the prior piece work well on aggregate indices, but not on the individual securities that make up those indices.   Recall that this was a key result that we left unresolved in the prior piece.  

To understand why the momentum and moving average strategies that we tested in the prior piece work on aggregate indices, but not on the individual securities that make up those indices, we return to our earlier decomposition of stock prices.  An understanding of how the phenomenon of indexing differentially affects the latter two variables in the decomposition–cyclicality and volatility–will give the answer.


We first ask, what effect does increasing each component of the decomposition, while leaving all other components unchanged, have on the strategy’s performance?  We begin with growth.

Growth: Increasing the growth, which is the same thing as postulating higher future returns for the index, tends to impair the strategy’s performance.  The higher growth makes it harder for the strategy to capture downturns–the downturns themselves don’t go as far down, because the growth is pushing up on them to a greater extent over time.  The subsequent buys therefore don’t happen at prices as low as they might otherwise happen, reducing the gain on the index.

Importantly, when we’re talking about the difference between annual growth numbers of 6%, 7%, 8%, and so on, the effect of the difference on the strategy’s performance usually doesn’t become significant. It’s only at very high expected future growth rates–e.g., 15% and higher–that the growth becomes an impeding factor that makes successful timing difficult.  When the expected future return is that high–as it was at many past bear market lows–1932, 1941, 1974, 2009, etc.–it’s advisable to abandon market timing altogether, and just focus on being in for the eventual recovery.   Market timing is something you want to do when the market is expensive and when likely future returns are weak, as they unquestionably are right now.

Cyclicality:  Admittedly, I’ve used the term “cyclicality” somewhat sloppily in the piece. My intent is to use the term to refer to the long, stretched-out oscillations that risk assets exhibit over time, usually in conjunction with the peaks and troughs of the business cycle, which is a multi-year process.  When the amplitude of these oscillations is increased, the strategy ends up capturing greater relative gains on the downturns.  The downturns end up going farther down, and therefore after the strategy exits the index on their occurrence, it ends up buying back at prices that have dropped by a larger amount, earning a greater relative gain on the index.

In the following slideshow, we illustrate the point for the 10 month moving average strategy (click on any image for a carousel to open).  We set the volatility at a constant value of 2% (ignore the unit for now), and increase the amplitude of the 7 year sinusoidal oscillation in the earnings yield from 10% to 50% of the sine wave’s base or height:

As you can see, the strategy gains additional outperformance on each incremental increase in the sine wave’s amplitude, which represents the index’s “cyclicality.”  At a cyclicality of 10%, which is almost no cyclicality at all, the strategy underperforms the index by almost 80%, ending at a laughable trailing return ratio of 0.2.  At a cyclicality of 50%, in contrast, the strategy outperforms the index by a factor of 9 times.

Volatility:  I’ve also used the term “volatility” somewhat sloppily.  Though the term has a defined mathematical meaning, it’s associated with an intuition of “choppiness” in the price. In using the term, my intention is to call up that specific intuition, which is highly relevant to the strategy’s performance.

Short-term volatility produces no net directional trend in price over time, and therefore it cannot be successfully timed by a trend-following strategy.  When a trend-following strategy comes into contact with it, the results are useless gap losses and bid-ask traversals, both of which detract substantially from performance.  The following slideshow illustrates the point.  We hold the cyclicality at 25%, and dial up the volatility from 2% to 5% (again, ignore the meaning of those specific percentages, just focus on the fact that they are increasing):

As the intensity of the volatility–the “choppiness” of the “chop”–is dialed up, the moving average comes into contact with a greater portion of the volatility, producing more whipsaws. Additionally, each whipsaw comes with larger gap losses, as the price overshoots the moving average by a larger amount on each cross.  In combination, these two hits substantially reduce the strategy’s performance.

We can understand the strategy’s performance as a tug of war between cyclicality and volatility.  Cyclicality creates large sustained downturns that the strategy can profitably capture.  Volatility, in contrast, creates whipsaws that the strategy suffers from being involved with.  When cyclicality proves to be the greater presence, the strategy outperforms.  When volatility proves to be the greater presence, the strategy underperforms.


Now, to the reason why indexing improves the performance of the strategy: indexing filters out the volatility contained in individual securities, while preserving their cyclicality.

When an index is built out of a constituent set of securities, the price movements that are unique to the individual constituents tend to get averaged down.  One security may be fluctuating one way, but if the others are fluctuating another way, or are not fluctuating at all, the fluctuations in the original security will get diluted away in the averaging. The price movements that are common to all of the constituents, in contrast, will not get diluted away, but will instead get pulled through into the overall index average, where they will show up in full intensity.

In risk assets, the cyclicality associated with the business cycle tends to show up in all stocks and risky securities.  All stocks and risky securities fall during recessions and rise during recoveries.  That cyclicality, then, tends to show up in the index.  Crucially, however, the short-term volatility that occurs in the individual securities that make up the index–the short-term movements associated with names like Disney, Caterpillar, Wells Fargo, Exxon Mobil, and so on, where each movement is driven by a different, unrelated story–does not tend to show up in the index.

The 100 individual S&P 500 securities that we tested in the prior piece exhibit substantial volatility–collectively averaging around 30% for the 1963 to 2015 period.  Their cyclicality–their tendency to rise and fall every several years in accordance with the ups and downs of the business cycle–is not enough to overcome this volatility, and therefore the strategy tends to trade unprofitably.  But when the movements of all of the securities are averaged together into an index–the S&P 500–the divergent movements of the individual securities dilute away, reducing the volatility of the index by half, to a value of 15%.  The cyclicality contained in each individual constituent, however, is fully retained in the index.  The preserved cyclicality comes to dominate the diminished volatility, allowing the strategy to trade profitably.

We illustrate this process below.  The six different securities in the following six charts (click for a slideshow) combine a common cyclicality (long-term oscillations on a 7 year cycle) with large amounts of random, independent volatility.  In each security, the volatility is set to a very high value, where it consistently dominates over the cyclicality, causing the strategy to underperform:

When we take the different securities and build a combined index out of them, the volatility unique to each individual security drops out, but the cyclicality that the securities have in common–their tendency to rise and fall every 7 years in accordance with our simplified sinusoidal model of the business cycle–remains.  The strategy is therefore able to outperform in the combined index.
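This dilution effect is easy to reproduce in a toy simulation.  The sketch below (Python; the amplitudes and noise levels are arbitrary choices for illustration, not the actual test data from the piece) builds six securities out of a shared 84-month sinusoid plus large independent noise, then averages them into an equal-weighted index:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(840)  # 70 years of monthly observations

# Common cyclicality: a sinusoid with a 7-year (84-month) period.
cycle = np.sin(2 * np.pi * months / 84)

# Six securities: the shared cycle plus large, independent random noise.
noise = rng.normal(0.0, 3.0, size=(6, months.size))
securities = cycle + noise

# Equal-weighted index: the independent noise averages down by roughly
# 1/sqrt(6), while the common cycle carries through at full amplitude.
index = securities.mean(axis=0)

avg_security_vol = securities.std(axis=1).mean()
index_vol = index.std()
cycle_correlation = np.corrcoef(index, cycle)[0, 1]
print(avg_security_vol, index_vol, cycle_correlation)
```

In the individual series the noise swamps the cycle; in the index, the standard deviation falls by more than half while the correlation with the underlying cycle rises sharply–the same volatility-diluting, cyclicality-preserving effect described above.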

The following chart shows that outperformance.  The black line is an equal-weighted index built out of the six different high-volatility securities shown above:


As you can see, the volatility in the resultant index ends up being minimal.  The 7 year sinusoidal cyclicality, however, is preserved in fullness.  The strategy therefore performs fantastically in the index, even as it performs terribly in the individual constituents out of which the index has been built.  QED.

To summarize:

  • The moving average strategy’s performance is driven by the balance between cyclicality and volatility. When cyclicality is the greater presence, the strategy captures large, profitable downturns and outperforms.  When volatility is the greater presence, the strategy gets caught in recurrent whipsaws and underperforms.
  • The strategy underperforms in individual securities because the volatility in those securities is too high relative to the cyclicality.  The securities “chop” around too much, creating too many whipsaws.  When they do experience the kinds of sustained downturns that the strategy might profit from, the downturns end up not being deep enough to offset the plenitude of whipsaws.
  • When different individual securities are combined into an index, the movements that the securities have in common carry through to the index.  The movements that they do not have in common get averaged down and diluted away in the index.
  • There’s a cyclicality common to all risky securities–the cyclicality of the business cycle, which tends to inflate and depress valuations of the entire market in unison.  When an aggregate index is built out of individual securities, that cyclicality carries through into the final index result.  But, crucially, the random price deviations that constitute short-term volatility, deviations that the securities do not have in common with each other, do not carry through.  Rather, they get averaged down and diluted away in the index.
  • As a result, the balance between cyclicality and volatility in the index, in contrast to the individual securities, proves to be favorable to the strategy, allowing it to outperform.

An Improved Trend-Following Strategy: Growth-Trend Timing

In what follows, I’m going to introduce the modification to the strategy that most readers are probably here to see: Growth-Trend Timing (GTT).  As a warning, GTT is hardly an investment panacea.  Like any other strategy, it carries a set of risks and vulnerabilities. That said, its total risk-reward proposition, in my view, is significantly more attractive than the risk-reward propositions of the other systematic long-horizon strategies that have been proposed.

What I find particularly compelling about the strategy is that it makes sense.  It tries to systematically do what any good human trend-following trader has to do to time the market well–distinguish between those downward price breaks that are real, that are going to be met with sustained follow-throughs to lower prices, and those downward price breaks that are fake-outs, that are only going to produce whipsaw losses.  Granted, there may be more efficient ways for us to make that distinction than to use GTT–maybe the best way is for us to use our market intuitions directly, in real-time, abandoning systematic approaches altogether.  But GTT, in my opinion, represents a great place for an investor to start.

Recall the equation for the moving average strategy’s total return performance, an equation that also describes the total return performance of the momentum strategy. Looking at that equation, it’s clear that for the strategy to outperform on total return, the cumulative gains from captured downturns have to exceed the cumulative losses from whipsaws:

(8) Strategy(t) = Index(t) + Cumulative Gains from Captured Downturns(t) – Cumulative Losses from Whipsaws(t)

We said in the previous piece that we wanted market timing strategies to have a sound analytic basis. So, let’s ask: What is the sound analytic basis for believing that if we time the market each month using the 10 month moving average, as Mebane Faber recommends, or using a monthly 12 month momentum signal, as others have recommended, that our gains from captured downturns will exceed, or at least minimally keep up with, our losses from whipsaws?  Is there any a priori basis at all for that belief?

Simply pointing to the empirical result itself is not enough, for without an explanation for the result, an ability to understand why it is achieved, we have no way to estimate the probability that it will persist into future data.  We have to just cross our fingers and hope the patterns of the past persist into the future.  But that might not happen, especially if the pattern has been extracted from a very small sample size (which, in this case, it has been).

The most common explanation given for why the strategy outperforms involves an appeal to behavior.  Here’s my preferred way of framing that explanation.  For maximal clarity, consider a simplified flow-based model of security pricing:


The model delineates two categories of market participants: market takers and market makers.  The market takers want to change the asset compositions of their portfolios, so they place market orders to buy and sell securities.  The market makers do not want to change the asset composition of their portfolios, but instead simply want to collect the spread between the prices that the market takers buy and sell at, remaining positionally neutral themselves.  So the market makers lay out simultaneous bids and asks at price midpoints that they think will lead to matching buying and selling order flows.  They continually change these price midpoints in order to ensure that the incoming order flows match.  They are financially incented to execute this task correctly, for if they execute it incorrectly–if they put their price midpoints in the wrong places–then they will buy more than they sell, or sell more than they buy, building up inventory or debt that they themselves will have to liquidate at a loss.

Ultimately, that’s where “price” in the modern market comes from.  Market makers, usually computers, try to find the general price midpoints that will lead to matching market taker order flow.  They lay out bids and asks at those midpoints, and collect the spread between them as that flow comes in.  Price changes, then, are the result of order flow imbalances.  If order flow proves to be imbalanced at some price, market makers will have to change the price–the midpoints of the bids and asks that they are laying out–to get the flows to balance again.  If they don’t do that, they will build up inventory or debt that they will have to liquidate into the market later, losing money themselves.

Now, suppose that there’s a negative fundamental trend taking place somewhere in the market.  Maybe a “recession” is beginning–an economic downturn that will generate falling revenues and earnings, tightening credit conditions, rising strains in the financial system, deteriorating social mood and investor sentiment, and ultimately, lower prices and valuations.  Surely, if investors become aware of that development, they will sell in advance of it.

But, crucially, they are not going to all become aware of it at the same time.  At first, only a small number of investors will see it.  So they will stop buying and start selling.  Their actions will redirect a portion of the previously existing buying flow–say, 0.1% of it–into selling flow.  A flow imbalance–consisting of too much selling flow, not enough buying flow–will then result.  In response to the imbalance, the market makers will lower the price.  In response, other people who aren’t yet aware of what’s coming, and who are psychologically anchored to higher prices, will see “value” in the lower prices, and will increase their buying and reduce their selling, restoring the balance. This stabilizing feedback response will attenuate the drop and keep prices from falling too quickly or too far.  Instead of plunging, prices will simply “act heavy.”

As the deteriorating fundamental picture increasingly reveals itself, caution will increase, and more order flow will shift from buying to selling.  In response to the shift, market makers will reduce the price further.  But there will still be those that don’t see or appreciate what’s happening.  They will step in and buy the perceived bargains in front of them.

Of course, that can only go on for so long.  As the price continues to behave poorly, more and more previously constructive investors will become conditioned–in the Skinnerian sense–into a worried, risk-averse mindset.  More and more buyers will feel compelled to tap out and sell.  To keep the order flow balanced, market makers will have to continue to reduce their price midpoints.  Eventually, a positive feedback loop of fear will take hold, and the market’s fragile equilibrium will snap.  The market will take a strong plunge lower, to a level that offers genuine bargains that fit with the deteriorating fundamental outlook.

Crucially, before the eventual plunge takes place, three factors will prevent the system from unraveling: (1) uncertainty, kept in place by the limited in-trickle of information, (2) the reward conditioning of the market gains that investors will have experienced in the cycle up to that point, which will encourage an initially constructive mentality towards the downward price trend (“it’s a buying opportunity!”), and (3) the stabilizing feedback response of anchoring, which causes people to see “opportunity” rather than “danger” in lower prices, and to therefore provide buying support to markets that would otherwise fall more quickly.

Before the plunge occurs, then, a slower, more gentle, more benign negative price trend will take hold.  The market’s momentum, captured in its trailing annual return, will go negative.  Prices will fall below key long-term moving averages.  Internals will deteriorate. Crucially, market timing strategies that pivot off of these signals will then successfully get out, avoiding the bulk of the coming downturn.

On the eventual recovery, the process will happen in reverse.  Crucially, the full upturn will not take place instantaneously.  Rather, it will show early clues and signs.  The market’s momentum will go positive.  Prices will rise above key long-term moving averages. Internals will improve.  Market timing strategies that pivot off of these signals will then get back in, usually at meaningfully lower prices than they got out at.  The result will be outperformance over the index.

As we saw earlier, the only condition that needs to be met for the strategies to successfully get out and get back in at lower prices is that their periods–i.e., the lengths of their momentum horizons and moving averages–be substantially shorter than the period of the oscillation being timed.  If the oscillation being timed is the business cycle, then the period simply has to be less than a few years, which is how long it takes for expansions to morph into recessions and back into expansions.  A 12 month momentum period, and a 10 month moving average period, obviously qualify–they are significantly less than the typical average 7 year (84 month) business cycle length.
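The condition can be verified with a stripped-down simulation.  In the sketch below (Python; a noiseless price cycle chosen purely for illustration), a 10 month moving average strategy times an 84-month sinusoidal price wave.  Because the averaging period is much shorter than the cycle’s period, the strategy exits a few months after each peak and re-enters a few months after each trough, beating buy and hold:

```python
import numpy as np

months = np.arange(840)
# A noiseless 84-month price cycle: pure cyclicality, zero volatility.
price = np.exp(0.5 * np.sin(2 * np.pi * months / 84))

window = 10
# ma[j] is the average of price[j : j + window], a trailing 10-month MA.
ma = np.convolve(price, np.ones(window) / window, mode="valid")

strategy = 1.0
for t in range(window, months.size):
    # Long this month if last month's close sat above its 10-month MA.
    if price[t - 1] > ma[t - window]:
        strategy *= price[t] / price[t - 1]

buy_and_hold = price[-1] / price[window - 1]
print(strategy, buy_and_hold)
```

Run on this idealized wave, the strategy compounds the up legs while sitting out most of the down legs, ending far ahead of buy and hold.  If the window were instead set close to or longer than the 84-month period, the crossovers would come too late to capture anything.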

It would seem, then, that we’ve identified the reason why the momentum and moving average strategies outperform.  The recommended periods–12 months for the momentum strategy, and 10 months for the moving average strategy–have been set to values that are short enough to allow the strategies to successfully capture the downturns associated with the business cycle, given the amount of time that it takes for those downturns to occur and reverse into recoveries, an amount of time on the order of years rather than months.

But wait.  We haven’t said anything about whipsaws.  Per the equation, it’s not enough for the strategy to simply extract gains from downturns.  The gains have to exceed the losses that will inevitably be incurred in whipsaws.  What reason do we have to believe that the profits that the strategy will capture from downturns using a 12 month momentum period, or a 10 month moving average period, will consistently exceed the cumulative effect of the many whipsaws that will be suffered on those short periods?  That is where the explanation is lacking.

Every market participant knows what the business cycle is, and is aware of the types of large downward price movements that it can produce.  Every active market participant is eager to “time” it, where possible.  That may not have been the case in the year 1918, but it’s definitely the case in the year 2016.  Market participants in the year 2016 are extremely sensitive to the market’s history and well-defined cyclical tendencies. That sensitivity, if it leads to volatile short-term trading around the feared prospect of turns in the cycle, has the potential to create substantial whipsaw losses for the strategy.

Frequently, market participants will sense impending downturns and sell, pushing price into a negative trend, when there’s no fundamental process to carry the downturn through. The negative price trend will then reverse itself, inflicting gap losses on the strategy. That’s exactly what has happened in the four documented whipsaws that the moving average strategy has suffered in U.S. equities since the 2009 low:


In the late summer of 2010, as investors collectively became scared of a “double-dip” scenario, the price trend went negative.  But the move petered out and reversed itself, because it wasn’t supported by the fundamental picture, which turned out to be much more optimistic.  The same thing happened again in the fall of 2011, when additional fears related to the potential impact and calamity of a dissolution and financial meltdown in the Eurozone took hold.  Those fears turned out to be unfounded, and the market quickly recovered its losses, inflicting whipsaws on whoever tried to exit.  The final whipsaw, of course, occurred just a few months ago, in the August market scare.  In total, the four whipsaws have imposed a cumulative 25% loss on the strategy relative to buy and hold–a very big hit.  Will profitable trades in the coming downturn–whenever it comes–make up for that loss?  We don’t know.

As I write this piece, the market’s price trend is well below key momentum and moving average boundaries.  But it wasn’t below those boundaries at the end of last month, so the strategy hasn’t yet exited.  If things stay where they are, the strategy will sell at the end of the current month at very low prices relative to where the moving average crossover actually happened, taking on a large gap loss.  The likelihood of another whipsaw for the strategy is therefore high, particularly if current fears about the energy sector and China move to the backburner.

What remains lacking, then, is a clear analytic explanation for why, on a going-forward basis, we should expect captured downturns to exceed whipsaws in the strategy.  In the last several years, the strategy was lucky to get a big downturn that it was able to exploit–the 2008 Great Recession.  Before that, it was gifted with big drops associated with the recessions of 1974, 1970, 1937, and 1929.  Beyond those drops–which collectively amount to a sample size of only 5–every other transaction has either been a negligible gain, or a whipsaw loss, of which there have been a huge number (see the red boxes below):


My own gut sense is that the kind of deep, profitable downturns that we saw in 2008, 1974, 1937 and 1929 will not happen again for a long time–on the order of decades.  The consequence of secular stagnation–the reality of which has become an almost indisputable economic fact–is that you get weaker expansions, and also weaker downturns–weaker cyclicality in general, which is exactly what we’ve seen in the current cycle.  To a trend-following strategy, that’s the equivalent of poison.  It attenuates the sources of gains–large downturns that get captured for profit–without attenuating the sources of losses: choppy volatility that produces whipsaws.

We therefore need to find a way to improve the strategy.  The best place to start is to examine the above table and take note of the clear fact that the strategy outperforms by successfully timing the business cycle–recessions.  That’s its primary strategy–recession-timing.  When the strategy switches inside of recessions, it tends to be profitable.  When it switches outside of recessions, it tends to be unprofitable.

The following two charts make the point clearer.  The first chart shows the strategy’s cumulative performance relative to buy and hold inside recession; the second, outside recession:


Inside recession, the strategy produces a cumulative return equal to 3.5 times the return of buy and hold. Outside recession, the strategy produces a cumulative return equal to 0.4 times the return of buy and hold.  That’s a cumulative performance difference of almost 10X.

A natural way to improve the strategy, then, is to try to teach it to differentiate between situations where the fundamental backdrop makes recession likely, and situations where the fundamental backdrop makes recession unlikely.   If recession is likely, then the negative price trend is likely to be met with sustained follow through, resulting in profitable captured downturns.  But if recession is unlikely, as it was in the summer of 2010, the fall of 2011, the fall of 2015, and possibly now, then the negative price trend is likely to peter out and reverse itself, inflicting whipsaw losses on the strategy.  If the strategy can distinguish between the two, then it can turn itself off in the latter case, where recession is unlikely, so that it avoids the whipsaw losses.

That’s exactly what Growth-Trend Timing does.   It takes various combinations of high quality monthly coincident recession signals, and directs the moving average strategy to turn itself off during periods when those signals are unanimously disconfirming recession, i.e., periods where they are all confirming a positive fundamental economic backdrop.

The available monthly signals are:

  • Real Retail Sales Growth (yoy)
  • Industrial Production Growth (yoy)
  • Real S&P 500 EPS Growth (yoy), modeled on a total return basis.
  • Employment Growth (yoy)
  • Real Personal Income Growth (yoy)
  • Housing Start Growth (yoy)

The precise timing criterion for GTT is as follows.  Take a reliable monthly growth signal, or better, a collection of reliable monthly growth signals that overlap well to describe the total state of the economy:

  • If, at the close of the month, the growth signals for the prior month are unanimously positive, then go long or stay long for the next month, and ignore the next step.
  • If, at the close of the month, the growth signals for the prior month are not unanimously positive, then if price is above the 10 month moving average, then go long or stay long for the next month.  If price is below the 10 month moving average, sell or stay out for the next month.
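In code, the criterion reduces to a few lines.  The sketch below is a minimal Python rendering of the rule as stated above–the function name and inputs are illustrative, not from any actual implementation:

```python
def gtt_position(prior_month_growth_signals, price, ma_10m):
    """Growth-Trend Timing decision for the coming month.

    prior_month_growth_signals: the growth readings (e.g. real retail
    sales growth, industrial production growth) for the *prior* month,
    the latest available at the monthly close.
    price: this month's closing price.
    ma_10m: the 10-month moving average of price.
    """
    if all(g > 0 for g in prior_month_growth_signals):
        return "long"   # timing function off: stay long regardless of trend
    if price > ma_10m:
        return "long"   # timing on, but the trend is still positive
    return "cash"       # timing on and the trend is negative: step out
```

For example, `gtt_position([0.02, 0.01], 1900, 1950)` returns `"long"`–unanimously positive growth overrides the negative trend–while `gtt_position([-0.01, 0.02], 1900, 1950)` returns `"cash"`.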

Importantly, in backtesting, the growth signals need to coincide with the actual time in which the data used to produce them becomes available.  When monthly economic numbers are published, they’re usually published for the prior month.  So, in any backtest, the strategy needs to trade off of the previous month’s economic numbers, not the current month’s numbers, which are unavailable.
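A minimal sketch of that one-month shift, using hypothetical numbers in plain Python–the signal usable at the close of month t is the published reading for month t − 1:

```python
# Hypothetical monthly growth readings, oldest first, as published.
growth = [0.02, 0.01, -0.01, -0.02, 0.01]

# At each month-end the strategy sees only the prior month's reading,
# so shift the series forward by one month before aligning it with returns.
usable = [None] + growth[:-1]
print(usable)  # [None, 0.02, 0.01, -0.01, -0.02]
```

Skipping this shift lets the backtest trade on numbers that had not yet been published, flattering the results with look-ahead bias.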

Empirically, for the U.S. economy, the strongest historical recession indicator is Real Retail Sales Growth.  Since the Census Bureau began to collect and publish it as a series in the late 1940s, it has expeditiously nailed every recession that has occurred, giving only a few false positives.  Notice how the blue line consistently crosses below zero at the left sides of the gray columns, the beginnings of the recessions. (Link: FRED)


Real Retail Sales Growth is a truly fantastic indicator, a direct line into the health of the U.S. consumer, the engine of the U.S. economy.  We’re therefore going to use it as a fundamental, preferred signal for GTT.

The following chart shows GTT’s performance on the monthly S&P 500 using real retail sales growth as a single growth signal from January 1947 to November 2015:


The purple bars at the bottom show periods where real retail sales growth is declaring “no recession.”  In those periods, the strategy turns itself off.  It stops timing altogether, and simply stays long, to avoid unnecessary whipsaws.  In the places where there are no purple bars, real retail sales growth is negative, suggesting that recession is likely. The strategy therefore puts its timing protection back on, switching into and out of the market based on the usual moving average criterion.  Notice that the purple bars overlap quite well with the grey columns, the recessions.  The close overlap confirms the power of real retail sales growth as a coincident recession indicator.

As you can see in the chart, GTT (blue line) outperforms everything: buy and hold (gray line), the X/Y portfolio (black line), and the 10 month moving average strategy (abbreviated “MMA”, and shown in the green line).  The dotted red line shows the cumulative outperformance of GTT over MMA.  As expected, GTT continually ratchets higher over the period.  The driver of its consistent uptrend is its avoidance of the useless whipsaws that MMA repeatedly entangles itself in.

Another accurate recession indicator is Industrial Production Growth.  Its data extends much farther back in time–specifically, to the year 1919.  With the exception of a single costly omission in the 1974 recession, it’s done a very good job of accurately calling out recessions as they’ve happened. (Link: FRED)


The following chart shows GTT’s performance on the monthly S&P 500 using industrial production growth as a single growth signal from February 1928 to November 2015:


Again, GTT strongly outperforms both buy and hold and MMA.  However, the outperformance isn’t as strong as it was when real retail sales growth was used, primarily because the industrial production signal misses the 1974 recession, and also because the strategy is late to exit in the 1937 recession.

Real retail sales growth and industrial production growth represent two diverse, independently reliable indicators of the health of the two fundamental segments of the overall economy: consumption and production.  The best result comes when they are put together, in combination.  The problem, of course, is that retail sales data isn’t available prior to 1947, so we can’t test the two signals together back to the beginning of each of their time series.  Fortunately, there’s a fix to that problem.  To get data on real retail sales before 1947, we can use real department store sales and real shoe sales as a proxy. Both were published by the government on a monthly basis back to the late 1910s. (Link: FRED)


Using that proxy, and combining the two signals together, we observe the following performance for GTT back to the inception of the data.  Note that a recessionary indication from either metric turns the strategy’s timing function on.  Otherwise, the timing function is off, and the strategy stays long:


The above construction of GTT–using the dual signals of real retail sales growth and industrial production growth–is the preferred “recession-timing” construction shown in the beginning of the piece.  As you can see, the strategy on that construction consistently outperforms everything, by a very large margin.  Its only weakness is its failure to expeditiously exit prior to the 1937 recession, a recession that was almost impossible to predict using data.

The following table shows GTT’s entries and exits:


Relative to MMA, the win rate improves from roughly 24% to roughly 39%.  That improvement drives the bulk of the improvement in the strategy’s overall performance.  As intended, the strategy successfully captures all of the downturns that MMA captures, but without all of the whipsaws.

The following chart shows the timing power of the retail sales and industrial production growth signals taken individually, outside of a larger trend-following framework.  Real retail sales growth timing is shown in red, and industrial production growth timing is shown in yellow.  When growth is positive, the strategies go long, when negative, they go to cash:


Evidently, the individual timing performance of the growth signals is quite weak.  The main reason for the weakness is that the signals are overly sensitive to recession risk, particularly on the back end.  They stay negative after the recession is over, and are therefore late to re-enter the market on upturns, at a significant cost to performance. Their lateness doesn’t hurt the result for GTT, however, because they are being used as mere overlays for a more price-cognizant trend-following approach.  GTT respects price and re-enters the market whenever the trend goes positive, no matter what the growth signals happen to be saying.

An additional reason for the weak timing performance of the growth signals taken in isolation is that they have noise in them–they sometimes go falsely negative outside of recession.  When the signals are used to time the market in isolation, outside of a trend-following approach, the noise leads to whipsaws.  In GTT, however, the noise is modulated by the trend-following criterion–a growth signal might go negative, but nothing will happen unless the market also happens to be in a downtrend.  What are the odds that a negative growth signal and a price downtrend will both occur by coincidence, when everything else in the economy is just fine?  Very low, which is the reason that GTT is able to be so efficient in its timing.
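To put rough numbers on that intuition–the rates below are invented for illustration, not estimated from the data–suppose a growth signal went falsely negative 10% of the time and a false price downtrend occurred 20% of the time.  If the two were independent outside of recessions, the joint false-alarm rate would be only 2%:

```python
p_false_growth = 0.10   # hypothetical false-negative rate of the growth signal
p_false_trend = 0.20    # hypothetical rate of false price downtrends

# Under independence, both must misfire at once to trigger a whipsaw.
p_joint = p_false_growth * p_false_trend
print(round(p_joint, 4))  # 0.02
```

The conjunction requirement is doing the work: each filter is individually noisy, but their coincidental overlap outside of genuine recessions is rare.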

A third coincident recession indicator is corporate earnings growth.  The best way to model that indicator is to use real S&P 500 Total Return EPS, which corrects for both inflation and for changes in dividend payout ratios.  The following chart shows GTT’s performance on the single indicator of Total Return EPS growth from February 1928 to November 2015:


The strategy again outperforms everything, but not by as large of a margin.  It underperforms other iterations of the strategy in that it misses both the 1937 recession and the 1974 recession.

Another recession indicator worth examining is employment growth.  Economists normally categorize employment growth as a lagging indicator, but lagging indicators can work reasonably well for GTT.  To improve the recessionary accuracy of the indicator, we adjust it for the size of the labor force. (Link: FRED)


The following chart shows GTT’s performance using employment growth as a single growth indicator from February of 1949 to November of 2015:


Once again, the strategy outperforms everything.  The red line, which shows GTT’s cumulative outperformance over MMA, consistently marches upward over the period.

The final two coincident recession indicators worth looking at are housing start growth and real personal income growth.  Housing start growth is a leading indicator, and is somewhat erratic, so we combine it with the more mellow metric of real personal income growth. We also adjust it for the size of the labor force. The recessionary breakpoints are 3% for real personal income growth, and -10% for housing start growth. (Link: FRED)


The following chart shows GTT’s performance using the two metrics in a combined growth signal from January 1960 to November 2015.  If either is negative, the strategy turns its timing function on. Otherwise, it stays invested:


Once again, the strategy outperforms everything.

Now, the reason that many investors prefer to use a monthly checking period when executing trend-following strategies is that daily checking periods produce too many unnecessary trades and therefore too many gap and bid-ask spread losses.  But Growth-Trend Timing dramatically reduces the number of unnecessary trades, filtering them out with the growth signal.  It can therefore afford to check prices and transact on a daily basis.

The following chart shows GTT’s performance from February 1st, 1928 to July 31st, 2015 using a 200 day moving average timing criterion alongside our preferred GTT signal combination of real retail sales growth and industrial production growth:



Once again, the strategy does very well, consistently outperforming everything.  The strategy’s outperformance over the simple 200 day moving average strategy is 120 bps per year–not as large as in the monthly version, but only because the moving average strategy performs more poorly in the monthly version.  My suspicion, which I cannot prove, is that the simple 200 day moving average strategy is somehow benefiting from residuals associated with the daily momentum phenomenon evaluated in a prior piece. That phenomenon began to disappear from the market in the 1970s, which, not coincidentally, is when the 200 day moving average strategy began to consistently underperform.

What’s especially important, in all of these charts, is the way in which the strategy has shown recent strength.  MMA and other momentum-focused strategies have tended to deteriorate in efficacy over time, especially on a daily horizon.  GTT, in contrast, has not weakened at all–it’s only gotten stronger.  That’s crucial, since a strategy that works in recent data is more likely to be pivoting off of causalities in the system that are still there to be exploited.

In looking at these charts, the reader may get the impression that it’s easy to improve a trend-following strategy by adding appendages after the fact.  But it’s not so easy.  To illustrate, here’s an appendage that we might expect to work, but that clearly doesn’t work: Shiller CAPE valuation.  The following chart shows the performance of Value-Trend Timing (VTT) using the Shiller CAPE from January of 1947 to July of 2015.  Growth-Trend Timing using retail sales and industrial production is shown for comparison:


VTT functions in the exact same manner as GTT, except that instead of using a growth signal, it uses a valuation signal.  If, at a given point in time, the market’s valuation as measured by the Shiller CAPE is “cheap”, i.e., cheaper than the historical average valuation up to that point, VTT locks in the bargain and goes long.  If the market’s valuation is expensive, i.e., more expensive than the historical average valuation up to that point, then VTT times the market using the 10 month moving average strategy.
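For clarity, VTT’s criterion can be sketched the same way as a decision rule, swapping the growth test for a valuation test–again, the function name and inputs are illustrative, not from any actual implementation:

```python
def vtt_position(cape, avg_cape_to_date, price, ma_10m):
    """Value-Trend Timing: identical in structure to GTT, but keyed
    off valuation rather than growth."""
    if cape < avg_cape_to_date:
        return "long"   # market "cheap" vs. its own history: lock in the bargain
    if price > ma_10m:
        return "long"   # expensive, but the trend is still positive
    return "cash"       # expensive and trending down: step out
```

For example, `vtt_position(12.0, 16.0, 1900, 1950)` returns `"long"` (cheapness overrides the negative trend), while `vtt_position(25.0, 16.0, 1900, 1950)` returns `"cash"`.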

As you can see, VTT adds literally nothing to the performance of MMA.  That’s because the market’s valuation relative to its historical average does not represent a reliable timing signal.  The economy’s likelihood of being in recession, in contrast, does represent such a signal.  If you want to successfully time the market, what you need to be is a strong macroeconomist, not a valuation expert.

Readers probably want to know what GTT is saying about the market right now.  The answer is predictably mixed.  Real retail sales growth, employment growth, real personal income growth, and housing start growth are all healthily positive, reflecting strength in the domestic U.S. household sector.  If you choose to build GTT on those signals, then you will be long right now, even though the market’s current price trend is negative.  Of course, if weakness in the energy sector, China, and the global economy more generally wins out in the current tug of war with domestic U.S. strength, then the strategy, in failing to sell here, or in failing to have sold at higher levels, is going to take deeper losses.

At the same time, real Total Return EPS growth, industrial production growth, and production proxies that might be used in place of industrial production growth (e.g., ISM readings), are flashing warning signals, consistent with known stresses in the energy sector and in the global economy more generally (especially emerging markets), which those signals are more closely tied to.  If you choose to build GTT using those signals individually or in combination with others, as I chose to do at the beginning of the piece, then given the market’s current negative trend, you will be in treasury bills right now–especially if you are using the daily version of the strategy, which is advisable, since the daily version shrinks the strategy’s gap losses at essentially no cost.  Bear in mind that if strength in the domestic economy wins out in the current tug of war, then the strategy on this construction is likely to get whipsawed.

As with any market timing strategy, GTT carries risks.  There’s a sense in which it’s been data-mined–shaped to conform to the observed historical patterns that have made recession timing profitable.  Of course, the construction of any timing strategy will involve an element of data-mining, if the strategy is going to get the calls right.  The designer will have to look back at the past and see how the system worked, modeling the strategy to exploit the recurrent patterns that it produced.  But still, those patterns may have been coincidental, and therefore the success of the approach may not persist reliably into the future.

Equally importantly, in the past, investors were not as attuned to the predictive power of the economic readings that the strategy uses.  The market’s increased general awareness of the predictive power of those readings may hinder the performance of the strategy going forward.  To give an example of how the hindrance might happen, volatility could increase around the public release of the readings, as everyone focuses on them.  The volatility would then inflict whipsaw losses on the strategy as it tries to trade on the readings.

What GTT has in its favor, in contrast to other popular trend-following approaches, is that it is highly efficient in its trading and extremely long-biased in its market disposition (spending roughly 87% of the time invested).  Even if, by chance, the future turns out to be substantially different from the past, such that the strategy gets certain timing calls wrong, investors in the strategy are unlikely to spend large amounts of time camped outside of the market.  That’s the biggest risk associated with market timing–that the investor will never find a way to get back in, and will therefore miss out on a lifetime’s worth of market earnings.  By timing the market on a set of signals that flash negative warnings only very rarely, GTT mitigates that risk.

Most importantly, GTT makes sense analytically.  It systematically does what any human trend-following market timer would have to do in order to be successful–distinguish between negative price trends that will give way to large downturns that support profitable exits and reentries, and negative price trends that will prove to be nothing more than short-term noise, head-fakes that quickly reverse, inflicting whipsaw losses on whoever tries to trade them.  My reason for introducing the strategy is not so much to tout its efficacy, but to articulate that task as the primary task of trend-following, a task that every trend-follower should be laser-focused on, in the context of the current negative trend, and all future ones.


Trend Following In Financial Markets: A Comprehensive Backtest


Bill Griffeth interviews Paul Tudor Jones, Black Monday, October 19th, 1987.

“My metric for everything I look at is the 200-day moving average of closing prices.  I’ve seen too many things go to zero, stocks and commodities.  The whole trick in investing is: ‘How do I keep from losing everything?’  If you use the 200-day moving average rule, then you get out.  You play defense, and you get out.” — Paul Tudor Jones, as interviewed by Tony Robbins in Money: Master the Game.


Everyone agrees that it’s appropriate to divide the space of a portfolio between different asset classes–to put, for example, 60% of a portfolio’s space into equities, and 40% of its space into fixed income.  “Market Timing” does the same thing, except with time.  It divides the time of a portfolio between different asset classes, in an effort to take advantage of the times in which those asset classes tend to produce the highest returns.

What’s so controversial about the idea of splitting the time of a portfolio between different asset classes, as we might do with a portfolio’s space?  Why do the respected experts on investing almost unanimously discourage it?

  • The reason can’t be transaction costs.  Those costs have been whittled down to almost nothing over the years.
  • The reason can’t be negative tax consequences.  An investor can largely avoid those consequences through the use of futures contracts.  Suppose, for example, that an investor owns shares of an S&P 500 ETF as a core long-term position in a taxable account, and wants to temporarily go to cash in advance of some expected period of market turbulence.  To do that, she need not sell the shares themselves.  Instead, she can sell an S&P 500 futures contract in an amount equal to the size of the ETF position. The sale will perfectly offset her exposure to the S&P 500, bringing it down to exactly zero, without triggering a taxable capital gain.  When she wants to re-enter the market, she can simply buy back the futures contract, removing the hedge.  The only negative tax implication is that during the period in which she holds the hedge, her position will count as a section 1092 “straddle”, and any qualified dividends that she receives will be taxed as ordinary income.  But that’s a very small impact, especially if the hedged period is brief.
  • The reason can’t be poor timing.  For if markets are efficient, as opponents of market timing argue, then it shouldn’t be possible for an individual to time the market “poorly.”  As a rule, any choice of when to exit the market should be just as good as any other.  If a person were able to consistently defy that rule, then reliably beating the market would be as simple as building a strategy to do the exact opposite of what that person does.
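The futures hedge described in the second bullet can be sized mechanically.  A minimal sketch, assuming the CME E-mini S&P 500 contract ($50 per index point); the function name and the rounding convention are illustrative:

```python
# Sizing the futures hedge from the bullet above.  Assumes the CME E-mini
# S&P 500 contract ($50 per index point); the function name and the choice
# to round to the nearest whole contract are illustrative assumptions.

def emini_contracts_to_hedge(position_value, index_level, multiplier=50):
    """Number of E-mini contracts to sell to offset a long S&P 500 position."""
    notional_per_contract = index_level * multiplier
    return round(position_value / notional_per_contract)
```

With the index at 2,000, each contract carries $100,000 of notional, so a $500,000 ETF position is offset by selling five contracts; buying them back later removes the hedge without the shares ever being sold.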

In my view, the reason that market timing is so heavily discouraged is two-fold:

(1) Market timing requires big choices, and big choices create big stress, especially when so much is at stake.  Large amounts of stress usually lead to poor outcomes, not only in investing, but in everything.

(2) The most vocal practitioners of market timing tend to perform poorly as investors.

Looking at (2) specifically, why do the most vocal practitioners of market timing tend to perform poorly as investors? The answer is not that they are poor market timers per se. Rather, the answer is that they tend to always be underinvested.  By nature, they’re usually more risk-averse to begin with, which is what sends them down the path of identifying problems in the market and stepping aside.  Once they do step aside, they find it difficult to get back in, especially when the market has gone substantially against them.  It’s painful to sell something and then buy it back at a higher price, locking in a loss.  It’s even more difficult to admit that the loss was the result of one’s being wrong.  And so instead of doing that, the practitioners entrench.  They come up with reasons to stay on the course they’re on–a course that ends up producing a highly unattractive investment outcome.

To return to our space-time analogy, if an investor were to allocate 5% of the space of her portfolio to equities, and 95% to cash, her long-term performance would end up being awful.  The reason would be clear–she isn’t taking risk, and if you don’t take risk, you don’t make money.  But notice that we wouldn’t use her underperformance to discredit the concept of “diversification” itself, the idea that dividing the space of a portfolio between different asset classes might improve the quality of returns.  We wouldn’t say that people that allocate 60/40 or 80/20 are doing things wrong.  They’re fine.  The problem is not in the concept of what they’re doing, but in her specific implementation of it.

Well, the same point extends to market timing.  If a vocal practitioner of market timing ends up spending 5% of his time in equities, and 95% in cash, because he got out of the market and never managed to get back in, we shouldn’t use his predictably awful performance to discredit the concept of “market timing” itself, the idea that dividing a portfolio’s time between different asset classes might improve returns.  We shouldn’t conclude that investors that run market timing strategies that stay invested most of the time are doing things wrong.  The problem is not in the concept of what they’re doing, but in his specific implementation of it.

In my view, the practice of market timing, when done correctly, can add meaningful value to an investment process, especially in an expensive market environment like our own, where the projected future returns to a diversified buy and hold strategy are so low.  The question is, what’s the correct way to do market timing?  That’s the question that I’m going to try to tackle in the next few pieces.

In the current piece, I’m going to conduct a comprehensive backtest of three popular trend-following market timing strategies: the moving average strategy, the moving average crossover strategy, and the momentum strategy.  These are simple, binary market timing strategies that go long or go to cash at the close of each month based on the market’s position relative to trend.  They produce very similar results, so after reviewing their performances in U.S. data, I’m going to settle on the moving average strategy as a representative strategy to backtest out-of-sample.
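For readers who want to see the mechanics, here are minimal sketches of the three binary rules, applied to a list of monthly closes.  The window lengths shown are common illustrative defaults, not necessarily the exact parameters used in the backtest:

```python
# Minimal sketches of the three timing rules named above, applied to monthly
# closes (oldest first).  Each returns 1 for "long" or 0 for "cash" at the
# latest month-end.  Window lengths are illustrative defaults.

def moving_average_signal(closes, window=10):
    """Long if the latest close is above the average of the prior 10 closes."""
    ma = sum(closes[-(window + 1):-1]) / window
    return 1 if closes[-1] > ma else 0

def crossover_signal(closes, fast=10, slow=20):
    """Long if a fast moving average sits above a slow moving average."""
    fast_ma = sum(closes[-fast:]) / fast
    slow_ma = sum(closes[-slow:]) / slow
    return 1 if fast_ma > slow_ma else 0

def momentum_signal(closes, lookback=12):
    """Long if the trailing one-year (12-month) return is positive."""
    return 1 if closes[-1] > closes[-1 - lookback] else 0
```

On a steadily trending series, all three rules agree, which is why their backtested results end up so similar; they differ mainly in how quickly they react around turning points.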

The backtest will cover roughly 235 different equity, fixed income, currency, and commodity indices, and roughly 120 different individual large company stocks (e.g., Apple, Berkshire Hathaway, Exxon Mobil, Procter and Gamble, and so on).  For each backtest, I’m going to present the results in a chart and an entry-exit table, of the type shown below (10 Month Moving Average Strategy, Aggregate Stock Market Index of Sweden, January 1920 – July 2015):


(For a precise definition of each term in the chart and table, click here.)

The purpose of the entry-exit tables is to put a “magnifying glass” on the strategy, to give a close-up view of what’s happening underneath the surface, at the level of each individual trade.  In examining investment strategies at that level, we gain a deeper, more complete understanding of how they work.  In addition to being gratifying in itself, such an understanding can help us more effectively implement the concepts behind the strategies.

I’ve written code that allows me to generate the charts and tables very quickly, so if readers would like to see how the strategy performs in a specific country or security that wasn’t included in the test, I encourage them to send in the data.  All that’s needed is a total return index or a price index with dividend amounts and payment dates.

The backtest will reveal an unexpected result: that the strategy works very well on aggregate indices–e.g., the S&P 500, the FTSE, the Nikkei, etc.–but works very poorly on individual securities.  For perspective on the divergence, consider the following chart and table of the strategy’s performance (blue line) in the S&P 500 from February of 1963 to July of 2015:


As you can see, the strategy performs very well, exceeding the return of buy and hold by over 60 bps per year, with lower volatility and roughly half the maximum drawdown.  Compare that performance with the strategy’s performance in the six largest S&P 500 stocks that have trading histories back to 1963.  Ordered by market cap, they are: General Electric $GE, Walt Disney $DIS, Coca-Cola $KO, International Business Machines $IBM, Dupont $DD, and Caterpillar $CAT.



As you can see in the charts, the strategy substantially underperforms buy and hold in every stock except $GE.  The pattern is not limited to these 6 cases–it extends out to the vast majority of stocks in the S&P 500.  The strategy performs poorly in almost all of them, despite performing very well in the index.

The fact that the strategy performs poorly in individual securities is a significant problem, as it represents a failed out-of-sample test that should not occur if popular explanations for the strategy’s efficacy are correct.  The most common explanations found in the academic literature involve appeals to the behavioral phenomena of underreaction and overreaction.  Investors allegedly underreact to new information as it’s introduced, and then overreact to it after it’s been in the system for an extended period of time, creating price patterns that trend-following strategies can exploit.  But if the phenomena of underreaction and overreaction explain the strategy’s success, then why isn’t the strategy successful in individual securities?  Individual securities see plenty of underreaction and overreaction as new information about them is released and as their earnings cycles progress.

There’s a reason why the strategy fails in individual securities, and it’s quite fascinating. In the next piece, I’m going to try to explain it.  I’m also going to try to use it to build a substantially improved version of the strategy.  For now, my focus will simply be on presenting the results of the backtest, so that readers can come to their own conclusions.

Market Timing: Definitions and Restrictions

We begin with the following definitions:

Risk Asset: An asset that exhibits meaningful price volatility.  Examples include: equities, real estate, collectibles, foreign currencies expressed in one’s own currency, and long-term government bonds.  Note that I insert this last example intentionally. Long-term government bonds exhibit substantial price volatility, and are therefore risk assets, at least on the current definition.  

Safe Asset: An asset that does not exhibit meaningful price volatility.  There is only one truly safe asset: “cash” in base currency.  For a U.S. investor, that would include: paper dollars and Fed balances (base money), demand and time deposits at FDIC insured banks, and short-term treasury bills.

Market Timing Strategy: An investment strategy that seeks to add to the performance of a Risk Asset by switching between exposure to that asset and exposure to a Safe Asset.

The market timing strategies that we’re going to examine in the current study will be strictly binary.  At any given time, they will either be entirely invested in a single risk asset, or entirely invested in a single safe asset, with both specified beforehand.  There will be no degrees of exposure–only 100% or 0%.

From a testing perspective, the advantage of a binary strategy is that the ultimate sources of the strategy’s performance–the individual switching events–can be analyzed directly. When a strategy alters its exposure in degrees, such an analysis becomes substantially more difficult–every month in the series becomes a tiny “switch.”

The disadvantage of a binary strategy is that the strategy’s performance will sometimes end up hinging on the outcome of a small number of very impactful switches.  In such cases, it will be easier for the strategy to succeed on luck alone–the luck of happening to switch at just the right time, just as the big “crash” event is beginning, when the switch itself was not an expression of any kind of reliable skill at avoiding the event.

To be clear, this risk also exists in “degreed” market timing strategies, albeit to a reduced degree.  Their outperformance will sometimes result from rapid moves that they make in the early stages of market downturns, when the portfolio moves can just as easily be explained by luck as by genuine skill at avoiding downturns.

In our case, we’re going to manage the risk in two ways: (1) by conducting out of sample testing on a very large, diverse quantity of independent data sets, reducing the odds that consistent outperformance could have been the result of luck, and (2) by conducting tweak tests on the strategy–manipulations of obviously irrelevant details in the strategy, to ensure that the details are not driving the results.

In addition to being binary, the market timing strategies that we’re going to test will only switch into cash as the safe asset.  They will not switch into other proxies for safety, such as long-term government bonds or gold.  When a strategy switches from one volatile asset (such as equities) into another volatile asset that is standing in as the safe asset (e.g., long-term government bonds or gold), the dominant source of the strategy’s performance ends up being obscured: a favorable or unfavorable timing of either or both assets, or alternatively, a strong independent performance from the stand-in safe asset, could be the source.  Both assets, after all, are fluctuating in price.

An example will help illustrate the point.  Suppose that we’re examining a market timing strategy that is either all stocks or all cash.  If the strategy outperforms, we will know that it is outperforming by favorably timing the price movements of stocks and stocks alone. There’s nothing else for it to favorably time.  It cannot favorably time the price movements of cash, for example, because cash is a safe asset whose “price” exhibits no movement. Suppose, alternatively, that we’re examining a strategy that switches from stocks into bonds.  If the strategy outperforms, we will not know whether it outperformed by favorably timing stocks, by favorably timing bonds, or by favorably timing both.  Similarly, we won’t know how much of its strength was a lucky byproduct of strength in bonds as an asset class.  We therefore won’t know how much was a direct consequence of skill in the timing of stocks, which is what we’re trying to measure.

Now, to be clear, adding enhancements to a market timing strategy–degreed exposure, a higher-returning uncorrelated safe asset to switch into (e.g., long-term government bonds), leverage, and so on–can certainly improve performance.  But the time to add them is after we’ve successfully tested the core performance of a strategy, after we’ve confirmed that the strategy exhibits timing skill.  If we add them before we’ve successfully tested the core performance of the strategy, before we’ve verified that the strategy exhibits timing skill, the risk is that we’re going to introduce noise into the test that will undermine the sensitivity and specificity of the result.

Optimizing the Risk-Reward Proposition of Market Timing

The right way to think about market timing is in terms of risk and reward.  Any given market timing strategy will carry a certain risk of bad outcomes, and a certain potential for good outcomes.  We can increase the probability of seeing good outcomes, and reduce the probability of seeing bad outcomes, by seeking out strategies that manifest the following five qualities: analytic, generic, efficient, long-biased, and recently-successful.

I will explain each in turn:

Analytic:  We want market timing strategies that have a sound analytic basis, whose efficacy can be shown to follow from an analysis of known facts or reasonable assumptions about a system, an analysis that we ourselves understand.  These properties are beneficial for the following reasons:

(1) When a strategy with a sound analytic basis succeeds in testing, the success is more likely to have resulted from the capturing of real, recurrent processes in the data, as opposed to the exploitation of coincidences that the designer has stumbled upon through trial-and-error.  Strategies that succeed by exploiting coincidences will inevitably fail in real world applications, when the coincidences get shuffled around.

(2) When we understand the analytic basis for a strategy’s success, we are better able to assess the risk that the success will fail to carry over into real-world applications. That risk is simply the risk that the facts or assumptions that ground the strategy will turn out to be incorrect or incomplete.  Similarly, we are better able to assess the risk that the success will decay or deteriorate over time.  That risk is simply the risk that conditions relevant to the facts or assumptions will change in relevant ways over time.

To illustrate, suppose that I’ve discovered a short-term trading strategy that appears to work well in historical data.  Suppose further that I’ve studied the issue and am able to show why the strategy works, given certain known specific facts about the behaviors of other investors, with the help of a set of reasonable simplifying assumptions.  To determine the risk that the strategy’s success will fail to carry over into real-world applications, I need only look at the facts and assumptions and ask, what is the likelihood that they are in some way wrong, or that my analysis of their implications is somehow mistaken?  Similarly, to determine the risk that the success will decay or deteriorate over time, I need only ask, what is the likelihood that conditions relevant to the facts and assumptions might change in relevant ways?  How easy would it be for that to happen?

If I don’t understand the analytic basis for a strategy’s efficacy, I can’t do any of that. The best I can do is cross my fingers and hope that the observed past success will show up when I put real money to work in the strategy, and that it will keep showing up going forward.  If that hope doesn’t come true, if the strategy disappoints or experiences a cold spell, there won’t be any place that I can look, anywhere that I can check, to see where I might have gone wrong, or what might have changed.  My ability to stick with the strategy, and to modify it as needed in response to changes, will be significantly curtailed.

(3) An understanding of the analytic basis for a strategy’s efficacy sets boundaries on a number of other important requirements that we need to impose.  We say, for example, that a strategy should succeed in out-of-sample testing.  But some out-of-sample tests do not apply, because they do not embody the conditions that the strategy needs in order to work.  If we don’t know how the strategy works in the first place, then we have no way to know which tests those are.

To offer an example, the moving average, moving average crossover, and momentum strategies all fail miserably in out-of-sample testing in individual stocks. Should the strategies have performed well in that testing, given our understanding of how they work, how they generate outperformance?  If we don’t have an understanding of how they work, how they generate outperformance, then we obviously can’t answer the question.

Now, many would claim that it’s unreasonable to demand a complete analytic understanding of the factors behind a strategy’s efficacy.  Fair enough.  I’m simply describing what we want, not what we absolutely have to have in order to profitably implement a strategy.  If we can’t get what we want, in terms of a solid understanding of why a strategy works, then we have to settle for the next best thing, which is to take the strategy live, ideally with small amounts of money, and let the consequences dictate the rest of the story.  If the strategy works, and continues to work, and continues to work, and continues to work, and so on, then we stick with it.  When it stops working for a period of time that exceeds our threshold of patience, we abandon it.  I acknowledge that this is a perfectly legitimate empirical approach that many traders and investors have been able to use to good effect.  It’s just a difficult and sometimes costly approach to use, particularly in situations where the time horizon is extended and where it takes a long time for the “results” to come in.

The point I’m trying to make, then, is not that the successful implementation of a strategy necessarily requires a strong analytic understanding of the strategy’s mechanism, but that such an understanding is highly valuable, worth the cost of digging to find it.  We should not just cross our fingers and hope that past patterning will repeat itself.  We should dig to understand.


In the early 1890s, when the brilliant physicist Oliver Heaviside discovered his operator method for solving differential equations, the mathematics establishment dismissed it, since he couldn’t give an analytic proof for its correctness.  All he could do was put it to use in practice, and show that it worked, which was not enough.  To his critics, he famously retorted:

“I do not refuse my dinner simply because I do not understand the process of digestion.”

His point is relevant here.  The fact that we don’t have a complete analytic understanding of a strategy’s efficacy doesn’t mean that the strategy can’t be put to profitable use.  But there’s an important difference between Heaviside’s case and the case of someone who discovers a strategy that succeeds in backtesting for unknown reasons.  If you make up a brand new differential equation that meets the necessary structure, an equation that Heaviside has never used his method on, that won’t be an obstacle for him–he will be able to use the method to solve it, right in front of your eyes.  Make up another one, he’ll solve that one. And so on. Obviously, the same sort of on-demand demonstration is not possible in the context of a market timing strategy that someone has discovered to work in past data.  All that the person can do is point to backtesting in that same stale data, or some other set of data that is likely to have high correlations to it.  That doesn’t count for very much, and shouldn’t.

Generic:  We want market timing strategies that are generic.  Generic strategies are less likely to achieve false success by “overfitting” the data–i.e., “shaping” themselves to exploit coincidences in the data that are not going to reliably recur.

An example of a generic strategy would be the instruction contained in a simple time series momentum strategy: to switch into and out of the market based on the market’s trailing one year returns.  If the market’s trailing one year returns are positive, go long, if they’re negative, go to cash or go short.  Notice that one year is a generic whole number. Positive versus negative is a generic delineation between good and bad.  The choice of these generic breakpoints does not suggest after-the-fact overfitting.

An example of the opposite of a generic strategy would be the instruction to be invested in the market if some highly refined condition is met: for example, if trailing one year real GDP growth is above 2.137934%, or if the CAPE is less than 17.39, or if bullish investor sentiment is below 21%.  Why were these specific breakpoints chosen, when so many others were possible?  Is the answer that the chosen breakpoints, with their high levels of specificity, just-so-happen to substantially strengthen the strategy’s performance in the historical data that the designer is building it in?  A yes answer increases the likelihood that the performance will unravel when the strategy is taken into the real-world.

A useful test to determine whether a well-performing strategy is sufficiently generic is the “tweak” test.  If we tweak the rules of the strategy in ways that should not appreciably affect its performance, does its performance appreciably suffer?  If the answer is yes, then the strength of the performance is more likely to be specious.
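A tweak test of this kind is easy to automate.  A minimal sketch, in which the function name and the 10% relative tolerance are my own illustrative assumptions:

```python
# Automating the "tweak" test described above.  The function name and the
# 10% relative tolerance are illustrative assumptions.

def passes_tweak_test(backtest, base_params, tweaks, tolerance=0.10):
    """backtest: a function mapping a parameter dict to a performance figure
    (e.g., annualized return).  Returns True if every tweaked variant stays
    within `tolerance` (relative) of the baseline result--evidence that the
    performance is not being driven by the specific breakpoints chosen.
    """
    baseline = backtest(base_params)
    for tweak in tweaks:
        tweaked = backtest(dict(base_params, **tweak))
        if abs(tweaked - baseline) > tolerance * abs(baseline):
            return False
    return True
```

For a moving average strategy, the tweaks might shift the window from 10 months to 9 or 11; a strategy whose edge evaporates under shifts like that is probably exploiting coincidences in the data.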

Efficient:  Switching into and out of the market inflicts gap losses, slip losses, transaction costs, and, if conducted unskillfully, realized tax liabilities, each of which represents a guaranteed negative hit to performance.  However, the positive benefits of switching–the generation of outperformance through the avoidance of drawdowns–are not guaranteed. When a strategy breaks, the positive benefits go away, and the guaranteed negative hits become our downside.  They can add up very quickly, which is why we want market timing strategies that switch efficiently, only when the probabilities of success in switching are high.
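To put rough numbers on those guaranteed negative hits, here is the compounding arithmetic; the example figures (a 0.6% all-in round-trip cost, two switches per year) are illustrative:

```python
# Rough arithmetic for the guaranteed costs of switching.  The example
# figures (0.6% round-trip cost, two round trips per year) are illustrative.

def annual_switching_drag(round_trips_per_year, cost_per_round_trip):
    """Approximate annualized return drag from switching costs alone.

    cost_per_round_trip: total slip, gap, and transaction cost per round
    trip, e.g. 0.006 for 0.6%.
    """
    # Each round trip multiplies wealth by (1 - cost), so the drag compounds.
    return 1 - (1 - cost_per_round_trip) ** round_trips_per_year
```

Two round trips per year at a 0.6% all-in cost already shave roughly 1.2% off annual returns–a hurdle that the avoided drawdowns have to clear before the strategy adds any value at all.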

Long-Biased:  Over long investment horizons, risk assets–in particular, equities–have a strong track record of outperforming safe assets.  They’ve tended to dramatically overcompensate investors for the risks they’ve imposed, generating total returns that have been significantly higher than would have been necessary to make those risks worth taking.  As investors seeking to time the market, we need to respect that track record.  We need to seek out strategies that are “long-biased”, i.e., strategies that maximize their exposure to risk assets, with equities at the top of the list, and minimize their exposure to safe assets, with cash at the bottom of the list.

Psychologists tell us that, in life, “rational optimists” tend to be the most successful.  We can probably extend the point to market timing strategies.  The best strategies are those that are rationally optimistic, that default to constructive, long positions, and that are willing to shift to other positions, but only when the evidence clearly justifies it.

Recently Successful: When all else is equal, we should prefer market timing strategies that test well in recent data and that would have performed well in recent periods of history. Those are the periods whose conditions are the most likely to share commonalities with current conditions, which are the conditions that our timing strategies will have to perform in.

Some would pejoratively characterize our preference for success in recent data as a “This Time is Different” approach–allegedly the four most dangerous words in finance.  The best way to come back at this overused cliché is with the words of Josh Brown: “Not only is this time different, every time is different.”  With respect to testing, the goal is to minimize the differences.  In practice, the way to do that is to favor recent data in the testing.  Patterns that are found in recent data are more likely to have arisen out of causes that are still in the system.  Such patterns are therefore more likely to arise again.

The Moving Average Strategy

In 2006, Mebane Faber of Cambria Investments published an important white paper in which he introduced a new strategy for implementing the time-honored practice of trend-following.  His proposed strategy is captured in the following steps:

(1) For a given risk asset, at the end of each month, check the closing price of the asset.

(2) If the closing price is above the average of the 10 prior monthly closes, then go long the asset and stay long through the next month.

(3) If the price is below the average of the 10 prior monthly closes, then go to cash, or to some preferred proxy for a safe asset, and stay there through the next month.

Notably, this simple strategy, if implemented when Faber proposed it, would have gone on to protect investors from a 50% crash that began a year later.  After protecting investors from that crash, the strategy would have placed investors back into long positions in the summer of 2009, just in time to capture the majority of the rebound.  It’s hard to think of many human market timers that managed to perform better, playing both sides of the fence in the way that the strategy was able to do.  It deserves respect.

To make the strategy cleaner, I would offer the following modification: that the strategy switch based on total return rather than price. When the strategy switches based on total return, it puts all security types on an equal footing: those whose prices naturally move up over time due to the retention of income (e.g., growth equities), and those that do not retain income and whose prices therefore cannot sustainably move upwards (e.g., high-yield bonds).

Replacing price with total return, we arrive at the following strategy:

(1) For a given risk asset, at the end of each month, check the closing level of the asset’s total return index.  (Note: you can quickly derive a total return index from a price index by subtracting, from each price in the index, the cumulative dividends that were paid after the date of that price.)

(2) If the closing level of the total return index is above the average of the 10 prior monthly closing levels, then go long the asset and stay long through the next month.

(3) If the closing level of the total return index is below the average of the 10 prior monthly closing levels, then go to cash, or to some preferred proxy for a safe asset, and stay there through the next month.
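The total return index derivation mentioned in step (1) can be sketched as follows.  This implements the quick approximation described above literally; note that it ignores the compounding you’d get from actually reinvesting the dividends, and the function name is mine:

```python
def tr_index_from_prices(prices, dividends):
    """Approximate a total return index from a price index by subtracting,
    from each price, the cumulative dividends paid after that date.

    prices[i] is the month-end close for month i; dividends[i] is treated
    as paid during month i, before that month's close."""
    tr = [0.0] * len(prices)
    future_divs = 0.0
    for i in range(len(prices) - 1, -1, -1):  # walk backwards through time
        tr[i] = prices[i] - future_divs
        future_divs += dividends[i]
    return tr
```

By construction the final value matches the final price, and earlier values are shifted down, which steepens the index’s upward trend in rough proportion to the income paid out.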

We will call this strategy MMA, which stands for Monthly Moving Average strategy. The following chart shows the performance of MMA in the S&P 500 from February of 1928 to November of 2015.  Note that we impose a 0.6% slip loss on each round-trip transaction, which was the average bid-ask spread for large company stocks in the 1928 – 2015 period:


(For a precise definition of each term in the chart, click here.)

The blue line in the chart is the total return of MMA.  The gray line is the total return of a strategy that buys and holds the risk asset, abbreviated RISK.  In this case, RISK is the S&P 500.  The black line on top of the gray line, which is difficult to see in the current chart, but which will be easier to see in future charts, is the moving average line.  The yellow line is the total return of a strategy that buys and holds the safe asset, abbreviated SAFE.  In this case, SAFE is the three month treasury bill, rolled over on a monthly basis. The purple line is the opposite of MMA–a strategy that is out when MMA is in, and in when MMA is out.  It’s abbreviated ANTI.  The gray columns are U.S. recession dates.

The dotted green line shows the timing strategy’s cumulative outperformance over the risk asset, defined as the ratio of the trailing total return of the timing strategy to the trailing total return of a strategy that buys and holds the risk asset.  It takes its measurement off of the right y-axis, with 1.0 representing equal performance.  When the line is ratcheting up to higher numbers over time, the strategy is performing well.  When the line is decaying down to lower numbers over time, the strategy is performing poorly.

We can infer the strategy’s outperformance between any two points in time by examining what happens to the green line.  If the green line ends up at a higher place, then the strategy outperformed.  If it ends up at a lower place, then the strategy underperformed.  As you can see, the strategy dramatically outperformed from the late 1920s to the trough of the Great Depression (the huge spike at the beginning of the chart).  It then underperformed from the 1930s all the way through to the late 1960s.  From that point to now, its performance has been roughly equal, with large periods of outperformance during market crashes offset by periods of underperformance during the subsequent rebounds, and by a long swath of underperformance during the 1990s.

Now, it’s not entirely fair to be evaluating the timing strategy’s performance against the performance of the risk asset.  The timing strategy spends a significant portion of its time invested in the safe asset, which has a lower return, and a lower risk, than the risk asset. We should therefore expect the timing strategy to produce a lower return, with a lower risk, even when the timing strategy is improving the overall performance.

The appropriate way to measure the performance of the timing strategy is through the use of what I call the “X/Y portfolio”, represented by the red line in the chart.  The X/Y portfolio is a mixed portfolio with an allocation to the risk asset and the safe asset that matches the timing strategy’s cumulative ex-post exposure to each asset.  In the present case, the timing strategy spends roughly 72% of its time in the risk asset, and roughly 28% of its time in the safe asset.  The corresponding X/Y portfolio is then a 72/28 risk/safe portfolio, a portfolio continually rebalanced to hold 72% of its assets in the S&P 500, and 28% of its assets in treasury bills.
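A minimal sketch of the X/Y benchmark, assuming simple monthly rebalancing back to fixed weights (function names are mine):

```python
def xy_returns(risk_returns, safe_returns, x=0.72):
    """Monthly returns of a portfolio rebalanced every month to hold a
    fraction x in the risk asset and 1 - x in the safe asset."""
    return [x * r + (1.0 - x) * s for r, s in zip(risk_returns, safe_returns)]

def total_return(monthly_returns):
    """Compound a list of monthly returns into a cumulative growth factor."""
    growth = 1.0
    for r in monthly_returns:
        growth *= 1.0 + r
    return growth
```

With x = 0.72, each month contributes 72% of the risk asset’s return and 28% of the safe asset’s return, matching the timing strategy’s cumulative exposures without any timing.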

If a timing strategy were to add exactly zero value through its timing, then its performance–its return and risk–would be expected to match the performance of the corresponding X/Y portfolio.  The performances of the two strategies would be expected to match because their cumulative asset exposures would be identical–the only difference would be in the specific timing of the exposures.  If a timing strategy can consistently produce a better return than the corresponding X/Y portfolio, with less risk, then it’s necessarily adding value through its timing.  It’s taking the same asset exposures and transforming them into “something more.”

When looking at the charts, then, the way to assess the strategy’s skill in timing is to compare the blue line and the red line.  If the blue line is substantially above the red line, then the strategy is adding positive value and is demonstrating positive skill.  If the blue line equals the red line to within a reasonable statistical error, then the strategy is adding zero value and is demonstrating no skill–the performance equivalent of randomness.  If the blue line is substantially below the red line, then the strategy is adding negative value and is demonstrating negative skill.

The following table shows the entry-exit dates associated with the previous chart:


(For a precise definition of each term in the table, click here.)

Each entry-exit pair (a sale followed by a purchase) produces a relative gain or loss on the index.  That relative gain or loss is shown in the boxes in the “Gain” column, which are shaded in green for gains and in red for losses.  You can quickly look at the table and evaluate the frequency of gains and losses by gauging the frequency of green and red.

What the table is telling us is that the strategy makes the majority of its money by avoiding large, sustained market downturns.  To be able to avoid those downturns, it has to accept a large number of small losses associated with switches that prove to be unnecessary.  Numerically, more than 75% of all of MMA’s trades turn out to be losing trades.  But there’s a significant payout asymmetry to each trade: the average winning trade produces a relative gain of 26.5% on the index, whereas the average losing trade only inflicts a relative loss of -6.0%.
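The asymmetry can be checked with a quick expectancy calculation, using the approximate figures above (a roughly 25% win rate, a +26.5% average relative gain, and a -6.0% average relative loss):

```python
win_rate = 0.25                      # roughly 1 in 4 trades wins
avg_win, avg_loss = 0.265, -0.060    # average relative gain / loss on the index

# Expected relative gain per trade: positive (~2.1%) despite the low win rate.
expectancy = win_rate * avg_win + (1.0 - win_rate) * avg_loss
```

The frequent small losses are more than paid for by the occasional large avoided drawdown.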

Comparing the Results: Two Additional Strategies

In addition to Faber’s strategy, two additional trend-following strategies worth considering are the moving average crossover strategy and the momentum strategy.  The moving average crossover strategy works in the same way as the moving average strategy, except that instead of comparing the current value of the price or total return to a moving average, it compares a short horizon moving average to a long horizon moving average. When the short horizon moving average crosses above the long horizon moving average, a “golden cross” occurs, and the strategy goes long.  When the short horizon moving average crosses below the long horizon moving average, a “death cross” occurs, and the strategy exits.  The momentum strategy also works in the same way as the moving average strategy, except that instead of comparing the current value of the price or total return to a moving average, it compares the current value to a single prior value–usually the value from 12 months ago.
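The two variants can be sketched as follows.  The function names and the default crossover periods are illustrative choices of mine; only the 12-month momentum lookback comes from the text:

```python
def crossover_signal(levels, short=3, long=10):
    """CROSS: long when the short-horizon moving average is above the
    long-horizon moving average (a 'golden cross' regime)."""
    ma_short = sum(levels[-short:]) / short
    ma_long = sum(levels[-long:]) / long
    return ma_short > ma_long

def momentum_signal(levels, lookback=12):
    """MOMO: long when the current level is above the level from
    `lookback` months ago."""
    return levels[-1] > levels[-1 - lookback]
```

Both reduce to the same question as the moving average rule–is the recent trend up or down?–they just answer it with different reference points.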

The following table shows the U.S. equity performance of Faber’s version of the moving average strategy (MMA-P), our proposed total return modification (MMA-TR), the moving average crossover strategy (CROSS), and the momentum strategy (MOMO) across a range of possible moving average and momentum periods:


If you closely examine the table, you will see that MMA-TR, MMA-P, and MOMO are essentially identical in their performances.  The performance of CROSS diverges negatively in certain places, but the comparison is somewhat artificial, given that there’s no way to put CROSS’s two moving average periods onto the same basis as the single periods of the other strategies.

Despite similar performances in U.S. equities, we favor MMA-TR over MMA-P because MMA-TR is intuitively cleaner, particularly in the fixed income space.  In that space, MMA-P diverges from the rest of the strategies, for the obvious reason that fixed income securities do not retain earnings and therefore do not show an upward trend in their prices over time.  MMA-TR is also easier to backtest than MMA-P–only one index, a total return index, is needed.  For MMA-P, we need two indices–a price index that decides the switching, and a total return index that calculates the returns.

We favor MMA-TR over MOMO for a similar reason.  It’s intuitively cleaner than MOMO, since it compares the current total return level to an average of prior levels, rather than a single prior level.  A strategy that makes comparisons to a single prior level is vulnerable to single-point anomalies in the data, whereas a strategy that makes comparison to an average of prior levels will smooth those anomalies out.

We’re therefore going to select MMA-TR to be the representative trend-following strategy that we backtest out-of-sample.  Any conclusions that we reach will extend to all of the strategies–particularly MMA-P and MOMO, since their structures and performances are nearly identical to that of MMA-TR.  We’re going to use 10 months as the moving average period, but not because 10 months is special.  We’re going to use it because it’s the period that Faber used in his original paper, and because it’s the period that just so happens to produce the best results in U.S. equities.

Changing Moving Average Periods: A Tweak Test

Settling on a 10 month moving average period gives us our first opportunity to apply the “tweak” test.  With respect to the chosen moving average period, what makes 10 months special?  Why not use a different number: say, 6, 7, 8, 9, 11, 15, 20, 200 and so on?  The number 10 is ultimately arbitrary, and therefore the success of the strategy should not depend on it.

Fortunately, when we apply a reasonable range of numbers other than 10 to the strategy, we obtain similarly positive results, in satisfaction of the “tweak” test.  The following table shows the performance of the strategy under moving average periods ranging from 1 month to 300 months, with the performance of 10 months highlighted in yellow:


Evidently, the strategy works well for all moving average periods ranging from around 5 months to around 50 months.  When periods below around 5 months are used, the strategy ends up engaging in excessive unnecessary switching.  When periods greater than around 50 months are used, the moving average ends up lagging the index by such a large amount that it’s no longer able to switch when it needs to, in response to valid signs of impending downtrends.
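The switching-frequency effect can be illustrated with a toy sweep.  The noisy uptrend below is a stand-in of my own, not market data:

```python
def switch_count(levels, n):
    """Count in/out flips of the moving average rule with an n-month period."""
    flips, prev_state = 0, None
    for t in range(n, len(levels)):
        state = levels[t] > sum(levels[t - n:t]) / n  # above avg of n prior closes
        if prev_state is not None and state != prev_state:
            flips += 1
        prev_state = state
    return flips

# A deterministic noisy uptrend: +1 per month with an alternating +/-2 wiggle.
levels = [100.0 + t + (2.0 if t % 2 == 0 else -2.0) for t in range(60)]
```

On this series, a 1-month period whipsaws on nearly every wiggle, while a 10-month period never flips at all–the same tradeoff the table above shows at the extremes.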

The following two charts illustrate the point.  In the first chart, a 1 month period is used. The strategy ends up switching in roughly 46% of all months–an egregiously high percentage that indicates significant inefficiency.  In the second chart, a 300 month period is used.  The strategy ends up completely impotent–it never switches, not even a single time.

[Charts: MMA with a 1 month moving average period, and with a 300 month moving average period]

(For a precise definition of each term in the chart, click here.)

Evaluating the Strategy: Five Desired Qualities

Earlier, we identified five qualities that we wanted to see in market timing strategies.  They were: analytic, generic, efficient, long-biased, and recently-successful.  How does MMA fare on those qualities?  Let’s examine each individually.

Here, again, are the chart and table for the strategy’s performance in U.S. equities:



(For a precise definition of each term in the chart and table, click here.)

Here are the qualities, laid out with grades:

Analytic?  Undecided.  Advocates of the strategy have offered behavioral explanations for its efficacy, but those explanations leave out the details, and will be cast into doubt by the results of the testing that we’re about to do.  Note that in the next piece, we’re going to give an extremely rigorous account of the strategy’s functionality, an account that will hopefully make all aspects of its observed performance–its successes and its failures–clear.

Generic?  Check.  We can vary the moving average period anywhere from 5 to 50 months, and the strategy retains its outperformance over buy and hold.  Coincidences associated with the number 10 are not being used as a lucky crutch.

Efficient?  Undecided.  The strategy switches in 10% of all months.  On some interpretations, that might be too much.  The strategy has a switching win rate of around 25%, indicating that the majority of the switches–75%–are unnecessary and harmful to returns.  But, as the table confirms, the winners tend to be much bigger than the losers, by enough to offset them in the final analysis.  We can’t really say, then, that the strategy is inefficient.  We leave the verdict at undecided.

Long-Biased?  Check.  The strategy spends 72% of its time in equities, and 28% of its time in cash, a healthy ratio.  The strategy is able to maintain a long-bias because the market has a persistent upward total return trend over time, a trend that causes the total return index to spend far more time above the trailing moving average than below.

On a related note, the strategy has a beneficial propensity to self-correct.  When it makes an incorrect call, the incorrectness of the call causes it to be on the wrong side of the total return trend.  It’s then forced to get back on the right side of the total return trend, reversing the mistake.  This propensity comes at a cost, but it’s beneficial in that it prevents the strategy from languishing in error for extended periods of time.  Other market timing approaches, such as approaches that try to time on valuation, do not exhibit the same built-in tendency.  When they get calls wrong–for example, when they wrongly estimate the market’s correct valuation–nothing forces them to undo those calls.  They get no feedback from the reality of their own performances.  As a consequence, they have the potential to spend inordinately long periods of time–sometimes decades or longer–stuck out of the market, earning paltry returns.

Recently Successful?  Check.  The strategy has outperformed, on net, since the 1960s.

Cowles Commission Data: Highlighting a Key Testing Risk

Using data compiled by the Cowles Commission, we can conduct our first out-of-sample test on the strategy.  The following chart shows the strategy’s performance in U.S. equities back to the early 1870s.  We find that the strategy performs extremely well, beating the X/Y portfolio by 210 bps, with a substantially lower drawdown.


The strong performance, however, is the consequence of a hidden mistake.  The Cowles Commission prices that are available for U.S. equities before 1927 are not closing prices, but averages of high and low prices for the month.  In allowing ourselves to transact at those prices, we’re effectively cheating.


The point is complicated, so let me explain.  When the index falls below the moving average, and we sell at the end of the month at the quoted Cowles monthly price, we’re essentially letting ourselves sell at the average price for that month, a price that’s no longer available, and that’s likely to be higher than the currently available price, given the downward price trend that we’re acting on.  The same holds true in reverse.  When the index moves above the average, and we buy in at the end of the month, we’re essentially letting ourselves buy in at the average price for the month, a price that’s no longer available, and that’s likely to be lower than the closing price, given the upward price trend that we’re acting on.  So, in effect, whenever we sell and buy in this way, we’re letting ourselves sell higher, and buy lower, than would have been possible in real life.

To use the Cowles Commission data and not cheat, we need to insert a 1 month lag into the timing.  If, at the end of a month, the strategy tells us to sell, we can’t let ourselves go back and sell at the average price for that month.  Instead, we have to take the entirety of the next month to sell, selling a little bit on each day.  That’s the only way, in practice, to sell at an “average” monthly price.  Taking this approach, we get a more truthful result.  The strategy still outperforms, but by an amount that is more reasonable:


(For a precise definition of each term in the chart and table, click here.)
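The one-month lag described above can be implemented by shifting each signal forward one month before applying it (a sketch, with the names and the initial-position assumption mine):

```python
def timed_returns(signals, risk_returns, safe_returns, lag=1):
    """Apply each month-end signal to the month `lag` months later.

    signals[t] is the month-end in/out call for month t; with lag=1 the
    position held during month t is driven by the signal from month t-1,
    which is the honest treatment when quoted prices are monthly averages.
    The strategy is assumed to start long (consistent with its long bias)."""
    out = []
    for t in range(len(risk_returns)):
        in_market = signals[t - lag] if t >= lag else True
        out.append(risk_returns[t] if in_market else safe_returns[t])
    return out
```

With lag=0 the function reproduces the cheating version, so the cost of honesty can be measured by running both and comparing.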

To avoid this kind of inadvertent cheating in our backtests, we have to make extra sure that the prices in any index that we test our strategies on are closing monthly prices. If an index is in any way put together through the use of averaging of different prices in the month–and some indices are put together that way, particularly older indices–then a test of the moving average strategy, and of all trend-following strategies more generally, will produce inaccurate, overly-optimistic results.

The Results: MMA Tested in 235 Indices and 120 Individual Securities

We’re now ready for the results.  I’ve divided them into eleven categories: U.S. Equities, U.S. Factors, U.S. Industries, U.S. Sectors, Foreign Equities in U.S. Dollar Terms, Foreign Equities in Local Currency Terms, Global Currencies, Fixed Income, Commodities, S&P 500 Names, and Bubble Roadkill Names.

In each test, our focus will be on three performance measures: Annual Total Return (reward measure), Maximum Drawdown (risk measure), and the Sortino Ratio (reward-to-risk measure).  We’re going to evaluate the strategy against the X/Y portfolio on each of these measures.  If the strategy is adding genuine value through its timing, our expectation is that it will outperform on all of them.

For the three performance measures, we’re going to judge the strategy on its win percentage and its excess contribution.  The term “win percentage” refers to the percentage of individual backtests in a category that the strategy outperforms on.  We expect strong strategies to post win percentages above 50%.  The terms “excess annual return”, “excess drawdown”, and “excess Sortino” refer to the raw numerical amounts that the strategy increases those measures by, relative to the X/Y portfolio and fully invested buy and hold.  So, for example, if the strategy improves total return from 8% to 9%, improves drawdown from -50% to -25%, and increases the Sortino Ratio from 0.755 to 1.000, the excess annual return will be 1%, the excess drawdown will be +25%, and the excess Sortino will be 0.245.  We will calculate the excess contribution of the strategy for a group of indices by averaging the excess contributions of each index in the group.

The Sortino Ratio, which will turn out to be the same number for both the X/Y portfolio and a fully invested buy and hold portfolio, will serve as the final arbiter of performance. If a strategy conclusively outperforms on the Sortino Ratio–meaning that it delivers both a positive excess Sortino Ratio, and a win percentage on the Sortino Ratio that is greater than 50%–then we will deliver a verdict of “Outperform.”  Otherwise, we will deliver a verdict of “Underperform.”
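The precise definition used here sits behind the linked glossary, so I’ll assume the standard formulation: mean return in excess of a target, divided by downside deviation (the root-mean-square of shortfalls below the target):

```python
def sortino_ratio(monthly_returns, target=0.0):
    """Mean return above `target`, divided by downside deviation.

    Downside deviation is the root-mean-square of the shortfalls below
    the target, with non-shortfall months counted as zero."""
    n = len(monthly_returns)
    mean_excess = sum(monthly_returns) / n - target
    shortfalls = [min(0.0, r - target) for r in monthly_returns]
    downside_dev = (sum(s * s for s in shortfalls) / n) ** 0.5
    return mean_excess / downside_dev if downside_dev > 0 else float("inf")
```

Because only downside months enter the denominator, a strategy that truncates losses while keeping upside months intact raises the ratio–exactly the behavior a successful trend-following overlay is supposed to produce.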

Now, to the results:

(Note: if you have questions on how to read the charts and tables, or on how terms are defined conceptually or mathematically, click here for a guide.)

U.S. Equities, 1871 – 2015: The strategy was tested in U.S. equities across different date ranges and under different choices of safe assets (treasury bills, 10 year treasury notes, investment-grade corporate bonds, and gold). Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

U.S. Size, Momentum, and Value Factor Indices, 1928 – 2015: The strategy was tested in 30 different U.S. factor indices–size, momentum, and value, each separated into 10 decile indices.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

30 U.S. Industries, 1928 – 2015: The strategy was tested in 30 different U.S. industry indices.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

10 U.S. Sectors, 1928 – 2015: The strategy was tested in 10 different U.S. sector indices. Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

Foreign Equities in U.S. Dollar Terms, 1971 – 2015: The strategy was tested in 77 foreign country equity indices, quoted in U.S. dollar terms.  A side test on popular Ishares country ETFs was included.  Interestingly, the performance in the Ishares ETFs was worse than the performance in the country indices.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

Foreign Equities in Local Currency Terms, 1971 – 2015: The strategy was tested in 32 different foreign country equity indices, quoted in local currency terms.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

Foreign Equities in Local Currency Terms, 1901 – 1971: The strategy was tested in 8 different foreign country equity indices, quoted in local currency terms, going back to a much earlier period of history.  Verdict: Outperform.

Global Currencies, 1973 – 2015: The strategy was tested in 22 global currency pairs. Verdict: Outperform.  The strategy’s performance in currency was its strongest performance of all.  Click here and scroll down to see a slideshow of the charts and tables.

Fixed Income, 1928 – 2015: The strategy was tested in 11 different fixed income indices. Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

Commodities, 1947 – 2015: The strategy was tested in 2 different commodity indices–spot gold and spot oil.  Testing in rolled futures contract indices was also conducted, but is not worth including, given the awful performance of a buy and hold strategy in these indices, particularly over the last 10 years, where the futures chains have spent most of their time in contango, inflicting negative roll yields.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

100 Largest S&P 500 Stocks, 1963 – 2015: The strategy was tested in the largest 100 S&P 500 stocks that have been continuously publicly traded for at least 20 years.  In contrast to the other tests, the strategy’s performance in this test was terrible.  Not only did the strategy fail to add any value, it actually subtracted value, producing significantly inferior return and risk numbers relative to the X/Y portfolio, despite taking on the same cumulative exposures.  Verdict: Underperform.  Click here and scroll down to see a slideshow of the charts and tables.

Bubble Roadkill Sample, 1981 – 2015: The strategy performed so poorly in the test on individual large company stocks that we decided to see whether we could come up with a sample of individual company stocks in which the strategy did work.  So we ran the strategy in the context of individual companies that have experienced large boom-bust cycles, and that are now nearly worthless, at least relative to their prior market capitalizations.  Examples include notorious tech names that boomed in the 90s and busted at the turn of the century, notorious housing and finance names that boomed in the early-to-mid aughts and busted in the Global Financial Crisis, and notorious commodity names that boomed in the aughts and that are busting as we speak.  The expectation was that the strategy’s performance in these names would improve significantly, given the opportunity to ride a boom and exit prior to a terminal bust.  The results showed that the performance did, in fact, improve–but the improvement wasn’t as large as hoped for.  The strategy strongly underperformed in a number of busted names–e.g., Freeport McMoran, Aeropostale, MBIA, and Q-Logic.  Verdict: Outperform.  Click here and scroll down to see a slideshow of the charts and tables.

The following table summarizes the strategy’s performance across all tests on the criterion of Annual Total Return.  The excess total returns and total return win percentages are shown relative to the X/Y portfolio and a portfolio that’s fully invested in the risk asset, abbreviated RISK.


The performance is excellent in all categories except the individual S&P 500 stock category, where the performance is terrible.  In the individual S&P 500 stock category, the strategy produces consistently negative excess returns and a below 50% win percentage relative to X/Y.  In roughly 3 out of 4 of the sampled individual stocks, the strategy earns a total return that is less than the total return of a portfolio that takes on the same risk exposure without doing any timing.  What this means is that with respect to total return, the strategy’s timing performance in the category is worse than what random timing would be expected to produce.

The following table summarizes the strategy’s performance on the criterion of Maximum Drawdown.  The excess drawdowns and drawdown win percentages are shown relative to the X/Y portfolio and a portfolio that’s fully invested in the risk asset, abbreviated RISK.


Note that there’s some slight underperformance relative to the X/Y portfolio in foreign equities and global currencies.  But, as we will see when we look at the final arbiter of performance, the Sortino Ratio, the added return more than makes up for the increased risk.  Once again, the strategy significantly underperforms in the individual S&P 500 stock category, posting a below 50% win percentage and exceeding the X/Y portfolio’s drawdown.  As before, with respect to drawdown risk, the strategy’s timing decisions in the category end up being worse than what random timing would be expected to produce.

The following table summarizes the strategy’s performance on the criterion of the Sortino Ratio, which we treat as the final arbiter of performance.  The excess Sortinos and Sortino win percentages for the strategy are shown relative to the X/Y portfolio and a portfolio that’s fully invested in the risk asset, abbreviated RISK.


The performance is excellent in all categories except the individual S&P 500 stock category.  Importantly, the excess Sortinos for foreign equities and global currencies are firmly positive, confirming that the added return is making up for the larger-than-expected excess drawdown and lower-than-expected drawdown win percentages noted in the previous table.

The strategy’s performance in individual S&P 500 securities, however, is terrible.  In 4 out of 5 individual S&P 500 stocks, the strategy produces a Sortino Ratio inferior to that of buy and hold.  This result again tells us that on the criterion of risk-reward, the strategy’s timing performance in the category is worse than what random timing would be expected to produce.

To summarize, MMA strongly outperforms the X/Y portfolio on all metrics and in all test categories except for the individual S&P 500 stock category, where it strongly underperforms.  If we could somehow eliminate that category, then the strategy would pass the backtest with flying colors.

Unfortunately, we can’t ignore the strategy’s dismal performance in the individual S&P 500 stock category.  The performance represents a failed out-of-sample test in what was an extremely large sample of independent securities–100 in total, almost a third of the entire backtest.  It is not a result that we predicted, nor is it a result that fits with the most common explanations for why the strategy works.  To make matters worse, most of the equity and credit indices that we tested are correlated with each other.  And so the claim that the success in the 235 indices should count more in the final analysis than the failure in the 100 securities is questionable.

A number of pro-MMA and anti-MMA explanations can be given for the strategy’s failure in individual securities. On the pro-MMA side, one can argue that there’s survivorship bias in the decision to use continuously traded S&P 500 stocks in the test, a bias that reduces the strategy’s performance.  That category of stocks is likely to have performed well over the years, and unlikely to have included the kinds of stocks that generated deep drawdowns.  Given that the strategy works by protecting against downside, we should expect the strategy to underperform in the category.  This claim is bolstered by the fact that the strategy performed well in the different category of bubble roadkill stocks.

On the anti-MMA side, one can argue that the “stale price” effect discussed in an earlier piece creates artificial success for the strategy in the context of indices.  That success then predictably falls away when individual stocks are tested, given that individual stocks are not exposed to the “stale price” effect.  This claim is bolstered by the fact that the strategy doesn’t perform as well in Ishares MSCI ETFs (which are actual tradeable individual securities) as it does in the MSCI indices that those ETFs track (which are idealized indices that cannot be traded, and that are subject to the “stale price” effect, particularly in illiquid foreign markets).

The following table shows the two performances side by side for 14 different countries, starting at the Ishares ETF inception date in 1996:


As the table confirms, the strategy’s outperformance over the X/Y portfolio is significantly larger when the strategy is tested in the indices than when it’s tested in the ETF securities that track the indices.  Averaging all 14 countries together, the total return difference between the strategy and the X/Y portfolio in the indices ends up being 154 bps higher than in the ETF securities.  Notice that 154 bps is roughly the average amount that the strategy underperforms the X/Y portfolio in the individual S&P 500 stock category–probably a coincidence, but still interesting.

In truth, none of these explanations capture the true reason for the strategy’s underperformance in individual stocks.  That reason goes much deeper, and ultimately derives from certain fundamental geometric facts about how the strategy operates.  In the next piece, I’m going to expound on those facts in careful detail, and propose a modification to the strategy based on them, a modification that will substantially improve the strategy’s performance.  Until then, thanks for reading.

Links to backtests: [U.S. Equities | U.S. Factors | U.S. Industries | U.S. Sectors | Foreign Equities in USD | Foreign Equities in Local Currency | Global Currencies | Fixed Income | Commodities | Largest 100 Individual S&P 500 Stocks | Bubble Roadkill]

(Disclaimer: The information in this piece is personal opinion and should not be interpreted as professional investment or tax advice.  The author makes no representations as to the accuracy, completeness, suitability, or validity of any of the information presented.)

Posted in Uncategorized | Comments Off on Trend Following In Financial Markets: A Comprehensive Backtest