B.F. Skinner and Operant Conditioning: A Primer for Traders, Investors, and Economic Policymakers

Markets and economies are agglomerations of interconnected human behaviors.  It’s a surprise, then, that in the fields of finance and economics, the work of history’s most famous behavioral psychologist, B.F. Skinner, is rarely mentioned. In this piece, I’m going to present an introduction to Skinner’s general theory of behavior, drawing attention to insights from his research that can be applied to trading, investing, and economic policymaking.  The current piece will serve as a primer for the next one, in which I’m going to discuss the insights with a greater practical emphasis.

If you’re like most, you come to this blog to read about finance and economics, not about psychology or philosophy, so you’re probably ready to close the window and surf on to something else.  But I would urge you to read on.  Skinner’s work was deep and profound–brimming with insights into the way reality and human beings work.  Anyone interested in finance and economics will benefit from being familiar with it.

Pavlovian Conditioning, Operant Conditioning and Selection by Consequence

In the early 1900s, Russian physiologist Ivan Pavlov conducted experiments on canine digestion.  He exposed restrained dogs to the scent of meat powder, and measured the extent to which they salivated in response to it.  In the course of these experiments, he stumbled upon a groundbreaking discovery: Dogs that had been put through experiments multiple times would salivate before any meat powder was presented, in response to the mere sight of lab assistants entering the room.

Pavlov hypothesized that repeated associations between “lab assistants” and “the smell of meat” had conditioned the dogs to respond to the former in the same way as the latter–by salivating.  To test this hypothesis, Pavlov set up another experiment.   He rang a bell for the dogs to hear, and then exposed them to the scent of meat powder.  He found that after repeated associations, the dogs would salivate in response to the mere sound of the bell, before any meat powder was presented.

Around the same time that Pavlov conducted his experiments on salivation in dogs, the American psychologist Edward Thorndike conducted experiments on learning in cats.  In these experiments, Thorndike trapped cats inside “puzzle” boxes that could only be opened by pushing on various built-in levers.  After trapping the cats, he timed how long it took them to push on the levers and escape.  When they escaped, he rewarded them with food and put them back inside the boxes to escape again. He noticed that cats that had successfully escaped took progressively less time to escape on each subsequent trial.  He concluded that the cats were “learning” from the trials.

In the late 1930s, Harvard psychologist B.F. Skinner synthesized the discoveries of Pavlov, Thorndike, and others into a coherent system, called Behaviorism.  Behaviorism sought to explain the behaviors of organisms, to include the behaviors of human beings, purely mechanistically, in terms of causal interactions with the environment, rather than in terms of nebulous, unscientific concepts inherited from religious tradition: “soul”, “spirit”, “free-will”, etc.

Skinner distinguished between two types of conditioning:

Classical Conditioning: The kind of conditioning that Pavlov discovered, which involves the repeated association of two stimuli–an unconditioned stimulus (the smell of meat) and a conditioned stimulus (the sound of a bell)–in a way that causes the conditioned stimulus (the sound of a bell) to evoke the same response (salivation) as the unconditioned stimulus (the smell of meat).  The unconditioned stimulus (the smell of meat) is called “unconditioned” because its connection to the response (salivation) is hard-wired into the organism.  The conditioned stimulus (the sound of a bell) is called “conditioned” because its connection to the response (salivation) is not hard-wired, but rather is formed through the “conditioning” process, i.e., the process of changing the organism through exposure.

Operant Conditioning: The kind of conditioning that Thorndike discovered, wherein the subsequent frequency of an organism’s behavior is increased or decreased by the consequences of that behavior.  When behavior is followed by positive outcomes (benefit, pleasure), the behavior goes on to occur more often; when behavior is followed by negative outcomes (harm, pain), the behavior goes on to occur less often, if at all.  Operant conditioning differs from Pavlovian conditioning in that it involves the learning of a voluntary behavior by the consequences of that behavior, rather than the triggering of an automatic, involuntary response by exposure to repeated associations.

Skinner is known in popular circles for the fascinating experiments that he conducted on operant conditioning, experiments in which he used the technique to get animals to do all kinds of weird, unexpected things.  In the following clip, Skinner shares the result of one such experiment, an experiment in which he successfully taught pigeons to “read” English:

Skinner liked to explain operant conditioning in terms of the analogue of evolution.  Recall that in biological evolution, random imperfections in the reproductive process lead to infrequent mutations.  These mutations typically add zero or negative value to the organism’s fitness.  But every so often, purely by chance, the mutations end up conferring advantages that aid in survival and reproduction.  Organisms endowed with the mutations go on to survive and reproduce more frequently than their counterparts, leaving more copies of the mutations in subsequent generations, until the mutations spread throughout the entire reproductive population. That is how the adapted species is formed. We human beings, with our complex brains and bodies, are direct descendants of those organisms–human and pre-human–that were “lucky” enough to be endowed with the “best” mutations of the group.

Biological evolution involves what Skinner brilliantly called “selection by consequence.” Nature continually “tries out” random possible forms.  When the forms bring good consequences–i.e., consequences that lead to the survival and successful self-copying of the forms–it holds on to them.  When they bring bad consequences–i.e., consequences that lead to the death of the forms–it discards them.  Through this process of trial-and-error, it extracts order from chaos.  There is no other way, according to Skinner, for nature to create complex, self-preserving systems–biological or otherwise.  It has no innate “intelligence” from which to design them, no ability to foresee survivable designs beforehand based on a thought process.

Skinner viewed animal organisms, to include human beings, as microcosms of the same evolutionary process–“selection by consequence.”  An animal organism, according to Skinner, is a highly complex behavior selection machine.  As it moves through its environment, it is exposed to different types of behaviors–some that it tries out on its own, randomly, or in response to causal stimuli, and some that it observes others engage in.  When the behaviors produce positive consequences (benefit, pleasure, etc.), its brain and psychology are modified in ways that cause it to engage in them more often.  When the behaviors produce negative consequences (harm, pain, etc.), its brain and psychology are modified in ways that cause it to refrain from them in the future.  Through this process, the process of operant conditioning, the organism “learns” how to interact optimally with the contingencies of its environment.

According to Skinner, brains with the capacity for operant conditioning are themselves consequences of evolution.  Environmental conditions are always changing, and therefore the specific environment that an organism will face cannot be fully known beforehand.  For this reason, Nature evolved brains that have the capacity to form optimal behavioral tendencies based on environmental feedback, rather than brains that have been permanently locked into a rigid set of behaviors from the get-go.

Contrary to popular caricature, Skinner did not think that animal organisms–human or otherwise–were “blank slates.”  He acknowledged that they have certain unchangeable, hard-wired biological traits, put in place by natural selection.  His point was simply that one of those traits, a hugely important one, is the tendency for certain behaviors of organisms–specifically, “voluntary” behaviors, those that arise out of complex information processing in higher regions of the brain–to be “learned” by operant conditioning, by the consequences that reality imposes.

The Mechanics of Operant Conditioning: Reinforcement and Punishment

Skinner categorized the feedback processes that shape behaviors into two general types: reinforcement and punishment.  Reinforcement occurs when a good–i.e., pleasurable–consequence follows a behavior, causing the behavior to become more frequent–or, in non-Skinnerian cognitive terms, causing the organism to experience an increased desire to do the behavior again.  Punishment occurs when a bad–i.e., painful–consequence follows a behavior, causing the behavior to become less frequent–or, in non-Skinnerian terms, causing the organism to experience an aversion to doing the behavior again.  

In the following clip, Skinner demonstrates the technique of operant conditioning, using it to get a live pigeon to turn 360 degrees:

Skinner starts by putting the pigeon near a machine that dispenses food on a push-button command.  He then waits for the pigeon to turn slightly to its left.  In terms of the analogue of biological evolution, this period of waiting is analogous to the period wherein Nature waits for the reproductive process to produce a mutation that it can then “select by consequence.”  The pigeon’s turn is not something that Skinner can force out of the pigeon–it’s a behavior that has to randomly emerge, as the pigeon tries out different things in its environment.

When Skinner sees the turn happen, he quickly pushes the button and dispenses the reward, food.  He then waits for the pigeon to turn again–which the pigeon does, because the pigeon starts to catch on.  But this time, before dispensing the food, he waits for the pigeon to turn a bit farther.  Each time, he waits for the pigeon to turn farther and farther before dispensing the food, until the pigeon has turned a full 360 degrees.  At that point, the task is complete.  The pigeon keeps fully turning, and he keeps feeding it after it does so.

What is actually happening in the experiment?  Answer: the pigeon’s brain and psychology are somehow being modified to associate turning 360 degrees with food, such that whenever the pigeon is hungry and wants food, it turns 360 degrees.  If we want, we can describe the modification as a modification in a complex neural system, a physical brain that gets rewired to send specific motor signals–“turn left, all the way around”–in response to biological signals of hunger.  We can also describe the process cognitively, as involving an acquired feeling that arises in the pigeon–that when the pigeon gets hungry, it feels an urge or impetus to turn 360 degrees to the left, either automatically, or because it puts two and two together in a thinking process that connects the idea of turning with the idea of receiving food, which it wants.  Skinner famously preferred the former, the non-cognitive description, arguing that cognitive descriptions are unobservable and therefore useless to a science of behavior.  But cognitive descriptions work fine in the current context.

To keep the conditioned behavior in place, the conditioner needs to maintain the reinforcement.  If the reinforcement stops–if the pigeon turns, and nothing happens, and then turns again, and nothing happens again, and so on–the behavior will eventually disappear.  This phenomenon is called “extinction.”  It’s a phenomenon that Pavlov also observed: if the association between the bell and the arrival of meat powder is not maintained over time, the dogs will stop salivating in response to the bell.  

Importantly, the capacity for conditioned behavior to go extinct in the absence of reinforcement is itself a biological adaptation.  Learning to behave optimally isn’t just about learning to do certain things, it’s also about unlearning them when they stop working.  An organism that is unable to unlearn behaviors that have stopped working will waste large amounts of time and energy doing useless things, and will end up falling behind in the evolutionary race.

Skinner noted that effective reinforcement needs to be clearly connectable to the behavior, preferably close to it in time.  If food appears 200 days after the pigeon turns, the pigeon is not going to develop a tendency to turn.  The connection between turning and receiving food is not going to get appropriately wired into the pigeon’s brain.  At the same time, the reward doesn’t have to be delivered after every successful instance of the behavior.  A “variable” schedule of reinforcement can be imposed, in which the reward is only delivered after a certain number of successful instances, provided that the number is not too high.

Skinner noted that when an organism observes a consequence in response to a behavior, it “generalizes.”  It experiments with similar behaviors, to see if they will produce the same consequence.  For example, the pigeon who received food by pecking a disk in the first video will start trying to peck similar objects, in the hopes that pecking them will produce a similar release of food.  Eventually, after sufficient modification by the environment, the organism learns to “discriminate.”  It learns that the behavior produces a consequence in one situation, but not in another.

Extension to Human Beings: The Example of Gambling

The natural inclination is to dismiss Skinner’s discoveries as only being applicable to the functioning of “lesser” organisms–rats, pigeons, dogs, and so on–and not applicable to the functioning of human beings.  But the human brain, Skinner argued, is just a more computationally advanced version of the brains of these other types of organisms.  The human brain comes from the same place that their brains come from, having been progressively designed by the same designer, natural selection.  We should therefore expect the same kind of learning process to be present in it, albeit in a more complex, involved form. Skinner demonstrated that it was present, in experiments on both human children and human adults.

Psychologists have long struggled with the question, why do human beings gamble? Gambling is an obviously irrational behavior–an individual takes on risk in exchange for an expected return that is less than zero.  Why would anyone do that? Marx famously thought that people, particularly the masses, do it to escape from the stresses of industrialization.  Freud famously thought that people–at least certain men, the clients he diagnosed–do it to unconsciously punish themselves for unconscious guilt associated with the Oedipal complex–the sexual attraction that they unconsciously feel–or at least unconsciously felt, as children–for their mothers.

Contra Marx and Freud, Skinner gave the first intellectually respectable psychological answer to the question.  Human beings gamble, and enjoy gambling, even though the activity is pointless and irrational, because they’ve been subjected to a specific schedule of reinforcement–a “variable” schedule, where the reward is not provided every time, but only every so often, leaving just enough “connection” between the behavior and the reward to forge a link between the two in the brain and psychology of the subject.

Skinner showed that in order for the pigeon to maintain the pecking and turning behaviors, it doesn’t need to get the reward every time that those behaviors occur.  It just needs to get the reward every so often–that will be enough to keep the pigeon engaging in the behaviors on an ongoing basis.  Skinner noted that the same was true of gamblers. Gamblers don’t need to win every time, they just need to win every so often.  A grandiose victory–a jackpot–that occurs every so often is more than enough to imbue them with inspiring thoughts of winning, and an associated appetite to get in and play.  It is the business of a casino to optimize the schedule at which gamblers win, so that they win just enough to sense that victory is within their reach, just enough to feel the associated thrill and excitement each time they pull the lever.  An efficient casino operation will not afford gamblers any more victories than that–certainly not enough for them to actually make money on a net basis, which would represent the casino’s net loss.

The process through which the gambler is conditioned to gamble is obviously not as simple as the process through which the pigeon is conditioned to peck.  For the human being, there is the complex and vivid mediation of thought, memory, emotion, impulse, and the internal struggle that arises when these mental states push on each other in conflicting ways.  But the fact remains that the reinforcement of winning is ultimately what gives rise to the appetite to play, the psychological pull to engage in the behavior again.  If you were to completely take that reinforcement away, the appetite and pull would eventually disappear, go extinct–at least in normal, mentally healthy human beings.  If casinos were designed so that no one ever won anything, no one ever experienced the thrill and excitement of winning, then no one would ever bother with the activity.  Casinos would not have any patrons.

Skinner’s insights here have a clear application to the understanding of stock market behavior, an application that we’re going to examine more closely in the next piece.  To get a sense of the application, ask yourself: what, more than anything else, gives investors the confidence and appetite to invest in risky asset classes such as equities?  Answer: the experience of actually investing in them, and being rewarded for it, consistently.  Sure, you can tell people the many reasons why they should invest in equities–that’s all wonderful.  But to them, it’s just verbiage, someone’s personal opinion.  In itself, it’s not inspiring. What’s inspiring is the actual experience of taking the risk, and winning–making money, on a consistent basis.  Then you come to trust the process, believe in it, viscerally.  You develop an appetite for more.  As many of us know from our own mistakes, the experience can be quite dangerous–enough to make clueless novices think they are seasoned experts.

On the flip side, what, more than anything else, causes investors to become averse to investing in risky asset classes such as equities?  Again, the experience of actually investing in them, and getting badly hurt.  A dark cloud of danger and guilt will then get attached to the activity.  The investor won’t want to even think about going back to it for another try–at least not until sufficient time has passed for extinction to occur.  This is operant conditioning in practice.  

The concepts of Classical Conditioning, Operant Conditioning, Extinction, Generalization, Discrimination, and many other concepts that Skinner researched have a role in producing the various trends and patterns that we see play out in markets.  Understanding these processes won’t give us a crystal ball to use in predicting the market’s future, but it can help us better understand, and more quickly respond to, some of the changes that happen as economic and market cycles play out.

Operant Conditioning: Observations Relevant to Traders, Investors, and Economic Policymakers

In this final section, I’m going to go over some unique observations that Skinner made in the course of his research that are relevant to traders, investors, and economic policymakers.  The observation for which Skinner is probably most famous is the observation that reinforcement is a more effective technique for producing a desired behavior than punishment.  We want the pigeon to turn.  We saw that giving it a reward–food–works marvelously to produce that behavior.  But now imagine that we were to try to use punishment to generate the behavior.  Suppose that we were to electrically shock the pigeon whenever it spent more than, say, a minute without turning.  Would the shocks cause the pigeon to turn?  No–at least not efficiently.

Instead of turning, as we want it to, the pigeon would continue to do whatever is natural to it, moving in whatever direction it feels an impulse to move in.  When the shocks come, it would simply try to avoid and escape from them.  It would tense up, flinch, flail around, flee, whatever it can do.  Importantly, it wouldn’t have anything to steer it toward the desired behavior and build a specific appetite for that behavior.  Punishment doesn’t create appetite; it creates fear.  Fear of doing something other than the desired behavior does not imply an appetite to do the desired behavior.

Skinner was adamant in extending this insight to the human case.  Punishment–the imposition of painful consequences–cannot efficiently get a person to engage in a wanted behavior.  It is not effective at creating the internal drive and motivation that the person needs in order to whole-heartedly perform the behavior.  To the extent that the person does perform the behavior in response to the threat of punishment, the behavior will be awkward, unnatural, artificial, done under duress rather than out of genuine desire. Instead of cooperating, the individual will try to come up with ways to avoid the punishment–whatever they need to do to get to a place where they can do what they actually want to do, without suffering negative consequences.

Imagine that you are my overweight child.  I’m trying to get you to exercise.  Sure, if I threaten you with a painful punishment for not exercising, you might go exercise.  But your heart isn’t going to be in the activity.  You’re going to go through the motions half-assed, doing the absolute bare minimum to keep me off your back.  Ultimately, if you really don’t want to exercise, you’re going to try to get around my imposition–by faking, hiding, creating distractions, buying time, pleading, whatever.  You’re trapped in a situation where none of your options are perceived to be good.  Rather than accept the lesser evil, you’re going to try to find a way out.

To motivate you to exercise, the answer is not to punish you for not exercising, but to try to get you to see and experience the benefits of exercising for yourself, to try to put you on a positive trajectory, where you exercise, you make progress in losing weight, you end up looking and feeling better, and that reward gives you motivation to continue to exercise regularly.  If that’s not possible, then the answer is to provide you with other rewards that register in your value system–money, free time, whatever.  When people engage in an activity, and make progress towards their goals and values–whether related to the activity, or not–the progress becomes a source of strength, momentum, optimism, hope.  It sows the seeds for further progress.

Skinner’s observation here is particularly relevant to the debate on how best to stimulate a depressed economy–whether to use expansive fiscal policy or expansive monetary policy, a debate that I’m going to elaborate on in a subsequent piece.  Expansive fiscal policy is a motivating, reward-oriented stimulus–it motivates investors and corporations to invest in the real economy by directly creating demand and the opportunity for profit. Expansive monetary policy–to include the imposition of negative real and especially negative nominal interest rates–is a repressive, punishment-oriented stimulus.  It tries to motivate investors and corporations to invest in the real economy by taking away their wealth if they don’t.

Do investors and corporations acquiesce to the punishment?  No, they try to find ways around it–recycling capital through buybacks and acquisitions, levering up safe assets, reaching for yield on the risk curve, and engaging in other economically-dubious behaviors designed to allow them to generate a return without requiring them to do what they don’t want to do–tie up new money in an environment that they don’t have confidence in. Reasonable people can disagree on the extent to which the repressive policies that provoke these behaviors are financially destabilizing, but it’s becoming more and more clear that they aren’t effective at achieving their policy goals.  They don’t work.  Skinner most definitely would have recommended against them, at least in scenarios where a powerful, reward-oriented stimulus–e.g., expansive fiscal policy–was available.

Another important observation that Skinner made, this one particularly relevant to human beings, pertains to the relationship between operant conditioning and rules.  Rules are ways that we efficiently codify behavioral lessons into language, to allow for easy transmission to others.  If I’m trying to teach you how to do something, I might give you a rule for how to do it.  You will then follow the rule–put it into practice.  Crucially, positive consequences will need to follow from your implementation of the rule–you will need to see the rule work in your own practice.  Without that reinforcement, continued adherence to the rule will become increasingly difficult.

As human beings, there’s nothing that we hate more than rules imposed on us that we don’t understand, and that we’ve never seen work.  Do X, don’t do Y, but do Z, but not if you’ve already done Q–and so on.  We might be able to gather the strength to follow through on these complex instructions, but unless we start seeing benefits, results, we’re not going to be able to maintain our adherence.

Our aversion to following rules that have not yet been operantly conditioned, i.e., tied in our minds to beneficial consequences, is the reason why we often prefer to shoot from the hip when doing things, as opposed to doing them by executing externally-provided instructions.  Take the example of a child that receives a new toy for Christmas that requires assembly.  The last thing the child will want to do is bust out the user’s manual, go to page one, and execute the complicated assembly instructions.  To the contrary, the child is going to want to try to put the toy together on her own–“no, let me do it!”–without the external burden of rules.  And children aren’t unique in that respect–we adults are the same way.  We prefer to come to solutions not by obediently carrying out other people’s orders, but by engaging in our own curious experimentation, allowing the observable consequences of our maneuvers–this worked, try it over there, that didn’t work, try something else–to naturally guide us to the right answers.

One of the reasons why investing is fun, in my opinion, is that you don’t need to follow any rules to do it.  You can wing it, go in and buy whatever you like, based on whatever your gut tells you to do–and still do well, sometimes just as well as seasoned professionals. This facet of the activity makes it uniquely enjoyable and entertaining, in contrast with activities where success requires tedious adherence to externally-imposed rules and constraints.

A final important observation that Skinner made, this one only relevant to humans and certain higher mammals, pertains to language and thought.  Skinner viewed language and thought as behaviors that are formed, in part, through conditioning–both Pavlovian and Operant.  From a Pavlovian perspective, linguistic connections between words and meanings are formed through exposure to repeated associations.  From an operant perspective, what we think and say is followed by consequences.  Those consequences condition what we think and say going forward.

How does an infant connect the oral sound “Daddy” to the man who just walked into the room?  He connects the two because Mommy says those words whenever Daddy walks in.  How does he learn to say them himself?  The answer, at least in part, is through a process of reinforcement.  Whenever he says “Da da” in response to seeing Daddy, everyone in the room turns their attention to him and expresses endearment and approval–“Oh, that’s so cute, Billy!  Say it again… say it again for Daddy!”  Though barely out of the womb, the organism is already able to “select” beneficial behavior from among different possibilities so as to perform it more frequently.

We might think that the influence of operant conditioning on our thinking and our speaking–or, to use Skinner’s preferred term for these activities, on our “verbal behavior”–ends in youth. It most certainly does not.  The influence continues throughout our lives, shaping us in subtle ways that we often fail to notice.  The emotional reactions that occur inside us, when we think and say things, and that occur outside us, in the form of the approval and disapproval of other members of our verbal communities, have a strong influence on where our thought processes and the statements that express them end up going.

Unfortunately, the internal and external contingencies that shape how we think and speak often aren’t truth-oriented.  They’re often oriented towards other values–the building and maintaining of positive relationships, the securing of desired resources, the demonstration of status, the achievement of resolution, and so on.  In many contexts, this lack of truth-orientation isn’t a problem, because there aren’t actual tangible harms associated with thinking and saying things that aren’t true.  “Do I look fat in this dress?” — “No, you look great honey, <cough>, <cough>.”  The world obviously isn’t going to end if a husband says that to his wife.

But in the arena of finance–at least the part of the arena behind the curtain, where actual financial decisions are made–the fact that our thinking and speaking can be shaped by factors unrelated to truth, or worse, factors opposed to truth, is a huge problem.  It’s a huge problem because there are actual, tangible consequences to being wrong.

Given that problem, we need to be vigilant about truth when making investment decisions. We need to routinely check to ensure that we’re thinking what we’re thinking, and saying what we’re saying, because we genuinely believe it to be true, or likely to be true, not because we’ve been conditioned to think it or say it by the effects of various hidden reinforcers.  We want our thoughts and statements to represent an honest description of reality, as we see it, and not devolve into ulterior mechanisms through which we try to look and sound the part, or earn status and credibility, or win approval and admiration, or acquire power in an organization, or make peace out of conflict, or secure the satisfaction of “being right”, or crush enemies and opponents, or smooth over past mistakes, or revel in the pride of having discovered something important, or preserve a sacred idea or worldview, and so on.  These hidden contingencies, to the extent that they are allowed to creep into the financial decision-making process and shape our verbal behaviors, can be costly.


Beer before Steel: Ranking 30 Industries by Fundamental Equity Performance, 1933 to 2015

From January of 1933 through July of this year, beer companies produced a real, inflation-adjusted total return of 10% per year. Steel companies, in contrast, produced a return of 5%.  This difference in performance, spread out over 82 years, is enormous–the difference between a $1,000 investment that turns into roughly $2,600,000 in real terms, and a $1,000 investment that turns into roughly $57,000.
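As a quick sanity check on the compounding arithmetic, here is a minimal sketch in Python. It assumes the rounded 10% and 5% annualized real returns quoted above and a holding period of roughly 82.5 years; the exact dollar figures shift somewhat with the unrounded returns.

```python
# Minimal compounding sketch.  The 10% and 5% annualized real returns and the
# ~82.5-year horizon (January 1933 through July 2015) are the rounded figures
# quoted in the text; exact results depend on the unrounded returns.

def future_value(initial: float, annual_return: float, years: float) -> float:
    """Grow an initial investment at a constant annual rate for a given number of years."""
    return initial * (1 + annual_return) ** years

YEARS = 82.5

print(f"Beer at 10%/yr:  ${future_value(1000, 0.10, YEARS):,.0f}")   # roughly $2.6 million
print(f"Steel at 5%/yr:  ${future_value(1000, 0.05, YEARS):,.0f}")   # roughly $56,000 at exactly 5%
```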

Given the extreme difference in past performance, we might think that it would be a good idea to overweight beer stocks in our portfolios and underweight steel stocks.  Whether it would actually be a good idea would depend on the underlying reason for the performance difference. Two very different reasons are possible:

(1) Differences in Return on Investment (ROI): Beer companies might be better businesses than steel companies, with higher ROIs.  Wealth invested and reinvested in them might grow faster over time, and might be impaired and destroyed less frequently.

(2) Change in Valuation: Beer companies might have been cheap in 1933, and might be expensive in 2015.  Steel companies, in contrast, might have been expensive in 1933, and might be cheap in 2015.

If the reason for the historical performance difference is (1) Difference in ROI, and if we expect the difference to persist into the future, then we obviously want to overweight beer companies and underweight steel companies–assuming, of course, that they trade near the same valuations.  But if the reason is (2) Change in Valuation, then we want to do the opposite.

The distinction between (1) and (2) speaks to an important challenge in investing.  Asset returns tend to mean-revert.  We therefore want to own assets that have underperformed, all else equal.  Assets that have underperformed have more “room to run”, and will tend to generate stronger subsequent returns than favored assets that have already had their day. But, in seeking out assets that have underperformed, we need to distinguish between underperformance that is likely to continue into the future, and underperformance that is likely to reverse, i.e., revert to the mean.  That distinction is not always an easy distinction to make.  Making it requires distinguishing, in part, between (1) underperformance that’s driven by poor ROI–structural lack of profitability in the underlying business or industry, and (2) underperformance that’s driven by negative sentiment and the associated imposition of a low valuation.  The latter is likely to be followed by a reversion to the mean; the former is not.

In this piece, I’m going to share charts of the fundamental equity performances of different U.S. industries, starting in January 1933 and ending in July of 2015.  I put the charts together earlier today in an effort to ascertain the extent to which differences in the historical performances of different industries have been driven by factors that are structural to the industries themselves, rather than cyclical coincidences associated with the choice of starting and ending dates–the possibility, for example, that beer stocks were in a bear market in 1933, with severely depressed valuation, and are now in a bull market, with elevated valuation, where the change in valuation, and not any underlying strength in beer-making as a business, explains the strong performance.

The charts are built using data from the publicly available CRSP library of Dr. Kenneth French.  The only variables available back to that date are price and dividend–but they are all that are needed to do the analysis. Dividends are the original, true equity fundamental.

The benefit to using dividends as a fundamental is that they are concrete and unambiguous.  “What was actually paid out?” is a much easier question to accurately answer than the question “What was actually earned?” or “What is the book actually worth?”  There are no accounting differences across different industries and different periods of history that we have to work through to get to an answer.  The disadvantage to using dividends is that dividend payout ratios have fallen over time.  A greater portion of current corporate cash flow is recycled into the business than in the past, chiefly in the form of share buybacks and acquisitions that show up in increased per share growth.  For this reason, when we approximate growth using dividend growth, we end up underestimating the true growth of recent periods.  But that’s not a problem.  The underestimation will hit all industries, preserving the potential for comparison between them.  If we want a fully accurate picture of the fundamental return, we can get one by mentally bumping up the annualized numbers by around 100 basis points or so.

The first task in the project is to find a way to take changes in valuation out of the return picture.  We do that by building a total return index whose growth is limited to the two fundamental components of total return–growth in fundamentals (in this case, dividends–we use the 5-year smoothed average of monthly trailing-twelve-month (ttm) dividends), and growth from reinvested dividend income.

We start by setting the index at 1.000 in January of 1933.  The smoothed dividends might have grown by 0.5% from January of 1933 to February of 1933, and the dividend yield for the month might have been 0.1%.  If that was the case, we would increase the index from January to February by 0.6%–the sum of the two.  The index entry for February would then be 1.000 * 1.006 = 1.006.  We calculate the index value for each month out to July of 2015 in this way, summing the growth contribution and the reinvested dividend income contribution together, and growing the index by the combined amount.  What we end up with is an index that has only fundamental total return in it–return due to growth and dividend income.  Any contribution that a change in valuation from start to finish might have made will end up removed.  (Of course, contributions from interim changes in valuation, which affect the rate of return at which dividends are reinvested, will not be removed.  Removing them requires making a judgment about “fair value”, so as to reinvest the dividends at that value.  That’s a difficult judgment to make across different industries and time periods when you only have dividend yields to work with as a valuation metric.  So we reinvest at market prices).
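As an illustration of the construction (a sketch, not the actual code used to build the charts), here is a minimal Python version, assuming we already have a monthly series of smoothed dividends and a monthly series of dividend yields for an industry:

```python
from typing import List

def fundamental_return_index(smoothed_dividends: List[float],
                             dividend_yields: List[float]) -> List[float]:
    """Build a total return index containing only the fundamental components of return:
    growth in (smoothed) dividends plus reinvested dividend income.  Any contribution
    from a change in valuation between the start date and the end date drops out."""
    index = [1.000]  # January 1933 starting value
    for t in range(1, len(smoothed_dividends)):
        dividend_growth = smoothed_dividends[t] / smoothed_dividends[t - 1] - 1
        monthly_return = dividend_growth + dividend_yields[t]  # sum of the two components
        index.append(index[-1] * (1 + monthly_return))
    return index

# Worked example matching the text: 0.5% smoothed dividend growth plus a 0.1% monthly
# dividend yield gives a 0.6% increase, taking the index from 1.000 to 1.006.
example = fundamental_return_index([100.0, 100.5], [0.001, 0.001])
print([round(x, 6) for x in example])  # [1.0, 1.006]
```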

Unfortunately, we face a potentially confounding variable in the cyclicality of dividends, a cyclicality that smoothing cannot fully eliminate.  If smoothed dividends were at a cyclical trough in 1933, and are at a cyclical peak now, our chart will show strong fundamental growth.  But that growth will not be the kind of growth we’re looking for, growth indicative of a structurally superior ROI.  It will instead be an artifact of where in the industry’s profit cycle 1933 and 2015 happened to fall.

There’s no good way to remove the influence of this potential confounder.  The best we can do is to make an effort to assess the performance not based on one chosen starting point, but based on many.  So, even though the return for the period is quoted as a single number for a single period, 1933 to 2015, it’s a good idea to visually look at how the index grew between different points inside that range.  How did it grow from 1945 to 1970?  From 1977 to 1990?  From 2000 to 2010?  If the strong performance of the industry in question is the result of a structurally elevated ROI–sustained high profitability in the underlying business–then we should see something resembling consistently strong performance across most or all dates.
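That kind of spot-checking only requires annualizing the growth of the index between two chosen months. A minimal sketch, assuming the index is held as the list of monthly values built in the previous sketch and that months are referenced by their positions in that list:

```python
def annualized_return(index_values, start_month, end_month):
    """Annualized growth rate of the fundamental return index between two month offsets
    (positions in the monthly list built above)."""
    months = end_month - start_month
    total_growth = index_values[end_month] / index_values[start_month]
    return total_growth ** (12.0 / months) - 1

# Hypothetical usage: with January 1933 at position 0, January 1945 is month 144
# and January 1970 is month 444, so the 1945-to-1970 stretch would be:
#   annualized_return(beer_index, 144, 444)
```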

Not to spoil the show, but we’re actually going to see that in industries like liquor and tobacco, which we suspect to be superior businesses with structurally higher ROIs.  In junky industries like steel and mining, however, what we’re going to see is crash and boom, crash and boom.  Periods of strong growth in those industries only seem to emerge from the rubble of large prior losses, leaving long-term shareholders who stick around for both with a subpar net gain.

The following legend clarifies the definition of each industry term.  There are 30 in total.

[Legend: definitions of the 30 industry abbreviations]

To the charts.  The following slideshow ranks each industry by fundamental return, starting with #30, and ending with #1.  All charts and numbers are real, inflation-adjusted to July of 2015.  Note that you can hit pause, and then move from slide to slide at your own pace:

The following table shows the industries and real fundamental annual total returns from 1933 to 2015 together, ranked from low to high:

[Table: the 30 industries and their real fundamental annual total returns, 1933 to 2015, ranked from low to high]

To be clear, the charts and tables tell us which industries performed well from 1933 to 2015. They don’t tell us which industries will perform well from 2015 into the future.  Beer might have been a consistently great business over the last century, and steel might have been a consistently weak business.  But it doesn’t follow that the businesses are going to exhibit the same fundamental performances over the next century–conditions can change in relevant ways.  And even if we believe that the businesses are going to generate the same fundamental performances that they generated in the past, it doesn’t necessarily follow that we should overweight or underweight them.  The relative weighting that we assign them should depend on the extent to which their valuations already reflect the expected performance divergence.


Thoughts on Negative Interest Rates

The big surprise from Thursday’s Fed announcement was not the decision to hold interest rates at zero, which most Fed observers expected, but the revelation that an unidentified FOMC member–probably Narayana Kocherlakota, but possibly another dove–is now advocating the use of negative nominal interest rates as a policy tool.

What follows is a simplified explanation of how a policy of negative interest rates would work.  The central bank would begin by enacting a large-scale program of “quantitative easing” that would entail the creation of new money and the use of that new money to buy assets from the private sector.  The new money would end up on deposit at banks, where it would represent excess cash reserves held physically in vaults or electronically on deposit at the central bank.  The central bank would continue the program until the quantity of excess reserves in the banking system was very high.  Indeed, the higher the quantity of those reserves, the more powerful–or rather, the more punitive–a policy of negative interest rates would end up being.

After saturating the system with excess reserves, the central bank would require individual banks that hold excess reserves to pay interest on them–effectively, a tax.  Individual banks would then have three choices:

(1) Eat the associated expense, i.e., take it as a hit to profit,

(2) Pass the associated expense on to depositors, creating the equivalent of a negative interest rate on customer deposits, or

(3) Increase lending (or purchase assets), so that the excess reserves cease to be “excess”, but instead become “required” by the increased quantity of liabilities that the bank will bear.

This third option is important and confusing, so I’m going to spend some time elaborating on it.  Recall that the issuance of loans and the purchase of assets by banks create new deposits, which are bank liabilities.  Required reserves are calculated as a percentage of (certain types of) those liabilities.  Importantly, required reserves do not incur interest under the policy (at least as the policy is currently being implemented in Europe), and so increases in lending, which increase the quantity of reserves that get classified as “required” as opposed to “excess”, represent a way to avoid the cost, both for banks individually, and for the banking system in aggregate.

The following schematic illustrates with an example:

[Schematic: American Bank’s starting balance sheet]

We have an individual bank–we’ll call it American Bank.  This bank begins with $110 in assets, $50 of which are cash reserves, and $100 in deposit liabilities, all of which are subject to the reserve requirement.  If we assume that the reserve requirement is 10% of deposit liabilities, then the bank will be required to hold $100 * 10% = $10 in reserves. But, in this case, American Bank is holding $50 in reserves–$40 more than it is required to hold.  In a negative interest rate regime, it will have to pay interest, or tax, on that excess.  So if the annual interest rate is -2%, it will have to pay $40 * 0.02 = $0.80 per year (8% of its $10 in capital, so no small amount!).  The payment will go to the central bank, or to the treasury, or whoever.

Now, let’s suppose that American Bank issues $400 in a new loan.  The way it would actually make this loan would be to simply create (from essentially nothing) a new deposit account for the borrower, with a $400 balance in it, that the borrower can draw from on demand.  Let’s suppose that the borrower keeps the $400 in that same account, i.e., doesn’t withdraw it as physical cash or move it to another bank by transferring it or writing a check on it that gets cashed elsewhere.  It follows that the bank will not have to come up with any actual money to fund the loan.  The money that is funding the loan is the money that the loan created, which has stayed inside the bank.  Only when that money leaves the bank does the bank have to “come up with it.”

After the loan is made, American Bank will have $510 in assets (the $110 in previous assets plus the $400 new loan that the borrower owes it), and $500 in liabilities (the $100 in previous deposits plus the new $400 that the borrower is holding on deposit with it).  The required reserves will be $500 * 10% = $50, which is exactly the amount of cash that it has on hand.  So its excess reserves will be $0 and it will not have to pay any interest or tax. Problem solved.
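To make the reserve arithmetic easy to follow, here is a minimal sketch in Python of the accounting in this example, assuming the 10% reserve requirement and the -2% rate on excess reserves used above (illustrative numbers only, not a model of any actual regime):

```python
def reserve_position(cash_reserves, deposit_liabilities,
                     reserve_requirement=0.10, rate_on_excess=-0.02):
    """Required reserves, excess reserves, and the annual charge paid on the excess
    under a negative-rate policy (simplified, per the example in the text)."""
    required = reserve_requirement * deposit_liabilities
    excess = cash_reserves - required
    annual_charge = max(excess, 0) * -rate_on_excess  # the "tax" on excess reserves
    return required, excess, annual_charge

# American Bank before the loan: $50 of reserves against $100 of deposits.
required, excess, charge = reserve_position(50, 100)
print(f"Before the loan: required=${required:.0f}, excess=${excess:.0f}, annual charge=${charge:.2f}")

# The $400 loan creates a matching $400 deposit, so deposits rise to $500 while
# cash reserves stay at $50: the formerly "excess" reserves become "required".
required, excess, charge = reserve_position(50, 500)
print(f"After the loan:  required=${required:.0f}, excess=${excess:.0f}, annual charge=${charge:.2f}")
```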

Now, suppose that the borrower decides to move the $400 that it has on deposit at American Bank to some other bank. But American Bank only has $50 in actual cash reserves to move.  And those reserves are needed to meet the $50 in required reserves, so they can’t be moved.  How, then, will the bank satisfy the borrower’s demand to send $400 to another bank?  Easy–it will simply go into the Fed Funds market, borrow $400 from a bank that has excess funds to lend, and transfer the funds to the bank that the borrower wants them transferred to.  Its $400 liability to the borrower will then disappear, to be replaced by a $400 liability to the bank from which it borrowed.

Of course, nothing will actually physically move in this process.  The transfer will occur electronically, at the Fed, through the adjustment of the deposit balances of the involved banks–essentially, a spreadsheet operation.  The bank that is lending $400 to American Bank will see its deposit balance at the Fed fall by $400, American Bank will see no change to its deposit balance, and the bank that is receiving the $400, which is the bank that the borrower is moving the money to, will see its deposit balance increase by $400.

But what if other banks lack sufficient funds, over and above the amount that they themselves have to hold to meet reserve requirements, to lend to American Bank?  That won’t happen.  The Fed uses asset purchases and asset sales to manipulate the quantity of funds in the banking system, over and above those that are needed to meet reserve requirements, so that there are always sufficient excess funds available to be lent, at the Fed Funds rate, the Fed’s target short-term rate.  When the Fed wants that rate to be higher, it uses asset sales, which take funds (money) out of the system and put assets (e.g., bonds) in, to make the supply of excess funds available to be lent tighter; if it wants that rate to be lower, it uses asset purchases, which take assets (e.g., bonds) out of the system and put new funds (money) in, to make the supply of excess funds available to be lent more plentiful.  Importantly, the targeted variable in the Fed Funds market is not the supply of excess funds available to be lent, but the price.  The Fed, through its operations, effectively guarantees that there will always be a sufficient supply of excess funds available to be lent at the target price–though it may be an expensive price, if the Fed wants less lending to take place.

But what if other banks refuse to lend to American Bank, even though there are excess funds available to be lent?  Again, not a problem.  If the bank can prove that it is worthy of a loan, then it can borrow directly from the Fed, through the discount window, at a price slightly higher than the price targeted in the Fed Funds market.

To summarize, if the borrower moves his deposit to another bank, the picture changes to look like this:

[Schematic: American Bank’s balance sheet after the borrower moves the $400 deposit to another bank]

As you can see, nothing really changes when the borrower moves the money except the composition of the bank’s deposit liabilities.  Previously, they were liabilities to an individual; now they are liabilities to other banks, or to the Fed.  They are still subject to the reserve requirement, and so the bank’s reserve excess remains zero.

It’s important to understand that when banks implement this third option, the cash reserves that were “excess” continue to exist as reserves–as physical cash held in storage or as balances held on deposit at the central bank.  Only the central bank can (legally) create or destroy them, which it does through its open market operations–its purchase and sale of assets from the private sector.  Because the reserves still exist, they still have to be held by some person or entity.  Unless customers extract them and hold them physically as metal coins and paper bills, the banking system in aggregate–some bank somewhere–continues to hold them.

The point, however, is that they no longer get classified as excess reserves.  They become required reserves, required by the larger quantity of deposit liabilities that the banking system ends up bearing.  Because required reserves do not incur interest expense under the policy, an increase in aggregate bank lending, which will increase the quantity of reserves that get classified as “required” as opposed to “excess”, represents a potential way for banks to avoid the cost–both individually, and in aggregate.

Returning to the question of how banks would respond, there’s obviously a limit to the cost increase that they will be willing to absorb.  If the negative rate is “negative” enough, they will try to pass that increase on to their customers, charging interest to those who hold deposits with them and who cause them to have unneeded excess reserves in the first place.  Proponents of the policy believe that this outcome would stimulate the economy by increasing the velocity of money.  Bank deposits would become an item that everyone wants to get rid of, but that someone has to hold.  They would get tossed around like a hot potato, moving from individual to individual, in the form of increased spending, trading and investing.

Of course, spending, trading and investing aren’t the only ways to get rid of a bank deposit. A depositor can take physical delivery of the money, and put it into her own storage, outside of the banking system–a piggy bank, a mattress, a safe, wherever.  The assumption is that this type of maneuver would be inconvenient and therefore rarely used.  If the assumption were to be proven wrong, the next step would be to eliminate physical money altogether, so that all cash ends up trapped inside the banking system, with the owners forced to pay interest on it for as long as they choose to continue to hold it.

With respect to the third option, which is to increase lending, banks aren’t always able to increase their lending in the normal ways–they need credible borrowing demand from borrowers.  That demand isn’t guaranteed to be there.  To generate it, however, they can offer to pay borrowers to borrow–funding the payments with the income that they generate from charging their depositors interest.  The negative interest rate regime will then come full circle, in a perverse reversal of the normal banking arrangement–instead of borrowers paying depositors to borrow their money, with the banking system acting as an intermediary, depositors will be paying borrowers to borrow their money, with the banking system again acting as an intermediary.

In assessing the third option, we can’t forget the impact of regulatory capital ratios, which can quickly become limiting.  In our example, American Bank had $50 in cash reserves, which carry a risk-weighting of 0%, $10 in bonds, which we assume are government bonds that also carry a risk-weighting of 0%, and $50 in retail loans, which carry a risk-weighting of 100%.  Simplistically, the bank has $10 in capital, so the bank’s capital ratio would be $10 / (0% * $50 + 0% * $10 + 100% * $50) = $10 / $50 = 20%, well above the 8% Basel requirement.  After the new $400 loan, however, the bank’s capital ratio would fall to $10 / (0% * $50 + 0% * $10 + 100% * $450) = $10 / $450 ≈ 2%, which is well below the 8% Basel requirement.  So American Bank would not actually be able to do what I just proposed.
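Here is the same capital ratio arithmetic as a minimal Python sketch, using the simplified risk weights assumed in the example (0% for reserves and government bonds, 100% for retail loans) and the 8% Basel threshold mentioned above:

```python
def capital_ratio(capital, assets_and_risk_weights):
    """Simplified regulatory capital ratio: capital divided by risk-weighted assets.
    `assets_and_risk_weights` is a list of (asset_value, risk_weight) pairs."""
    risk_weighted_assets = sum(value * weight for value, weight in assets_and_risk_weights)
    return capital / risk_weighted_assets

# Before the new loan: $50 reserves (0%), $10 government bonds (0%), $50 retail loans (100%).
before = capital_ratio(10, [(50, 0.0), (10, 0.0), (50, 1.0)])
print(f"Before the loan: {before:.1%}")   # 20.0%, comfortably above the 8% requirement

# After the $400 loan, retail loans rise to $450 and the ratio collapses.
after = capital_ratio(10, [(50, 0.0), (10, 0.0), (450, 1.0)])
print(f"After the loan:  {after:.1%}")    # 2.2%, far below the 8% requirement
```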

Given limits to regulatory capital ratios, the only way for banks to use the third option to substantially increase required reserves and reduce interest expense would be to buy assets that don’t carry a risk-weighting–banks would either have to do that, or raise capital, which no bank is going to want to do.  It follows that the types of interest rates that would see the biggest relative drop under a regime of substantially negative interest rates would not be the risk-bearing interest rates that real economic participants pay, but the risk-free interest rates that governments and other risk-free borrowers pay.  Those securities would get gobbled up by the banking system, pushed to rates as negative as the negative rates on excess reserves.

Proponents of the policy assume that if banks choose to respond by increasing their lending to the private sector, the increase will necessarily be stimulative to economic activity.  That may be true, but not necessarily.  It’s possible that banks could issue zero interest rate loans to highly creditworthy private sector borrowers who don’t want or need the money, and who have no plans to spend or invest it, but who agree to take and hold the loans in exchange for other perks–for example, a waiving of interest and fees on other deposits being held.  Such loans–even though they wouldn’t be doing anything economically–would increase the bank’s deposit liabilities and required reserves, and therefore decrease the portion of its reserves that get classified as “excess”, eliminating the associated interest expense.  Of course, these loans, even though safe, would carry a non-zero regulatory risk-weighting, and so the ability of the banking system to engage in them would be limited by regulatory capital.

Earlier, we noted that it’s important for the central bank to inject excess reserves into the system through quantitative easing prior to implementing a policy of negative interest rates.  The reason is obvious.  On the assumption that lending stays constant, every excess reserve in the system is going to have to be held, and paid interest on, by some bank, and ultimately, by some depositor, the person who actually owns the money, and who is holding it on deposit at the bank.  Quantitative easing increases the quantity of bank deposits and excess reserves in the system.  It therefore increases the number of assets in the system that are directly subject to negative rates, and that incur the obligation to pay those rates.

To explain with an example, if the private sector’s asset portfolio consists of $10 in cash and $1000 in fixed income assets, and the Fed imposes a negative interest rate, that rate will only directly hit $10.  But if the Fed goes in and buys 100% of the fixed income assets, swapping them for newly issued money, such that the private sector’s asset portfolio shifts to $1010 of cash and $0 in fixed income assets, and if it then imposes a negative interest rate, that rate will directly hit all $1010–the private sector’s entire asset portfolio.  It will cause that much more pain, and will therefore have that much more of an effect.

Of course, this effect would come at a cost.  Psychologically, there’s a big difference between not making money, as inflation slowly erodes its value, and outright losing it–particularly meaningful amounts of it.  People tend to suffer much more from the latter, and are therefore likely to go to far greater lengths to avoid it.  The result of the policy, then, would not be an increase in what economies at the zero-lower-bound need–well-planned, productive, useful, job-creating investment–but rather panicky, rushed, impulsive financial speculation that leads to asset bubbles and the misallocation of capital, with detrimental long-term consequences for both output and well-being.

Worse, the policy is likely to be deflationary, not inflationary.  Like any tax, it destroys financial wealth–the financial wealth of the people that have to pay it.  That wealth is taken out of the system.  Granted, the wealth can be reintroduced into the system if the government that receives it resolves to take it and spend it.  But in the instances of unconventional monetary policy that have played out so far–quantitative easing globally and negative interest rates in Europe–that hasn’t happened.  Governments have pocketed the income from these programs, sending it into the financial “black hole” of deficit reduction.

Even if the wealth is reinjected into the system in the form of tax cuts or increased government spending elsewhere, we have to consider the behavioral effects on those who rely, at least in part, on returns on accumulated savings to fund their expenditures.  Those individuals–typically older people–represent a growing percentage of western society.  Under conventional policy, they simply have to deal with low interest rates on their savings–tough, but manageable.  Requiring them to deal with negative interest rates–the confiscation of a certain percentage of their savings with each passing year–would be a significant paradigm shift.  Their confidence in their ability to fund their futures–their future spending–would likely fall.  They would therefore spend less, not more, exacerbating the economic weakness.  Granted, the threat of punishment for holding risk-free assets might coax them into speculating in the risky financial bubbles that will have formed–but then again, it might not.  If it does, they will suffer on the other end.

Hopefully at this point, the reader intuitively recognizes that imposing meaningfully negative interest rates on the population is a truly terrible idea.  If we’re only talking about a few basis points–a sort of “token” tiny negative rate put in place for optics, as has been done in Europe–fine, people will grow accustomed to it and eventually ignore it.  But a serious use of negative rates, one that involves levels meaningfully below zero–e.g., -2%, -3%, -4%, and so on–would be awful for the economy, and for people more generally.

The problem of how to stimulate a demand-deficient economy is fundamentally a behavioral problem.  It needs to be evaluated from a behavioral perspective.  We have to ask ourselves, what specific behavior do we want to encourage? We know the answer. We want corporations and the entrepreneurial class to invest in the production of useful, wanted things.  Their investment creates jobs, which produce incomes for working people, incomes that can then be used to purchase those useful, wanted things, completing a virtuous cycle in which everyone benefits and prospers.

The question is, if corporations and the entrepreneurial class aren’t doing enough of that, how do we get them to do more of it?  The answer, which I’m going to elaborate on in the next piece, is not to punish them with a highly repressive monetary policy–a policy that goes so far as to confiscate their money unless they hand it off to someone else.  Rather, the answer is to put the economy in a condition that causes them to become confident that if they invest in the production of useful, wanted things, they will receive the due reward: profit.  In an economy like ours–a more-or-less structurally sound economy that happens to suffer from deficient aggregate demand associated with legacy private sector debt and wealth inequality issues–the way to do that is with fiscal policy.


The World’s Best Investment For the Next 12 Months

Suppose that you’ve been given $1,000,000 of cash in an IRA to manage.  Your task is to invest it so as to generate the best possible risk-adjusted return over the next 12 months. You don’t have to invest it immediately–you can hold it and wait for a better entry point. And you don’t have to commit to what you buy–you can trade.  However, you can only have the money invested in one asset class at a time–all of it.  Diversification across asset classes is not allowed.

(Note: obviously, in real life, you would never put an entire portfolio in one asset class–but I want to use the example to zero in on the most attractive asset class in the market right now, which we would ideally be comfortable putting all of our eggs in.)

To be clear, your client is a high-earning corporate executive in her late 40s, who doesn’t plan to draw from the money for at least another decade.  But 12 months is the timeframe on which she’s going to evaluate your performance.  Importantly, her assessment isn’t only going to be about the return that you earn for her, but also about the psychological stress that you put her through with your decisions.  She can handle paper losses, but only if she can be reasonably sure that they will eventually be recovered.  She will know if you’re winging it, investing based on poorly-researched ideas that you don’t have a basis for being confident in.  So you need to find a solution that’s genuinely compelling, with the uncertainties honestly considered and the downside risks appropriately addressed, that she can feel comfortable with.

What asset class would you choose?  Would you buy now, or would you wait?  In what follows, I’m going to carefully weigh the options.  I’m then going to propose a solution–what I consider to be the world’s best investment for the next 12 months.

The Fixed Income Space

The table below shows the relevant portions of the current fixed income menu.  As you can see, the meal comes light on yield, and heavy on duration risk:

[Table: the current fixed income menu–yields across maturities and sectors, with the estimated loss on a +100 bps rise in rates in the last column]

To eliminate the possibility of a loss over the time horizon, you could invest the money in a 1 year treasury yielding 36 bps.  But what would the point be?  The income earned would amount to a rounding error that wouldn’t even cover your fees.  If short rates were to back up more quickly than expected, and you wanted to sell, you could lose more than your upside, even after factoring in the income.

To improve the return, you could invest in longer-dated treasuries or investment-grade corporate bonds.  But it’s difficult to make a compelling case for those choices right now, given the low returns that they’re priced to offer.  You may be cautious on the global economy, but you can’t dismiss the possibility that the U.S. expansion will pick up steam over the next 12 months, led by continued employment strength and the release of pent-up demand in the housing market. The current complacency in fixed income, characterized by a growing confidence that inflation is dead and that long-term interest rates will stay anchored at low levels, regardless of what the Fed does, could come apart.  If that were to happen, treasuries and investment-grade corporates of all maturities would produce negative returns–with the long-end inflicting substantial losses (see the last column, the expected loss on a +100 bps change in rates). Does the potential return that you might earn in a neutral or bearish economic scenario–say, a 2% coupon plus a small gain from roll and possibly from falling rates–adequately compensate for that risk?  No.

To further boost the return, you could take on additional credit risk, venturing into the high-yield corporate bond space.  The yields in that space are attractive, but the credit losses are non-trivial.  Default rates have been low over the present cycle, but if they revert to historical norms (a reasonable assumption, given the present trend), the returns will have been no more attractive than the minimal returns on offer in treasuries.

[Chart: high-yield corporate bond yields and default rates]
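To see the arithmetic behind that claim, here’s a back-of-the-envelope sketch.  The yield, default rate, and recovery rate are placeholder assumptions, not figures taken from the chart above:

```python
# Rough expected return on high-yield credit, net of credit losses.
# All inputs are illustrative assumptions.

def net_hy_return(hy_yield, default_rate, recovery=0.40):
    """Expected annual return net of credit losses (very rough)."""
    return hy_yield - default_rate * (1.0 - recovery)

# A ~7% yield looks generous while defaults run at ~2%, but a reversion to
# historical-norm or cycle-turn default rates eats most of the pickup.
for dr in (0.02, 0.045, 0.08):
    print(f"default rate {dr:.1%}: net return {net_hy_return(0.07, dr):.2%}")
# -> 5.80%, 4.30%, 2.20%
```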

Yields on speculative-grade corporate debt have been pushed to record lows in the current cycle, even as underwriting standards have deteriorated.  They’ve since risen in response to distress in the energy sector, but they haven’t risen by all that much.  And the rise has been entirely justified, given the increased likelihood of energy-related defaults.  If those defaults play out in force, the impact will be felt not only in the energy sector, but across the entire credit market, as conditions tighten.

The current credit cycle is growing increasingly mature.  We need to be thinking about it in terms of the losses that we might incur when it turns.  Thinking about corporate high-yield in that way, the space is not attractive right now.  A better opportunity, it seems, would be to invest in credit risk tied to the U.S. household sector, the lone bright spot of the global economy. Composite U.S. consumer default rates, which include defaults on first and second mortgages, are at their lowest levels in over 10 years:

[Chart: composite U.S. consumer default rates, including defaults on first and second mortgages]

What makes the U.S. household sector especially attractive is that it hasn’t developed any excesses that would be likely to produce a deterioration in credit conditions going forward. The last several years have been a process of clearing out the excesses of the prior decade. What has emerged is an increasingly healthy market, with substantial pent-up demand to act as a tailwind.

In 2005 and 2006, default rates were low, but the strength was an illusion brought about by the bubble in the prices of homes, the assets backing the mortgages.  When the bubble finally burst, the credit support of the inflated collateral was removed, and default rates jumped.  Right now, there’s no bubble in the housing market.  Homes across the country are fairly priced and have room for further appreciation.  The observed credit strength is therefore likely to be significantly more durable and reliable.

Unfortunately, right now, it’s hard to translate U.S. household strength into attractive fixed income investment opportunities.  Agency residential mortgage backed securities (RMBS) do not offer the desired credit risk exposure, and are not priced any more attractively than treasuries.  Non-agency RMBS do offer the desired credit risk exposure, and probably represent the most attractive available credit opportunity.  But the securities have had a huge run over the past few years, and are no longer cheap.

[Chart: non-agency RMBS spreads]

If the credit cycle soon turns–as it did the last time default rates were this low–non-agency RMBS are unlikely to perform well.  In fact, the simple fear of a coming turn in the credit cycle could be enough to meaningfully attenuate the near-term returns, even if that fear were to be misguided.

One enticing possibility would be to invest in the mortgage REIT (m-REIT) space, which has been absolutely demolished over the past year.  But it’s hard to invest confidently in that space, given the complicated, black-box natures of the various strategies used, and the extreme levels of leverage implemented.  Can the m-REIT space be expected to perform well as the Fed tightens and the yield curve flattens–potentially all the way to inversion? If not, then now may not be a good time to invest in it, at least not on a one year performance horizon.

A more benign possibility would be to invest in leveraged closed-end bond funds.  But like mortgage REITs, all that these funds really offer is the opportunity to buy fixed income assets at discounts to their net asset values.  The discounts are currently large, but they come at a cost–the prospect that they might grow larger.  If interest rates rise by more than the small amount that the market expects over the next 12 months, or if the credit cycle turns, the discounts probably will grow larger, even as the net asset values drop.

Outside of the U.S., you could invest in dollar-denominated emerging market (EM) debt. But, with the exception of the well-known problem countries–think Venezuela, Argentina, and Ecuador, for example–the yields on that class of debt are not appreciably different from the yields on investment-grade U.S. corporates.  The “aggregate” yield of an index or fund of emerging market bonds may appear attractive, but the attractiveness is driven in large part by the contributions from those countries, which carry meaningful default risk, particularly in the current strained, post-EM-credit-bubble environment of a strengthening dollar, falling commodity prices, and weakening global growth.

Instead of dollar-denominated EM debt, you could invest in local currency EM debt.  But that would require you to step in front of a powerful uptrend in the dollar, an uptrend that probably has more room to run.  Real exchange rates between Emerging Markets and the United States appreciated dramatically from 2003 to 2011.  What fundamental or technical reason is there, at present, to believe that the subsequent unwind, which began 4 years ago, is over?  None.  Emerging markets are the trouble spots of the global economy; the United States is the lone bright spot.  There’s presently no sign that the divergence is about to shift–in fact, it appears set to increase further.  The operating assumption, then, should be for continued dollar appreciation relative to emerging market currencies.  Local yields may be high in real terms, providing a buffer of protection against that appreciation, but as the experience of the last year has demonstrated, currency moves can quickly and easily wipe yields out, leaving dollar-based investors with significant losses.

Mortgage REITs and emerging market debt represent areas where comfort levels have to be taken into consideration.  Will your client be able to tough out an investment in a basket of risky emerging market countries, as headlines forewarn of crisis, and as losses accumulate?  Will she be able to stick with a collection of agency m-REITs levered 7 to 1, that sport frankenstein-like double-digit yields, as they fall in price?  Will she have a basis for remaining confident that the losses she is incurring will be recovered in due course, that you haven’t missed something crucial in your assessment of the risks?  No.  So even though there may be a decent long-term return to be earned in these assets relative to plain-vanilla fixed income alternatives, investing in them is unlikely to work well in the present scenario, all things considered.

The Equity Space

The available fixed income opportunities aren’t particularly attractive, so we turn to the equity space.  Investors in the equity space are currently wrestling with a number of worries:

China.  As China’s credit and investment excesses unwind–a process that is already in motion–will the global economy be able to avert recession?  If not, what will the implications for earnings be, particularly the earnings of multinationals?  Given recent developments in China–a crashing stock market, abysmal economic data, unexpected currency devaluations–have policymakers lost control?  Is China on the verge of a Lehman-like “moment of truth”, a point where conditions in its deeply-flawed, heavily-mismanaged economy finally break, unleashing a crisis?

Falling oil prices.  Oil prices were supposed to have recovered by now.  Yet they continue to bounce around the lows.  If they stay where they are for the long-term, or worse, if they go lower, how severe will the losses for producers and lenders be?  Energy investment has been a significant source of economic stimulus in the current expansion.  As it gets scaled back, what will the effect on employment and growth be? What is going to step in to take the place of that investment, to keep the expansion going?  Could the U.S. economy fall into a mild recession?

The Fed.  For the duration of the current recovery, the Fed has provided consistent support for equity markets.  Every time conditions have deteriorated, the Fed has come out of the woodwork, offering up both words and deeds designed to restore risk appetite.  But that trend changed last fall, when QE3 was drawn to a final close.  The equity market hasn’t been the same since.

The Fed’s ultimate concern is the job market, not the stock market.  With the job market showing significant strength over the past year, a strength that wasn’t present in earlier phases of the recovery, the Fed has had more room to step back and let markets fend for themselves.  That’s the approach that the Fed seems to be taking amid the current turmoil.  Will the market be able to deal with it?  Are prices going to have to reset lower?

The Fed seems intent on moving forward with a tightening this year, even as core inflation measures remain well below the 2% target.  What is the Fed thinking?  Why does it want to tighten so badly? Has it grown complacent about its mandate? If the Fed goes through with a rate hike in September or December, the risk is that the tightening will occur just as other headwinds in the global economy strengthen.  Does the Fed appreciate that risk?

Markets don’t usually perform well when central banks start tightening.  That would seem to be especially true for a market that is already six years into gains, up 200% from the lows.  If the Fed is serious about tightening here, even against a lukewarm backdrop, what upside can U.S. equity market participants reasonably expect?  After 7 years of acclimatization to a zero interest rate policy, a policy that has squeezed asset valuations to elevated levels, a progressive departure from that policy is going to represent a powerful headwind to further price appreciation.

What about the unforeseen effects?  As Warren Buffett famously says, “only when the tide goes out do you discover who’s been swimming naked.”  When the Fed finally comes off of zero, the tide will be going out on a substantial amount of ZIRP-related excess–not only domestically, but internationally.  In the summer of 2013, we saw what a mild unwind of such excess can look like.  Are there reasons to be confident that a similar unwind–or something worse–won’t play out again?  The emerging-market concern is salient here, given the recent boom in dollar-denominated EM corporate debt issuance.

Earnings.  Earnings for S&P 500 corporations continue to surprise to the downside.  Year-end 2015 estimates for S&P operating earnings now sit at $111, when they were $136 exactly one year ago today.  That’s nearly a 20% drop–but the S&P has only fallen 10%.  Is there any reason that it shouldn’t fall another 10%, to catch up?

You can blame the strengthening dollar.  But with the Fed tightening, the BOJ and ECB aggressively easing, and emerging markets endlessly deteriorating, is there any reason to think that the dollar’s uptrend is over?  You can blame lost oil revenues in the energy sector.  But recall that a boost in consumer spending, prompted by falling gas prices, was supposed to offset those losses.  That hasn’t happened. Why not?

Employment numbers have been strong, but the strength hasn’t translated into commensurate growth in output and revenues.  It’s instead been absorbed into falling productivity.  What are the implications of weak productivity for profitability? Are profit margin bears right?  Are we in the early innings of an unwind of the profit margin boom of the last 10 years?

Adding to these worries, valuations, at least in U.S. equities, are unattractive. The current P/E ratio on the S&P 500–17.3 times trailing operating earnings and 19.4 times trailing GAAP earnings–is substantially elevated relative to both recent and long-term averages. That elevation is currently layered onto elevated profit margins–a dangerous combination, if they should both unwind together.  In previous pieces, I’ve shared my reasons for expecting valuations and profit margins to remain elevated.  But I certainly can’t make any guarantees on that front, nor would I deny that they have room to fall from current levels, even if they remain historically high.

You might think that valuations in Japanese or European equities are more attractive than equities in the U.S.  But after the melt-ups those markets have recently seen, that’s no longer true–particularly if you adjust for differences in sector exposures.  At present, the only segment of the international market that sports a genuinely attractive valuation is the emerging market segment–Brazil, Russia, Turkey, Greece, etc.  If those countries are truly cheap (you never know until after the fact), there are excellent reasons for them to be cheap.  They represent the epicenter of global risk.  Their economies are a mess.

After the massive credit boom of the last decade, emerging markets appear due for a long period of adjustment, a period that is likely to be characterized by slow and challenged growth.  But growth is the compensation that emerging markets are supposed to offer in exchange for weak governance, economic mismanagement, and political turmoil.  Without it, what’s the point?  What reason do investors have to be involved with the asset class? Right now, the biggest risk in investing in emerging markets is the currency risk, which can’t be cheaply hedged out.  To bet on emerging market equities, you have to bet against the dollar.  Given the backdrop, that doesn’t seem like a smart bet to make.

Holding Cash and Waiting

With the present cycle getting long in the tooth, what kind of upside is left?  Is that upside attractive, given the risks?  Not in my opinion.  Nothing in the conventional investment universe is priced attractively relative to its risks–nothing in the fixed income space, nothing in the equity space.

You might therefore think that the right answer is to hold cash and wait for prices to come down.  But how do you know they’re going to come down?  And if they do come down, how will you know when to enter?  It seems that in markets, the people who are inclined to wait tend always to be waiting.

Given recent market turmoil, bears have become increasingly optimistic that they’re going to get the buying opportunity that they’ve been hoping for.  I disagree.  The worries that have provoked the market’s recent fall are legitimate and will likely represent headwinds to meaningful equity upside going forward, but the risk of deep losses–a grand buying opportunity–seems substantially overblown to me.  Let’s look more closely at the worries:

China.  With respect to China, none of the concerns are new.  People have been raising them for over a decade.  Ultimately, imbalances and excesses in an economy don’t have to mean a “crisis” or a “crash.”  They can simply mean a long period of adjustment, with associated economic underperformance.  That’s the path that China seems to be embarked on.

Ask yourself: what has really changed in China in the last few months, to fuel the present consternation?  The answer: two dramatic headline events have occurred–(1) a crash in the stock market, and (2) a currency devaluation.  These events have rekindled post-crisis disaster myopia, providing investors with a much-needed excuse to sell a market that had long been overdue for a correction.  As time passes, and as disaster fails to ensue, investors will acclimatize to the travails of China’s adjustment.  The fear of the unknown that recent headline events have provoked will blow over.

Many investors are interpreting the crash in the Shanghai composite as an omen of things to come.  But that’s not a valid inference.  Shanghai is a highly unsophisticated market–a casino of sorts, traded primarily by retail Chinese investors.  In terms of the implications for the larger global economy, its movements might as well be meaningless.  The fact that it would eventually suffer a severe correction should come as a surprise to no one–it had rallied over 100% in only a few months.

Concerns have been raised that the crash will create knock-on effects in the Chinese economy, similar to what happened in the U.S. in 1929.  But there’s a crucial difference to consider.  China is an externally-managed economy, not a free market.  If the Chinese economy slows excessively, the government will simply stimulate.  Unlike in the U.S., there are no political or legal obstacles to its ability to do that.  The only potential obstacle is inflation–and inflation in China is well-contained.

The fact that China is an externally-managed economy, rather than a free-market, dramatically reduces the risk of a “Lehman-like” event.  Lehman happened because the Fed couldn’t legally intervene, or at least didn’t think that it could.  The Chinese government can intervene–and will, without hesitation.  To accelerate the adjustment, it may allow certain parts of its economy to suffer.  But it isn’t going to allow a crisis to develop.

China’s decision to devalue the yuan has stimulated fears of a 1998-style crisis. Those fears are superficial, unsupported by the facts.  Currencies are not pegged to the dollar today in the way that they were in 1998.  Sovereign reliance on dollar funding is not as prevalent.  The countries that are pegged to the dollar, or that have borrowed in dollars, have much larger foreign currency reserves from which to draw.  And so the risk of cascading devaluations and defaults is nowhere near as significant.

Investors forget that the yuan’s real exchange rate to the dollar, and especially to the euro and the yen, has appreciated significantly over the last several years.  That appreciation has hit competitiveness.  With the Chinese economy slowing, the devaluation makes perfect economic sense–it will provide a needed boost to competitiveness, and a needed stimulus to the export sector.  It would be happening on its own if the currency weren’t pegged.

Oil.  The U.S. economy saw oil price crashes in 1986 and 1998.  As now, corporate earnings took a hit.  But the stimulus of lower gas prices eventually kicked in, and earnings recovered.  There were defaults and bankruptcies, and certain energy-dependent regions of the country slowed, but there was no credit contagion, no larger recession.  What reason is there to expect a different outcome now?

In terms of the question of what will pick up the slack for lost energy investment, the answer is clearly housing.  After a long period of post-crisis under-investment, that sector of the economy has ample room to run.

[Chart: U.S. private residential fixed investment]

The Fed.  Given the Fed’s skillful performance from late 2008 onward, we need to give it the benefit of the doubt.  It’s not going to raise interest rates unless it’s absolutely confident that the economy can handle it.  And, ultimately, if the economy can handle it, the market will be able to handle it.  As with all changes, the market will need to acclimatize to the Fed’s new policy stance.  That may limit the upside, or even contribute to further downside.  But there’s no reason to expect it to drive significant market losses, or to set a new recession in motion.

I personally think that the Fed will remain on hold through September, and maybe even through December.  Two years ago this month, the Fed held off on the QE taper plans that it had been signalling to markets.  The argument for holding off now, amid the current global market turmoil, seems much stronger than it was then.

It’s one thing to talk about doing something, it’s an entirely different thing to actually go and do it when the time comes.  The decision as to when to finally pull the trigger on the first hike, in the aftermath of this long and painful recession, is easily the most important central-banking decision the members of this FOMC will ever make. They’re going to want the conditions to be right.  If there are nagging uncertainties, the bias is going to be towards holding off.

The Fed has no compelling reason to tighten right now.  Inflation risk is minimal.  If it were to pick up, the Fed could quickly and easily quash it–and the Fed knows that.  But there are strong reasons not to tighten, to wait.  At a minimum, waiting would allow the Fed to get a better read on inflation–which is way below target.  It would also allow the Fed to get a better read on the impact that recent global weakness is having on the U.S. expansion.  You can rest assured that as the moment of truth arrives, the doves on the FOMC–people like Charles Evans–are going to make that argument.  And it’s going to be very compelling.

Earnings.  The decline in earnings represents a known known.  Crucially, the driver is not profit margin mean-reversion–profit margins ex-energy remain at record highs.  The drivers are losses and asset writedowns in the energy sector, and dollar strength that has reduced the dollar value of multinationals’ foreign revenues.  Those factors, if they remain in place, are likely to continue to push back on price appreciation.  But they’re unlikely to fuel a larger market downturn.

Picture the following scenario: the obligatory 10% correction is out of the way, we get past the historical danger periods of September and early October, the Fed decides to remain on hold, oil stabilizes, no disaster ensues for China, the global economy continues to muddle through, and the U.S. expansion, led by housing, continues to show strength. Can you see the market rallying back to the highs?  I certainly can.

But can you see it going much farther?  To me, that gets more difficult.  By then, the market will be wrestling with the prospect of additional Fed tightening; the dollar will be strengthening further, creating additional headwinds for earnings; and valuations will have become more stretched as the cycle gets longer and longer in the tooth.  My sense is that it’s going to be hard for investors to find the confidence to keep recklessly tacking on new highs against that backdrop, particularly after the recent turmoil.  The market needs to spend a few years going nowhere.

Now, picture another scenario:  global headlines continue to deteriorate, and we fall another 15%, to around 1650.  Do you not think there will be a substantial number of sidelined investors eager to get in at that price, eager for a second chance to participate in the potential 30% rally back to the highs?  I certainly do.  A move to 1650 would represent almost a 25% correction.  That would be a deeper correction than the worst correction that the market suffered during the pre-Lehman portion of the 2008 recession, nine months in.  If market participants retain any kind of confidence in the outlook for the U.S. economy, they aren’t going to let the market fall to that level–at least not for very long.

What we have, then, is a market with limited upside–a market that could go back to the highs over the next 12 months, but that seems unlikely to be able to go much farther–and limited downside–a market that seems unlikely to completely unravel, at least by more than, say, another 15%.

Solution: Deep In the Money Covered Calls

The best available solution to the current investment conundrum, in my view, is to sell long-dated out of the money put options on the S&P 500.  Selling out of the money put options is a structural source of alpha in markets–investors are behaviorally inclined to pay more for them than they’re actually worth.  Obviously, selling them isn’t always a smart strategy–the price and the backdrop matter.  But right now, with the volatility index spiking over exaggerated fears, you’ll be hard pressed to find a better investment.

The following table shows the prices of September 2016 $SPY puts at various strikes.  The returns quoted are the returns that you will earn if the price at expiration is above the strike price.

[Table: September 2016 $SPY put prices at various strikes, with the returns earned if the index finishes above the strike]

To sell a put option in an IRA, you have to set aside a full cash reserve to buy the equity, in case the option gets exercised.  In theory, you are free to earn interest on that cash.  But you can only earn what your broker pays.  Unfortunately, at present, your broker is probably paying something very close to zero.

It’s possible to construct the functional equivalent of a short put position at a given strike price by selling covered calls at that strike price–going long the equity at the market price, and then selling calls at the strike price.  Deep in the money covered calls, which mirror out of the money put options, provide a return from premium and a return from the dividends that accrue to the seller over the time horizon.  By put-call parity, the difference between that return and the premium offered by the corresponding out of the money put option should equal the interest rate earned on the cash, plus a premium to cover the risk of early exercise of the call option, which would deprive the seller of subsequent dividends.
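Here’s a minimal sketch of the parity relationship being leaned on here.  The rate and dividend inputs are assumptions, and the American-exercise wrinkle just described is ignored; the spot and call premium are the figures used in the trade example later in the piece:

```python
import math

# European put-call parity with dividends:  C - P = S - PV(K) - PV(dividends)
# A covered call (long stock, short call at strike K) therefore has the same
# expiration payoff as a cash-secured short put struck at K; differences in
# quoted returns come down to interest on the cash and early-exercise risk.

S = 192.59                            # spot price of SPY (from the example below)
K = 165.0                             # strike
r = 0.005                             # assumed one-year risk-free rate
T = 1.0                               # one year to expiration
pv_divs = 4.00 * math.exp(-r * 0.5)   # assumed ~$4 of dividends, rough present value

C = 33.55                             # deep in the money call premium (from the example below)
P_implied = C - S + K * math.exp(-r * T) + pv_divs

print(f"parity-implied out of the money put premium: {P_implied:.2f}")   # ~9.1
```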

Right now, the difference seems large.  The deep in the money covered call would therefore seem to be the more attractive choice, especially given that the cash held on reserve to secure the put option won’t be able to earn any interest in an IRA, and that the last dividend that accrues to the deep in the money covered call will be paid three months before expiration, increasing the cost of exercising the call early so as to receive the dividend (to capture it, the call’s holder would have to give up the roughly three months of option value remaining at that point).  If the holder of the call chooses to exercise it before expiration in order to receive the dividend, preventing us from receiving it, fine–we will be “freed” from a 12 month commitment three months early.

[Table: September 2016 $SPY deep in the money covered call returns at various strikes]

So consider the following strategy.  You take the $1,000,000 in the IRA and sell 60 September 2016 $SPY call contracts at Friday’s closing bid price of 33.55, simultaneously buying 6,000 shares of the $SPY ETF at Friday’s closing ask price of 192.59.  If the S&P remains above 1650 at expiration, the calls will be exercised, your shares will be sold away, and your one-year return will be 6.05%, or $60,500.  That includes $SPY’s ~$4.00 annual dividend, which you will collect in four payments over the one year period: this month, in December, in March, and finally in June.  In exchange for the 6.05% return, you will have to bear the S&P 500’s risk below 1650.  If the index ends up below 1650, the call contracts that were “hedging” your downside will expire worthless, and you will be left holding $SPY shares without protection.  However far the index happens to have fallen below 1650, the losses will be yours.
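Here’s the arithmetic of that trade as a quick sanity check, using the quoted prices and ignoring commissions and any interest on the small residual cash balance:

```python
# Sanity-checking the deep in the money covered call example above.

capital     = 1_000_000
shares      = 6_000
share_price = 192.59      # Friday's closing ask for SPY
call_bid    = 33.55       # Sep 2016 165-strike call, Friday's closing bid
strike      = 165.0
annual_div  = 4.00        # approximate SPY dividends collected over the year

outlay        = shares * share_price - shares * call_bid   # net cash out the door
leftover_cash = capital - outlay

# If SPY finishes above 165, the shares are called away at the strike.
proceeds_if_called = shares * strike + shares * annual_div
total_return = (proceeds_if_called + leftover_cash) / capital - 1

print(f"net outlay:   {outlay:,.0f}")        # 954,240
print(f"total return: {total_return:.2%}")   # ~6%, close to the 6.05% quoted
```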

In assessing the attractiveness of 1650 as a price, our tendency is to think back to when the S&P was last at that price.  The S&P was last at 1650 in the fall of 2013, during the trough of the debt ceiling crisis.  So, not very long ago.  But we have to remember that since then, the companies in the S&P 500 have retained substantial earnings–about $130 per share in GAAP EPS.  If current earnings trends hold up through expiration in September of next year, the total retained earnings will have amounted to almost $200.  Those earnings, which are still “in” the companies in the form of cash on the balance sheet, shrunken share count from buybacks, and growth from capex, represent real value that needs to be factored into the assessment.

From September of 2013 to expiration in September of 2016, the CPI will have increased by around 5% on a core basis.  Given that equities are a real asset, a claim on the output of real capital, that increase needs to be factored in as well.  So we use real, inflation-adjusted numbers.

To incorporate the value of retained earnings into comparisons of an index’s prices over time, we can add back cumulative retained earnings to the actual prices observed at prior dates.  Let me try to explain how that would work.  Suppose it’s September 2016, and we’re comparing the S&P’s current hypothetical price of 1650 to its price in September of 2013, which was also 1650.  We might think that because the prices are the same, the respective values are the same.  But over the period, the S&P retained $200 in earnings.  What that means is that the 2016 S&P has $200 more in value inside it than the 2013 S&P had.  To correct for that difference, we add the $200 back to the September 2013 price, to make it appear appropriately more expensive.

Making that adjustment, we find that the S&P 500 in September of 2013 was at a value corresponding to something closer to 1850, in comparison with the current price of 1650. Though the actual prices were the same, the true price now is cheaper than the true price was then, by roughly $200–the amount that was earned and retained in the interim.

Retained EPS adjustment works in the same way as inflation adjustment.  When we adjust for inflation, we raise past nominal prices to reflect what those prices would mean in today’s terms.   Similarly, when we adjust an index for retained EPS, we raise its past nominal prices to reflect what those prices would mean in today’s terms–relative to the value that exists in the index now, given the retained earnings.
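Here’s the adjustment in miniature, using the round numbers from the example above.  The ~$200 of cumulative retained GAAP EPS is the estimate cited earlier; the inflation step used in the chart below is omitted to keep the arithmetic minimal:

```python
# Comparing a September 2016 price of 1650 with the September 2013 price of
# 1650, after crediting the index for the earnings retained in the interim.
# (The chart that follows also inflation-adjusts past prices to a September
#  2016 basis; that step is left out here for simplicity.)

price_2013           = 1650.0
price_2016           = 1650.0
retained_eps_2013_16 = 200.0    # cumulative retained GAAP EPS, per the estimate above

adjusted_2013 = price_2013 + retained_eps_2013_16
print(adjusted_2013)                  # 1850.0
print(price_2016 - adjusted_2013)     # -200.0: the 2016 purchase is ~200 points "cheaper"
```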

The following chart normalizes the real prices of the S&P 500 to reflect cumulative retained EPS using a September 2016 basis:

[Chart: real, inflation-adjusted S&P 500 price with cumulative retained EPS added back to prior dates, September 2016 basis (blue line, left axis), alongside the actual nominal price (green line, right axis)]

The blue line (left axis) is the real, inflation-adjusted price of the S&P 500 with retained EPS added back to prior dates to make them look appropriately more expensive.  The green line (right axis) is the actual nominal price that the index traded at on the given date.  In asking the question, what is it like, from a valuation perspective, to buy the S&P at 1650 in September of 2016, we look for those past dates where the real price of the index, adjusted for retained EPS using a September 2016 basis, was 1650.  Those are found in the places where the blue line intersects the red line drawn at the 1650 level.  I’ve boxed them in black.  To get an idea of what it would be like, from a valuation perspective, to buy the S&P at 1650 in September of 2016, we look at what the actual nominal prices (green line) were on those dates, and use those as a point of reference.

So, with the value of retained EPS appropriately accounted for, buying the S&P at 1650 in September of 2016 would have been “kind of like” buying the S&P at 1335 in May of 2012, or at 1300 in January of 2012, or at 1260 in June of 2011, or at 1185 in October of 2010, or at 1150 in March of 2010.  We can argue over how attractively valued the market was on those dates, but everyone will agree that it was substantially more attractively valued then than it is now, or than it’s been at any time in the last three years.

Another way to assess the value is to simply look at the extent of the decline from the peak. The following table shows the declines for the various strike prices:

[Table: percentage declines from the peak represented by the various strike prices]

A price of 1650 (165 for the $SPY) would represent a 22.5% correction from this year’s highs, and a 14.3% correction from Friday’s close.  Those are pretty fat corrections to serve as buffers.  Unless you believe the bull market is over, a 6% annual return in exchange for owning the risk below them seems like a pretty good trade, especially given the lack of alternatives.

Of course, for investors that are more conservative, strike prices that carry less risk–but that still offer an attractive return–are available.  To offer an example, you could earn roughly 4% in exchange for taking on the S&P’s risk below 1350.  Is it realistic to expect the current China-related turmoil to fuel a correction all the way down to 1350, 37% from the peak? If so, I wouldn’t mind being forced to buy.

The 10 year treasury yields roughly 2%.  That return comes with meaningful duration risk.  If long rates rise by 100 bps, back to where they were at the end of 2013, you will lose roughly 10% on the price change.  If they rise by 200 bps, to near where they were in late 2009, you will lose roughly 20%.  Contrast that risk-reward proposition with the risk-reward proposition of a one year deep in the money covered call at a strike price of 1350.  There’s no interest rate risk in that investment.  There’s only price risk–the risk that the S&P will have corrected by more than 37% from this year’s highs one year from now.  In exchange for taking it, you earn double the treasury return.
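For readers who want to check the bond side of that comparison, here’s a rough repricing of a 10 year, 2% coupon note under those rate moves.  Annual coupons are assumed for simplicity, so the outputs are ballpark figures in the neighborhood of the round 10% and 20% losses cited above, not exact matches:

```python
# Rough repricing of a 10-year, 2%-coupon par bond as yields back up.

def bond_price(face, coupon_rate, yield_, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_) ** years
    return pv_coupons + pv_face

par = bond_price(100, 0.02, 0.02, 10)            # prices at 100 when yield = coupon
for dy in (0.01, 0.02):
    new_price = bond_price(100, 0.02, 0.02 + dy, 10)
    print(f"+{dy:.0%} in rates: price change {new_price / par - 1:.1%}")
# -> roughly -8.5% and -16.2%
```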

To further gauge the value, we can look at the Shiller CAPE values at the various strike prices:

[Table: Shiller CAPE values at the various strike prices]

At 1650 in September of 2016, the S&P will be trading at a Shiller CAPE (GAAP) of 20.35–a value very close to the Shiller CAPE values observed on the dates that we analogized the 1650 level to earlier, using real retained EPS adjustments.  The fact that the Shiller CAPE values are all roughly the same confirms that the adjustment technique is valid:

[Table: Shiller CAPE (GAAP) values on the analogized historical dates]

Using the real reversion technique, and conservatively assuming a mean-reversion in the Shiller CAPE to the post-war average of 17 (a value that the measure has spent less than 5% of its time at over the last 25 years), the implied 10 year total return on the S&P 500 from a September 2016 price of 1650 will be roughly 4% real, 6% nominal.  That’s a reasonably attractive return, given the alternatives.  At 1350, the implied 10 year total return will be roughly 6% real, 8% nominal–roughly in line with the historical average, and very attractive relative to the alternatives.
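As a simplified stand-in for that calculation, the 10 year return can be decomposed into valuation drift, real earnings growth, and dividend yield.  The growth and dividend-yield inputs below are rough placeholder assumptions; the point is the shape of the arithmetic, which lands in the neighborhood of the figures just cited:

```python
# Simplified decomposition of a 10-year real return estimate:
#   valuation drift + real earnings growth + dividend yield.
# The growth and dividend-yield inputs are placeholder assumptions.

cape_now, cape_target, horizon = 20.35, 17.0, 10
real_eps_growth = 0.025     # assumed
dividend_yield  = 0.030     # assumed yield at a 1650 purchase price

valuation_drift = (cape_target / cape_now) ** (1 / horizon) - 1    # ~ -1.8%/yr
implied_real_return = valuation_drift + real_eps_growth + dividend_yield

print(f"{implied_real_return:.1%}")   # ~3.7% real, in the neighborhood of the
                                      # roughly 4% real / 6% nominal cited above
```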

Now, there are two major risks to selling out of the money put options or deep in the money covered call options. Let’s think more carefully about them:

First, the market could go substantially lower than our strike–it could crash. If that were to happen, we would lose money. But on the bright side, we would lose substantially less money than our fellow long investors, who would get destroyed. Moreover, we would have bought the market at an attractive valuation relative to recent past averages. Importantly, because we aren’t using leverage, we would be able to stay with the position. Over time, the position would recover and produce a decent return–not spectacular, but decent.

Second, the index could stay above the strike, but fall from its current value, presenting a buying opportunity that we might want to take advantage of, but that we won’t be able to, at least not without closing out the option trade and taking a loss.  Unless adequate time passes, generating sufficient option decay, a falling market will put the option trade in the red.

But being realistic, if we choose to hold cash instead of putting the option trade on, we probably won’t succeed in capitalizing on the hypothetical opportunity anyway.  Whenever the bottom comes, we’ll surely miss it, and then we’ll refuse to buy the market as it moves higher, given the fact that we could have bought it lower, and that it could easily fall back down to where it was.  Note that if we do put the option trade on, and we decide to take a loss on it to capitalize on some other opportunity, the loss isn’t going to be very large–at most, a few hundred basis points.  We’ll be happy to take it, given that we’ll be trading into something better.

Investing is about converting time into money.  The best way to convert time into money in an equity environment where there isn’t much near-term downside or upside, where fear has spiked, and where participants are willing to overpay for insurance, is to sell that insurance.  On average, over time, those that employ a strategy of selling insurance will fare better than those that wait and try to catch bottoms.

Empirical Evidence: Put-Write [Post-Mortem]

The CBOE keeps an index, called the put-write index, which tracks the performance of a strategy of selling, on a monthly basis, at the money one-month put options on the S&P 500.  The following chart shows the performance of the put-write strategy ($PUT) relative to buy and hold, from January 1990 to January of 2015:

[Chart: CBOE put-write index ($PUT) vs. S&P 500 buy and hold, January 1990 to January 2015]

As you can see, the strategy of selling put options wins out over buy and hold, at least on a pre-tax basis.  The outperformance suggests that market participants overpay for protection relative to what it’s actually worth.  On average, whoever was buying the options that the put-writers were selling was making a mistake.

Now, in the initial version of this piece, I proposed a strategy–called $PUT Switching–designed to switch into and out of the put-write strategy based on option valuation.  The strategy worked as follows.  If the monthly close of the volatility index (VIX), the best proxy for option valuation, is above 20, then for the next month, you invest in the put-write strategy.  If the monthly close of the volatility index is below 20, then for the next month, you buy and hold the index.
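For anyone who wants to replicate it, here’s a bare-bones version of the switching rule.  It assumes you’ve assembled three series of exact month-end closes–the VIX, the $PUT total return index, and an S&P 500 total return index–and it reads the signal at each month-end close and applies it to the following month (using month-end closes rather than monthly averages is exactly the fix discussed below):

```python
import pandas as pd

def put_switching(monthly_closes: pd.DataFrame, vix_threshold: float = 20.0) -> pd.Series:
    """
    monthly_closes: exact month-end closes with columns
      'VIX'   - CBOE volatility index
      'PUT'   - CBOE put-write total return index
      'SPXTR' - S&P 500 total return index
    Returns the cumulative growth of $1 invested in the switching strategy.
    The signal is read at each month-end close and applied to the NEXT month,
    so no intramonth prices are ever used.
    """
    rets = monthly_closes[["PUT", "SPXTR"]].pct_change()
    signal = (monthly_closes["VIX"] > vix_threshold).shift(1)     # prior month-end signal
    hold_putwrite = signal.fillna(False).astype(bool)
    strat_ret = rets["PUT"].where(hold_putwrite, rets["SPXTR"])   # PUT if VIX > 20, else index
    return (1.0 + strat_ret.fillna(0.0)).cumprod()

# Usage sketch (assembling the three series is left to the reader):
# closes = pd.read_csv("monthly_closes.csv", index_col=0, parse_dates=True)
# growth_of_dollar = put_switching(closes)
```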

I illustrated the performance of $PUT Switching in the following chart:

[Chart: performance of $PUT Switching vs. buy and hold]

Astute reader @econompic e-mailed in to say that he was unable to reproduce this result.  So I took a closer look.  I realized that I was carelessly using Robert Shiller’s spreadsheet to build the S&P 500 total return index.  That’s a mistake in this context, because the monthly prices in that spreadsheet are not end-of-month closing prices; they are averages of the prices that prevailed over the month.  The strategy, then, was transacting at hypothetical “average” prices that were not concretely available to buyers and sellers at the assumed time of transaction.  That’s cheating.

The strategy is supposed to look at the VIX at the close of the month, and then transact into or out of the market at the closing price for that month, holding until the close of the next month.  What it was actually doing, given my use of Shiller’s data as the basis for the S&P’s total return index, was looking at the VIX at the close of the month, and then transacting into and out of the market at the average price for the month.  There’s an inherent retrospective bias in that construction, a bias that artificially boosts the performance of the strategy.

If the VIX is high at the close of the month, conditions are probably bearish, and therefore the price at the close of the month is probably going to be lower than the average price for the month.  If I model the strategy as transacting at the average price for the month, rather than at the closing price, I will probably be giving the strategy a better price at which to switch out of the market than would have actually been available to it at the time of the decision to switch.  The same is true in the other direction.  If the VIX is low at the close of the month, conditions are probably bullish, and therefore the price at the close of the month is likely to be higher than the average price for the month.  If I model the strategy as transacting at the average price for the month, rather than at the closing price, I will probably be giving the strategy a better price at which to switch into the market than would have actually been available to it at the time of the decision to switch.

Unfortunately, that mistake explains most of the apparent outperformance that you see in the chart above. The following chart corrects the mistake, using an S&P index of exact monthly closing prices:

[Chart: corrected performance of $PUT Switching, using exact monthly closing prices]

The 4% annual excess return falls to 1%, with the bulk of the outperformance clustered in the most recent cycle.  Hardly as impressive.  My apologies!

I stand behind everything else I said.  The VIX closed on Friday at 27, double the average of the last three years.  If its current elevation is a sign that the market is headed for a deeper downturn, then fine. We’ll do much better selling out of the money puts or deep in the money covered calls than we will going long equities.  If we’re eventually forced to buy into the market, we’ll be buying at a price that we’re comfortable with, maybe even excited about–as opposed to the current price, which is uninspiring.  If the VIX’s current elevation doesn’t mean anything, if it’s just a symptom of an overdone market trying to correct itself, using headline events as the excuse, then even better.  We’ll get paid handsomely to insure against exaggerated risks that aren’t going to come to fruition.

(Disclaimer: The information in this piece is personal opinion and should not be interpreted as professional investment advice.  The author makes no representations as to the accuracy, completeness, suitability, or validity of any of the information presented.)

(Disclosure: Long $SPY, short January 2016 170 calls, short September 2016 160 calls.)


Fiscal Inflation Targeting and the Cost of Large Government Debt Accumulation

“You know, Paul, Reagan proved that deficits don’t matter.  We won the mid-term elections, this is our due.”

— Vice President Dick Cheney defending a second round of tax cuts against the objection of Treasury Secretary Paul O’Neill, shortly after the 2002 mid-term elections.

Over the last two decades, the Japanese economy has failed to generate healthy levels of inflation.  Some might wonder why that’s a problem.  It’s a problem because healthy inflation is the only reliable indication that an economy is fully utilizing the labor and capital resources available to it.  The fact that Japan has had no inflation, or has even deflated, confirms that it has been operating below its potential, at the expense of the living standards of its population.

What should Japanese policymakers do to restore inflation to healthy levels?  This question is extremely important, not only for the economic future of Japan, but for the economic future of the entire world.  The causes of persistently weak inflation in Japan aren’t entirely understood, but it’s possible–and likely–that they are tied to the effects of slowing population growth, aging demographics, and a growing scarcity of worthwhile investment opportunities, ways to add real value to the economy by creating new capacities that consumers will genuinely benefit from and be eager to spend their incomes on.  If that’s the case, then the entire world will eventually face the problems that Japan currently faces, as the entire world is on Japan’s demographic and developmental path.

For the past 2 years, Japanese policymakers have attempted to use unconventional monetary policy to stimulate inflation.  The Bank of Japan (BOJ) has set an explicit 2% target for the inflation rate, and has effectively promised to purchase whatever quantity of assets it needs to in order to bring the inflation rate to that target.  Unfortunately, asset purchases don’t affect inflation, and so the BOJ has essentially been wasting everyone’s time. In terms of the actual data, the policy has been a clear failure, with core Japanese YOY CPI running at a pathetic 0%, despite a two-year doubling of Japan’s already enormous monetary base.

The solution for Japan, and for any economy that is underutilizing its resources, is to implement an inflation targeting policy that is fiscal rather than monetary–what we might call “Fiscal Inflation Targeting.”  In Fiscal Inflation Targeting, the legislature sets an inflation target–e.g., 2% annualized–and gives the central bank the power to change the rates on a broad-based tax–for example, the lower brackets of the income tax–as needed to bring inflation to that target.  To address scenarios in which inflation chronically undershoots the target, the central bank is given the power to cut tax rates through zero, to negative values, initiating the equivalent of transfer payments from the government to the private sector.  Crucially, any tax cuts or transfer payments that the central bank implements under the policy are left as deficits, and the ensuing debt is allowed to grow indefinitely, with the central bank only worrying about it to the extent that it impedes the ability to maintain inflation on target (via a mechanism that will be carefully explained later).  Note that other macroeconomic targets, such as nominal GDP (nGDP), can just as easily be used in lieu of inflation.

A policy of Fiscal Inflation Targeting would be guaranteed to achieve its stimulatory goals. In terms of direct effects, tax cuts and transfers directly increase nominal incomes, and there is no level of inflation that cannot be achieved if nominal incomes are broadly increased by a sufficient amount.  In terms of indirect effects, having such a powerful tool to use would dramatically increase the economic impact of the central bank’s communications.  On the current approach, to stimulate at the zero lower bound, the central bank is limited to the use of balance sheet expansion, which works primarily by placebo, if it works at all.  But in Fiscal Inflation Targeting, the central bank would have the equivalent of real drugs to use. Economic participants would therefore have every reason to trust in its power, and to act as if its targets would be consistently met.

The insight that fiscal policy should be used to manage inflation, in the way that monetary policy is currently used, is not new.  It was introduced many decades ago by the founders of functional finance, who were the first to realize that inflation, and not the budget, is what constrains the spending of a sovereign government.  Advocates of modern monetary theory (MMT), the modern offshoot of functional finance–notably Scott Fullwiler of Wartburg College–have offered policy ideas for how to implement a fiscally-oriented approach.

My view, which I elaborated on in a 2013 piece, is that the successful implementation of any such approach will need to involve the transfer of control over a portion of fiscal policy from the legislature and the treasury to the central bank.  Otherwise, the implementation will become mired in politics, which will prevent the government’s fiscal stance from appropriately responding to changing macroeconomic conditions.

There are concerns that such a policy would be unconstitutional in the United States, since only the legislature has the constitutional authority to levy taxes.  But there is no reason why the legislature could not delegate some of that authority to the Federal Reserve in law, in the same way that it delegates its constitutional authority to create money.  In the cleanest possible version of the proposal, the legislature would pass a law that creates a special broad-based tax, and that identifies a range of acceptable values for it, to include negative values–say, +10% to -10% of earned income below some cutoff.  The law would then instruct the Federal Reserve to choose the rate in that range that will best keep inflation on target, given what is happening elsewhere in the economy and elsewhere in the policy arena.
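To make the division of labor concrete, here’s a toy sketch of the kind of feedback rule the central bank might follow within the legislated band.  Nothing about the specific rule–the response coefficient, the band, the update frequency–comes from the proposal itself; it’s just an illustration of “choose the rate in the range that best keeps inflation on target”:

```python
# Toy feedback rule for Fiscal Inflation Targeting: each period, the central
# bank nudges a legislated broad-based tax rate up or down in proportion to
# the inflation gap, staying inside the band set by the legislature.
# The coefficient, band, and update frequency are illustrative assumptions.

INFLATION_TARGET = 0.02
TAX_RATE_BAND = (-0.10, 0.10)     # legislated range; negative = transfer payments
RESPONSE_COEFF = 0.5              # assumed sensitivity to the inflation gap

def next_tax_rate(current_rate: float, observed_inflation: float) -> float:
    gap = observed_inflation - INFLATION_TARGET
    proposed = current_rate + RESPONSE_COEFF * gap   # inflation too low -> cut the tax
    lo, hi = TAX_RATE_BAND
    return max(lo, min(hi, proposed))

# Example: inflation running at 0% with the tax rate currently at 0%:
print(next_tax_rate(0.00, 0.00))   # -0.01 -> a 1%-of-income transfer to households
```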

Ultimately, the chief obstacle to the acceptance and implementation of fiscal inflation targeting is the fear that it would lead to the accumulation of large amounts of government debt.  And it would, particularly in economies that face structural weakness in aggregate demand and that require recurrent injections of fiscal stimulus to operate at their potentials.  But for those economies, having large government debt wouldn’t be a bad thing.  To the contrary, it would be a good thing, a condition that would help offset the weakness.

The costs of large government debt accumulation are not well understood–by lay people or by economists.  In this piece, I’m going to try to rigorously work out those costs, with a specific emphasis on how they play out.  It turns out that there is currently substantial room, in essentially all developed economies that have sovereign control over credible currencies, to use expansive fiscal policy to combat structural declines in inflation, without significant costs coming into play.

The reader is forewarned that this piece is long.  It has to be, in order to make the mechanisms fully clear.  For those that want a quick version, here’s a bulleted summary of the key points:

  • Government debt accumulation increases the net financial wealth of the private sector. (Note: for convenience, from here forward, we will omit the “net” term, and use the terms “net financial wealth” and “financial wealth” to mean the same thing).
  • Increases in financial wealth can lead to increases in spending which can lead to increases in inflation.  The effects need not be immediate, but may only show up after conditions in the economy have changed such that the economy’s wealth velocity–the speed at which its total stock of financial wealth “circulates” in the form of expenditures–and its wealth capacity–its ability to store financial wealth without overheating–have risen and fallen respectively.
  • From a policy perspective, the way to reduce an economy’s wealth velocity and increase its wealth capacity is to raise its interest rate.  But when government debt is overwhelmingly large, interest rate increases tend to be either destabilizing or inflationary, depending on how the government funds the increased interest expense that it ends up incurring.
  • The countries that are the best candidates for fiscal inflation targeting are those that have structurally high wealth capacities–those that are able to hold large amounts of financial wealth without overheating, and that are likely to retain that ability indefinitely into the future.  Examples of such countries include the United States, Japan, the U.K., and the creditor countries of the Eurozone.

The piece is divided into two parts.  I begin the first part by specifying what counts as a “cost” in an economic policy context.  I then examine four myths about the costs of large government deficits: (1) That there can be no free lunch, (2) That large government deficits are unsustainable, (3) That large government deficits cause interest rates to rise, and (4) That large government deficits are necessarily inflationary.

In the second part, I explain the mechanism through which fiscal inflation targeting, and any policy approach that generates exceedingly large quantities of government debt, can sow the seeds of a future inflation problem.  I begin by introducing the concepts of “wealth velocity”, a modification of the more economically familiar term “money velocity”, and “wealth capacity”, a concept analogous to “heat capacity” in thermodynamics.  I then explain how changes in wealth velocity and wealth capacity over time can cause a stock of accumulated government debt that wasn’t inflationary to become inflationary, and how the ability of monetary policy to respond appropriately can be curtailed by its presence.  I conclude the piece with a cost-benefit analysis of fiscal inflation targeting, identifying the countries in the world that are currently the best candidates for it.

Functional Finance and “Costs”: A Focus on the Well-Being of Persons


Suppose that to keep the economy on a 2% inflation target, a government would have to run a large deficit–say, 10% of GDP–in perpetuity.  What would the cost of perpetually running such a deficit be?  Answer: the eventual accumulation of a large quantity of government debt.  But this answer just pushes the question back a step.  What is the cost of accumulating a large quantity of government debt? Why should such an accumulation be viewed as a bad thing? What specific harm would it bring?

To speak accurately about the “cost” of large government debt, we need to be clear about what kinds of things count as real costs.  To that end, we borrow from the wisdom of the great economist Abba Lerner:

“The central idea is that government fiscal policy, its spending and taxing, its borrowing and repayment of loans, its issue of new money and its withdrawal of money, shall all be undertaken with an eye only to the results of these actions on the economy and not to any established traditional doctrine about what is sound or unsound.  This principle of judging only by effects has been applied in many other fields of human activity, where it is known as the method of science as opposed to scholasticism.  The principle of judging fiscal measures by the way they work or function in the economy we may call Functional Finance.” — Abba Lerner, “Functional Finance and the Federal Debt”, 1943.

In the context of economic policy, a real cost is a cost that entails adverse effects on the well-being–the balance of happiness and suffering–of real people, now or in the future.  A good example of a real cost would be poverty brought on by unemployment.  Poverty brought on by unemployment entails concrete suffering for the afflicted individuals.

Another good example of a real cost would be hyperinflation. Hyperinflation represents a nuisance to daily life; it undermines the enjoyment derived from consumption, forcing consumers to consume because they have to in order to avoid losses, rather than because they want to; it creates an environment in which financially unsophisticated savers end up losing what they’ve worked hard to earn; it makes contractual agreements difficult to clearly arrange, and therefore prevents economic parties from engaging in mutually beneficial transactions; it retards economic growth by encouraging allocation of labor and capital to useless activities designed to protect against it–e.g., precious metal mining. Each of these effects can be tied to the well-being of real people, and therefore each is evidence of a real cost.

In contrast, “not having our fiscal house in order” or “owing large amounts of money to China” or “passing on enormous debts to our children” are not real costs, at least not without further analysis.  They don’t, in themselves, entail adverse effects on the well-being of real people, now or in the future.  Their rhetorical force comes not from any legitimate harms they cite, but from their effectiveness in channeling the implied “moral guilt” of debt accumulation–its connection to short-termism, hedonism, selfishness, impulsiveness, recklessness, irresponsibility, and so on.  In that sense, they are like the prevailing rhetorical criticisms of homosexuality.  “But that’s gross!” is not a valid objection to consensual love-making activities that bring happiness to the participants, and that cause no harm to anyone else.  Similarly, “But we’re spending money we don’t have” is not a valid objection to fiscally expansive policies that improve the general economic well-being without attaching adverse short or long-term economic consequences.

None of what is being said here is meant to deny, off the bat, that large government debt accumulation can bring adverse economic consequences.  It certainly can.  But to be real, and to matter, those consequences need to involve real human interests–they can’t simply be empty worries about how a government’s finances are “supposed to be” run.

Myth #1 — There Can Be No Free Lunch

Imagine a primitive, specialized economy that trades not in money, but in promises. You are skilled at making food, and have extra leftovers from your recent meal to share; I am skilled at construction, and can build shelter–a hut–for you.  Your present hut is currently adequate, but it will eventually fall apart and need to be rebuilt.  So we make a deal. I will rebuild your hut, on your request, and, in exchange, you will give me your leftover food.


Importantly, there’s a time lag between the delivery of my end of the deal and the delivery of your end.  You have food right there, ready to give, and I am ready to take it, now.  But the rebuilding of your hut is only going to take place later, when the need arises.  It follows that in this trade, an actual good–food, right here in front of both of us–is being exchanged for a potential good, a promise to do something at some point in the future, on request.

Now, let’s suppose that my appetite for food is enormous, and that I enter into similar deals with other people, in order to get food from them.  As I go on accumulating more and more hut-building debt, I think to myself, “This is great. I can eat as much as I want, and I don’t have to actually do any work.” Using this tactic, will I be able to secure a “free lunch” for myself–literally, a lunch that I will never have to repay?

The obvious answer is no.  The people that are accepting my promises as compensation aren’t idiots.  They are accepting them in order to one day use them.  If I issue more promises than I can reasonably make good on, then when the owners of those promises eventually come to me wanting to use them, wanting me to rebuild their huts, I’m not going to be able to deliver what I owe.  I won’t have sufficient time or resources to rebuild the shelters of all of the people that I’ve made promises to.  That’s where the perceived “free lunch” will fall apart.  For some, my lunches won’t have been free lunches–they will have been stolen lunches, lunches that I wrongly took without the prospect of repayment, and that I am sure to be retaliated against for having taken.

Our natural inclination is to extend this intuition to the operation of specialized economies that trade in money.  Suppose that there is a disabled homeless man that cannot find gainful employment, a way to contribute.  It’s not his fault–his circumstances are such that there just isn’t anything useful that he can do for anyone, no value that he can add to anyone’s life that would make anyone want to pay him anything.  That said, he has certain needs–food, shelter, clothing, medical attention, and so on.  From a general humanitarian perspective, we want those needs to be met.

As a society, how might we ensure that his needs are met?  The “fiscally honest” way would be to require anyone that earns income to give a portion of that income to the government through taxes, and to then have the government disburse the proceeds to him, and to others like him, to spend on basic necessities.  In imposing this requirement, we would be confiscating a portion of their potential consumption, which they’ve earned through their productive contributions, and which they own in the form of the money they are holding, and transferring that consumption to him.

But is that the only way to ensure that his needs are met?  What if instead of taking money from income earners, and giving it to him and to others like him, we were to simply create new money–print it up from nothing?  In printing new money and giving it to him, we would be giving him the ability to purchase the things he needs, without having to take away anyone else’s money–the undesirable part of the process that we would surely avoid if we could.

Our prior intuition, that there are no free lunches, enters the picture here, and causes us to search for a hidden cost in the approach–a consequence that we haven’t properly acknowledged or accounted for. Surely, things can’t be that easy, that we would be able to use money creation to entirely circumvent the need to actually part with the things that we give away to others.  In terms of what that cost actually is, we normally assume it to be inflation.  In creating new money and spending it, we reduce the value of existing money. In this way, we take wealth from those that are holding it in the form of money, and transfer it to those that aren’t.

But there’s a mistake in this thinking.  The seeds of the mistake were identified long ago, by two philosophers who are far more famous for other ideas: the British philosopher David Hume and the German philosopher Arthur Schopenhauer.

Beginning with Hume,

“It is also evident, that the prices do not so much depend on the absolute quantity of commodities and that of money, which are in a nation, as on that of the commodities, which come or may come to market, and of the money which circulates.  If the coin be locked up in chests, it is the same thing with regard to prices, as if it were annihilated; if the commodities be hoarded in magazines and granaries, a like effect follows.  As the money and commodities in these cases never meet, they cannot affect each other.”  — David Hume, Essays, Moral, Political, Literary, “Of Money“, 1741

Hume’s point was that a simple increase in the stock of money is not enough to cause an increase in prices.  To be inflationary, increases in the money stock have to lead to increases in spending.  If they do not lead to increases in spending, then they will not affect the balance of supply and demand–the balance that determines the trajectory of prices.

Schopenhauer summed up the second seed of the mistake in the following quotes:

“People are often reproached because their desires are directed mainly to money, and they are fonder of it than of anything else. Yet it is natural and even inevitable for them to love that which, as an untiring Proteus, is ready at any moment to convert itself into the particular object of our fickle desires and manifold needs. Thus every other blessing can satisfy only one desire and one need; for instance, food is good only to the hungry, wine only for the healthy, medicine for the sick, a fur coat for winter, women for youth, and so on. Consequently, all these are only relatively good. Money alone is the absolutely good thing because it meets not merely one need considered concretely, but all needs considered abstractly.” — Arthur Schopenhauer, The World as Will and Representation, Volume I, 1818.

“Money is human happiness in the abstract: he, then, who is no longer capable of enjoying human happiness in the concrete devotes his heart entirely to money.” — Arthur Schopenhauer, Counsels and Maxims, 1851.

The mistake is to assume that anyone who exchanges the output of her time, labor, and capital for money does so because she eventually wants to use that money to consume the output of the time, labor, and capital of someone else.  If money had no value outside of its use in consumption, then this assumption might make sense.  It would be irrational for an individual to work for money that she wasn’t ever going to spend–she would essentially be working for free.  But in an economic system where money is the primary mode of trade, it acquires intangible value–value unrelated to the actual purchase of any concrete good or service.  It comes to represent abstract, psychological goods: happiness, accomplishment, success, optionality, ability, power, safety, security, status, respect, and so on.  A person may have accumulated enough of it to meet her actual future consumption needs many times over, but she will still seek more of it, in pursuit of those goods.

Suppose that Warren Buffett were to make a series of bad investments that caused the monetary market value of his wealth–currently, $72.3B–to permanently shrink by 99.9%, leaving “only” $72.3MM to his name.  The loss would not in any way affect his consumption or the consumption of his heirs, for neither he, nor they, are likely to put either amount to consumptive use.  But still, he would be extremely upset by it, and would go to great lengths to avoid it.  Why?  Because money holds intangible value to him, value that is unrelated to its actual use in funding his current and future consumption expenditures.

Behaviorally, he has been trained, from his youth, his days as a much poorer man, to respect money, and to never let it be wasted or left on the table. It is something that is supposed to be cared for, nurtured, grown over time–not shrunk.  To lose so much of it would therefore be frustrating and unpleasant for him, even if the loss made no difference to his lifestyle.

Why does Buffett care about money?  What value does it bring him?

  • Success, Achievement, Scorekeeping.  Money is the way that success and achievement are measured, kept score of–in Buffett’s case, success and achievement at the game of investing, a game that he enjoys playing, and that he wants to win at.  Money is the reward that makes investing a serious game, a game of real significance, that brings pleasure when played well.
  • Power, Freedom.  The knowledge that he can do and have any possible thing that he wants, whenever he wants it, in whatever quantity he wants it–up to infinity. That knowledge brings satisfaction, even if the underlying capacity will never be put to use.
  • Safety, Security.  The knowledge that his wants and needs will never go unmet, that his standard of living, and those of the people he cares about, will never fall to unwanted levels, that the causes he believes in will always have an able advocate in him.  That knowledge makes it easier for him to enjoy the things that he actually does partake in, removing the possibility–and therefore the fear–that his ability to partake in them might one day be compromised.
  • Social Status.  The respect of other people, who admire it as an amazing accomplishment, who rightly interpret it as a sign of his acumen and his value to society, and–let’s be honest–who want to be close to him because of that.  If its value were to fall dramatically, he would lose some of that admiration, that respect, that special attention that he gets from the world.  As a human being with pride, he would surely suffer at the loss.

Many successful individuals in our society that have accumulated large amounts of wealth, Buffett included, have pledged to give it all to charity.  But notice that the pledges are always pledges to give it all to charity at death, never to give it all to charity right now.  But why not give it all to charity right now, keeping only what is necessary to fund future consumption?  Because to do so would require parting with the intangible goods that it confers right now, and that it will continue to confer up until death.

From a Humean perspective, then, Warren Buffett’s $72.3B might as well be “locked up in chests”–it does not circulate in expenditures, and therefore does not affect prices.  That is precisely where the potential “free lunch” of a government deficit lies.  If money is printed anew and given to Warren Buffett, in exchange for work that he does for others, the money will go into his piggy bank, where it will have no effect on anything.  The work done will therefore have been provided at no cost to anyone–no cost to Buffett, no cost to the recipients, and no cost to the collective.  It will have been paid for by the intangible goods that attach to money, goods that the government can create for free.

Now, Warren Buffett is an extreme example.  But any person who puts income into savings, never to be consumed, is operating on the same principle.  And a large number of the participants in our global economy–who collectively control enormous amounts of wealth–do just that.  They put their income into savings, which they never end up consuming.  The fact that they do this is the reason that a significant portion of the world cannot find gainful employment, a way to contribute.  It’s also the reason that a free lunch is on the table, a free lunch that could solve the unemployment problem, if only policymakers knew that it was there to be taken.

Let’s make the point more precise.  There are two things that a person can do with income. First, spend it–use it to consume the economy’s output.  Second, save it. There are two ways that a person can save income.  First, by holding it as money (or trading it for an existing asset, in which case the seller of the asset ends up holding it as money). Second, by investing it–and by “investing it” I mean investing it in the real economy, using it to fund the construction of a new economic asset–a home, a factory, etc.–that didn’t previously exist.

Inflation occurs in environments where excessive demand is placed on the economy’s labor and capital resources–demand for which there is insufficient supply.  From the perspective of inflation, the only income that directly matters is income that is spent or invested–income that is used to put demand on the economy’s labor and capital resources. Income that is held as savings does not put demand on those resources, and therefore has no effect on prices.  It follows that if a government creates new money to finance a deficit, delivering that money as income to someone in the form of a tax cut, a transfer, or a direct payment in exchange for goods and services provided, and if the receiver of the income is not inclined to spend or invest it, but instead chooses to hold it idly in savings, in pursuit of the intangible goods that savings confer, then a free lunch is possible.  Everyone can benefit, without anyone having to sacrifice.

Now, free lunches aren’t on the table everywhere.  But in the economies of the developed world, where there are large output gaps and large overages in the demand for savings relative to the demand for investment, free lunches are on the table.  Unfortunately, many policymakers don’t understand how they work, and therefore haven’t been able to take advantage of them.

Myth #2 — Large Government Deficits Are Unsustainable

If a government runs a large deficit in perpetuity, the debt will grow to an infinitely large value, a value that the government won’t realistically be able to pay back.  But there’s nothing wrong with that.  Government debt is supposed to grow to infinity, along with all other nominal macroeconomic aggregates.  It isn’t supposed to ever be paid back.

In essentially every country and economy that has ever existed, nominal government debts have grown indefinitely.  They have never been fully paid back, only refinanced or defaulted on. The following chart shows the gross nominal debt of the U.S. Federal Government from 1939 to present (FRED: Gross Federal Debt, log scale).  As you can see, there was never a sustained period in which any substantial portion of it was paid down:

[Chart: Gross Federal Debt, 1939–present (FRED, log scale)]

What matters is not the nominal quantity of debt that a government owes, but the ratio of that quantity to the economy’s nominal income, which is the income stream from which the government draws the tax revenues that it uses to service the debt.  That income stream also grows to infinity, and therefore the ratio of government debt to income can stabilize at a constant value, even as the government continues to run deficits year after year after year.

Let d refer to the size of the primary government deficit (before interest expense is paid) as a percentage of GDP, let i refer to the average annual interest rate paid on the outstanding government debt, and let g refer to the economy’s nominal annual GDP growth, given as a percentage of the prior year’s GDP.  We can then write a differential equation for the rate of change of the debt-to-GDP ratio over time: at any moment, the ratio is changing at a rate equal to d + (i – g) * (Debt / GDP).  Setting that rate equal to zero and solving, we get the following equation for the debt-to-GDP ratio at equilibrium:

(1) Debt / GDP = d / (g – i)

When we say that a deficit is sustainable, what we mean is that running it in perpetuity will produce a Debt-to-GDP ratio that stabilizes at some value, rather than a debt-to-GDP ratio that grows to infinity.  Per equation (1), any government deficit, run in perpetuity, will be sustainable, provided that the economy’s equilibrium nominal growth rate g exceeds the equilibrium interest rate i paid on the debt.

To illustrate the application of equation (1), suppose that to keep the economy on a 2% inflation target, with 4% nominal growth, a government would have to run a primary deficit of 5% of GDP in perpetuity.  Suppose further that the nominal interest rate that the central bank would have to set in order to control inflation under this regime would be 1%. At what value would the Debt-to-GDP ratio stabilize?

To answer the question, we note that d is 5%, g is 4%, and i is 1%.  Plugging these values into the equation, we get,

(2) Debt / GDP = 5% / (4% – 1%) = 5% / 3% ≈ 167%

Now, let’s increase the interest rate i from 1% to 3%.  Plugging 3% into the equation, we get,

(3) Debt / GDP = 5% / (4% – 3%) = 5%/1% = 500%

As you can see, the Debt-to-GDP ratio is extremely sensitive to changes in the interest rate paid on the debt, particularly as that rate gets closer to the economy’s nominal growth rate.

The following chart shows the amount of time that it would take to get to the 500% Debt-to-GDP equilibrium, assuming a 200% Debt-to-GDP starting point (roughly, Japan’s current ratio, in gross terms).

[Chart: time required to move from a 200% Debt-to-GDP starting point toward the 500% equilibrium]

As you can see, it would take a very long time–more than 400 years.  Just to get to 300% Debt-to-GDP would take over 50 years.  It’s misguided, then, for policymakers to worry about the sustainability of deficits that will only need to be run, in worst case scenarios, for a decade or two.
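
To make equation (1) and the speed of convergence concrete, here is a minimal Python sketch (an illustration of my own, using the hypothetical parameter values from the examples above; the exact year counts depend on compounding and deficit-timing conventions, so they may differ somewhat from those shown in the chart):

    # Minimal sketch: iterate debt and GDP year by year and track the Debt/GDP ratio.
    # d = primary deficit (% of GDP), i = interest rate on the debt, g = nominal GDP
    # growth.  The equilibrium ratio is d / (g - i), as in equation (1).

    def debt_to_gdp_path(d, g, i, b0, years):
        """Return the Debt/GDP ratio at the end of each year, starting from ratio b0."""
        gdp = 100.0                     # scale starting GDP to 100 for convenience
        debt = b0 * gdp
        path = []
        for _ in range(years):
            debt = debt * (1 + i) + d * gdp   # interest accrues, primary deficit is added
            gdp *= (1 + g)
            path.append(debt / gdp)
        return path

    # Example from the text: d = 5%, g = 4%, i = 1%  ->  equilibrium of roughly 167% of GDP
    print(round(0.05 / (0.04 - 0.01), 3))     # 1.667

    # Raising i to 3% pushes the equilibrium to 500% of GDP
    print(round(0.05 / (0.04 - 0.03), 3))     # 5.0

    # Starting from 200% of GDP with i = 3%, convergence toward 500% takes centuries
    path = debt_to_gdp_path(d=0.05, g=0.04, i=0.03, b0=2.0, years=400)
    for year in (100, 200, 400):
        print(year, round(path[year - 1], 2))   # ~3.86, ~4.57, ~4.94

In this sketch the primary deficit is applied to the prior year’s GDP and interest compounds annually; different conventions shift the path slightly, but they do not change the equilibrium.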

Now, in countries such as Greece and Ecuador, where the government debt is denominated in an external currency, large accumulation of government debt can be dangerous.  The interest rate paid on the debt is set directly by lenders in the market, and therefore the equivalent of a fiscal “bank run” can develop, in which lenders, concerned with rising debt, demand higher interest rates in order to lend, which causes deterioration in the fiscal budget, which increases concern among lenders, which leads them to demand even higher interest rates in order to lend, which causes further deterioration, and so on.

Fortunately, the countries outside of the Eurozone that currently need fiscal stimulus–the U.S., Japan, and the U.K.–have sovereign control over the currencies that their debts are denominated in.  They can therefore set the interest rates on their debts as low as they want to, short-circuiting any attempted run on their finances.  Of course, the consequence of setting interest rates too low might be an inflation–a “run” in a different form–but a direct run, occurring in the form of lenders refusing to lend, can always be stopped.  This point will be addressed in further detail in subsequent sections.

Myth #3 — Large Government Deficits Lead to Rising Interest Rates

It’s often assumed that if a government accumulates debts in large amounts, the interest rate that the market will demand in order to lend to that government will rise, not only in nominal terms, but in real terms. This assumption is based on two feared dynamics:

  • Positive Feedback Loop: Rising debt implies rising default risk, which causes lenders to demand higher interest rates in order to lend, which worsens the fiscal picture, which increases the default risk, which causes lenders to demand even higher interest rates in order to lend, and so on.
  • Excessive Supply: The government has to find people willing to hold its debt in asset portfolios.  As its debt increases, finding the needed quantity of willing holders becomes more difficult.  Higher interest rates then have to be offered in order to attract such holders.

These concerns are refuted by actual experience.  With the exception of Greece, a country tied down to an external monetary standard from which it is expected to eventually exit, the countries with the highest debt levels in the world relative to GDP–the U.S., Japan, the U.K., and the rest of the Eurozone–are able to borrow at the lowest interest rates in the world.  This relationship isn’t a recent phenomenon–it was observed in past eras as well.  The U.S. and the U.K. accumulated very large government debts relative to GDP in the 1940s. But interest rates during that decade stayed very low–indeed, at record lows–and remained low for more than a decade afterwards.

High debt levels relative to GDP and low interest rates are often observed together because they share a number of the same potential causes.  Persistently weak nominal economic growth, for example, calls for policymakers to lower interest rates.  It also demands more fiscal stimulus, fueling a faster increase in government debt.  The debt-to-GDP ratio itself grows faster because the growth in the denominator of the expression stalls, as debt continues to be added on to the numerator.

Those who fear that large government debt accumulation will put upward pressure on interest rates do not fully understand how interest rates work.  Interest rates in a given currency are ultimately determined by the issuer of that currency.  The mechanisms can be different in different types of systems, but the issuer usually controls the interest rate by expanding or reducing the quantity of loanable funds in the system, which moves the interest rate down or up in accordance with the dynamic of supply and demand.  If the issuer wants to, it can go so far as to create and lend out money directly, at whatever rate it wants, to whomever it wants.

Rising interest rates, then, are not a meaningful risk for a government that owes debt in a currency that it issues.  The government itself gets to determine what its interest rate is going to be.  In practice, the government is going to set interest rates at the minimum level that keeps inflation on target.  Fortunately, in an economy that is suffering from structural weakness in aggregate demand, that level will tend to be very low, allowing for the servicing of a large debt.

The best way to illustrate these points is with a concrete example.  Suppose that in watching the national debt expand indefinitely, lenders in the U.S. were to become afraid of an eventual government default–not immediately, but at some point in the future.  This highly irrational fear would initially manifest itself as a refusal to hold long-dated U.S. treasury securities.  The yields on those securities would therefore rise.  But who cares?  The U.S. government does not need to borrow at the long end of the curve.  It can borrow at the short end–indeed, it should borrow at the short end, to save itself money.  When it borrows at the long end, it has to pay a term premium to lenders, compensation for requiring them to commit to the loan for an extended period of time.  Paying this premium would make sense if the commitment were of value to the U.S. government–if it created space that made it easier for the U.S. government to find lenders willing to refinance its debt when the debt comes due.  But the commitment obviously doesn’t have that value, because the U.S. government doesn’t need to find lenders willing to refinance its debt–it effectively has the power to lend to itself.

If the fear of default were to grow acute, it would manifest itself in the form of a refusal on the part of investors to hold short-dated U.S. treasury securities.  The yields on those securities would therefore rise.  What happens next would depend on whether banks retained confidence in the government’s willingness and ability to make good on the securities.

Suppose that banks were to retain that confidence.  The rise in short-dated treasury yields would then create an immediate arbitrage opportunity for them to exploit.  Recall that the Federal Reserve sets the cost of funding for banks by manipulating the aggregate supply of excess reserves available to be lent between individual banks overnight to meet reserve requirements, and by setting the rate on loans made directly to banks for that purpose, through the discount window.  This cost of funding is essentially the short-term interest rate for the U.S. economy.  If the yield on short-term treasury securities were to spike substantially above that rate, then banks could borrow at that rate, use the borrowed funds to buy short-term treasury securities, and collect the spread as profit. Assuming that the U.S. government were able and willing to make good on the securities, this profit would accrue completely risk-free–with zero credit risk and zero duration risk.

It’s tempting to think that if banks were to try to exploit this arbitrage, stepping in and buying high-yield treasuries, they would have to curtail their loans to the rest of the private sector, to make regulatory space for the loans that they are effectively making to the government.  But this is wrong.  With respect to regulatory capital ratios, treasury securities have a zero risk weighting.  Banks have regulatory space to borrow to buy them in whatever quantity they want, without having to alter any other aspect of their balance sheets or their operations.
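
To put rough numbers on the trade just described, here is a small sketch of my own, using hypothetical rates and a hypothetical position size (none of these figures come from the text):

    # A bank borrows at the overnight funding rate and buys a short-dated treasury
    # security yielding more, pocketing the spread.  Because treasuries carry a zero
    # risk weighting, the position does not consume risk-based regulatory capital.
    # All numbers are hypothetical, and the rates are held fixed for one year for simplicity.

    funding_rate = 0.0025        # hypothetical overnight funding cost (0.25%)
    treasury_yield = 0.0125      # hypothetical "spiked" yield on a short-dated treasury (1.25%)
    notional = 1_000_000_000     # hypothetical $1B position

    annual_carry = notional * (treasury_yield - funding_rate)
    print(f"Carry on the position: ${annual_carry:,.0f} per year")   # $10,000,000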

But even if banks were unwilling to hold treasury securities, the U.S. government’s ability to borrow would still remain unconstrained.  For the Federal Reserve could solve the problem by directly purchasing the securities in the market, pushing up on their prices and down on their yields.  Surely, banks and investors would be comfortable holding the debt of the U.S. government if they knew that the Federal Reserve was in the secondary market, willing to buy it at any price.

To take the example to the maximum extreme, even if banks and investors had become so afraid that they were unwilling to buy treasury securities at a discount, despite knowing that the Federal Reserve stood ready to buy those securities from them in the secondary market at par, Congress could simply modify the Federal Reserve Act to allow the Federal Reserve to buy securities directly from the treasury, lending to the treasury directly, without using the secondary market as a conduit.

The point, then, is this. Governments control the interest rates of currencies they issue. The path may be cumbersome, but if a government owes debt in a currency that it has the power to issue, then it can set the interest rate that it pays on that debt as low as it wants, including at zero, which is the interest rate that it pays when it finances itself by issuing money proper.  A situation in which rising interest rates force a currency-issuing government to default is therefore completely out of the question.

Myth #4 — Large Government Deficits Are Necessarily Inflationary

It’s true that large government deficits, and the large equilibrium government debts that those deficits produce, can lead to inflation.  But, as we will explain in the next section, the path is not direct.

As Hume explained in an earlier quote, inflation requires excessive spending–demand that exceeds the economy’s capacity to supply it.  Large government deficits can certainly be used to finance excessive spending on the part of the government, and such spending can certainly be inflationary.  But it’s not the deficits themselves that produce the inflation, it’s the excessive spending on the part of the government.

Similarly, large government deficits can be used to finance tax cuts and transfers that increase private sector income, leading to excessive private sector spending and eventual inflation. But again, it’s not the deficits themselves that produce the inflation, it’s the excessive private sector spending.  In cases where deficit-financed tax cuts and government transfers are put in place, but do not lead to excessive private sector spending, either because the proceeds are saved, or because there is an output gap, the result will not be inflation.

In summary, large government deficits, run for indefinite periods of time, can provide free lunches, are sustainable, do not lead to rising interest rates, and are not inherently inflationary. The implication, then, is that a policy of fiscal inflation targeting need only focus on its target, the inflation rate, and that the quantity of government debt that it leaves behind can be ignored. This implication is true in most cases, but it’s not true in all cases. The debt that the policy accumulates can, in theory, become a problem in the future. It’s important that we explain how it can become a problem, so that the risks of the policy are not misunderstood.  The sections that follow are devoted to providing that explanation.

Stock and Flow: The Inflationary Mechanism of Government Debt

To better understand the inflationary dynamics of government debt, we need to make a distinction between “stock” and “flow.”  Stock refers to the amount of something that exists; flow refers to the amount of something that moves in a given period of time.


Imagine a collection of bees swarming around in a cage.  The number of bees in the cage is a stock quantity.  The number of bees that manage to escape from the cage each hour is a flow quantity.

We define money as “legal tender”–whatever must be accepted, by law, to repay debts, public and private.  We define financial wealth as money, or anything that can be readily converted into money in a market, considered in terms of its current market value, minus whatever liabilities are tied to it.  Because financial wealth can be readily converted into money, we can treat it as the functional equivalent of money.

In an economic context, financial wealth has a stock aspect and a flow aspect. The stock aspect is the total amount of it that exists.  The flow aspect is the total amount of it that circulates in the form of expenditures in a given period of time.  Note that by “expenditure” we mean the exchange of money for real goods and services.  The trading of money for other forms of financial wealth is not included, though such trading may occur as part of the process of an expenditure (e.g., I sell a stock in my portfolio to raise money to buy a car).

When a government spends money on a purchase or project, it creates a one-time flow of spending.  This one-time flow takes place regardless of how the spending is funded or financed.  If the flow is inserted into an economy that does not have available resources from which to supply the added demand, the flow will tend to be inflationary.

By “paying for” the spending through taxation, a government can potentially reduce private sector spending flows, “making room” in the economy for its own spending to occur without inflation.  But not all taxation reduces private sector spending flows equally. When taxes are levied on individuals that have a low marginal propensity to spend, the spending flows of the private sector tend to not be substantially affected.  To quote Hume, the taxed money is money that would have remained “locked up in chests” anyway. Taxing it therefore does not free up resources for the government to use.  It is only when taxes are levied on individuals that have a high marginal propensity to spend that taxation reliably frees up resources and offsets the inflationary effects of government spending.

When a government chooses not to “pay for” its spending through taxes, and instead finances its spending with debt or money creation, a stock effect is added to the one-time flow effect of the spending.  Because the spending is never redeemed in taxes, new forms of financial wealth–debt securities and money proper–end up being permanently added to the system as residuals.  The residuals are left without any individual liabilities to offset them.  Of course, they are offset by the liability of government debt, a liability that the private sector bears collective responsibility for.  But, crucially, individuals in the private sector do not view government debt as their own personal liabilities, and therefore do not count it in tallies of their own personal net worth.  Consequently, the net financial wealth of the private sector, as tallied by the individuals therein, increases.

Now, when you increase the stock of something, you tend to also increase its flow, all else equal.  If you increase the stock of bees in a cage, and you change nothing else, you will tend to also increase the number of bees that fly out of the cage as time passes.  To illustrate, suppose that there are 1,000 bees in a cage.  Suppose further that the statistical probability that a given bee will escape in an hour is 0.2%.  How many bees will escape each hour?  The answer: 1,000 * 0.2% = 2.  If you hold that probability constant, and you double the number of bees in the cage to 2,000, how many bees will escape each hour? The answer: 2,000 * 0.2% = 4.  So we see that with the escape probabilities held constant, doubling the stock doubles the flow.

A similar concept applies to financial wealth.  If the net stock of financial wealth in an economy is increased, and if the probability that a given unit of financial wealth will be spent per unit time remains constant through the change, then the amount of financial wealth that circulates in the form of expenditures per unit time–aggregate demand–will rise.

That’s where the true inflationary effect of government debt accumulation lies. Government debt is an asset of the private sector.  When it is held by the central bank, it takes the form of money in the hands of the private sector (the money that the central bank had to create to buy those securities).  When it is held directly by the private sector, it takes the form of government debt securities.  Thus, when government debt is increased, the private sector gains an asset (money or debt securities) without gaining any liabilities (at least not any that it views as such).  It follows that when government debt is increased, the total net financial wealth of the private sector is increased.  Increases in net financial wealth tend to produce increases in spending, and excessive spending can generate inflation.

In the next two sections, I’m going to introduce two related concepts that will be useful in our efforts to understand the inflationary potential of government debt.  Those two concepts are: wealth velocity and wealth capacity.

Wealth Velocity: A New Equation of Exchange

Readers with a background in economics are likely to be familiar with the equation of exchange:

(4) M * V = P * Q

The equation of exchange translates money stock into money flow, using the ratio between them, money velocity.  Here, M is the money supply (the total stock of money in the economy), V is the money velocity (the percentage of the money stock that is spent–i.e., that flows–in a given year, or equivalently, the probability that a given unit of money will be spent in a given year), P is the price index (the conversion factor between nominal dollars and real things), Q is real output (the flow of real things). Note that P * Q is the total nominal spending in the economy.  The equation tells us, trivially, that the total amount of money in the economy, times the probability that a given unit of money will be spent in a given year, gives the total amount of spending in a given year, on average.

The problem with the equation of exchange is that the stock quantity that has the deepest relationship to the flow of spending is not the total stock of money proper, but the total stock of net financial wealth in the economy–money plus everything that can be easily converted into money, considered in terms of its marketable monetary value, minus the monetary value of all debt obligations.  (Note: for convenience, we have often been omitting the “net” term, using the terms “net financial wealth” and “financial wealth” to mean the same thing: money plus marketable assets minus debts.)

The best way to write the equation of exchange, then, is not in terms of M, the total stock of money in the economy, but in terms of W, the total stock of financial wealth in the economy:

(5) W * Vw = P * Q

Here, Vw is the wealth velocity–the corollary to the money velocity in the original equation of exchange. We can define it as the percentage of financial wealth in existence that gets spent each year, or equivalently, as the probability that a unit of financial wealth will be spent in a given year.  The equation tells us, again trivially, that the total quantity of financial wealth in the economy, times the probability that a given unit of financial wealth will be spent in a given year, gives the total amount of spending that will occur in a given year, on average.

When financial wealth is injected into the private sector through a government deficit, some or all of the wealth may accumulate idly as savings.  Wealth velocity–Vw in the equation–will then go down, fully or partially offsetting the increase in W, the stock of wealth.  That’s how large government debt accumulation can occur over time without inflation. The wealth is continually injected via debt accumulation, but the injections coincide with reductions in the velocity of wealth, such that total spending does not increase by a sufficient amount to exceed the productive capacity of the economy and produce inflation.

The problem, of course, is that conditions in the economy can change over time, such that the wealth velocity increases, reverting to its pre-injection value, or rising to some other value.  The prior stock of wealth that was injected, which had been “quiet” because it was being held idly, may then start circulating, and contribute to an inflation.
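
To make this dynamic concrete, here is a minimal sketch in Python (my own illustration, with made-up figures) of how equation (5) plays out when an injection of wealth is offset by a fall in wealth velocity, and what happens when that velocity later reverts:

    # Equation (5): nominal spending per year = W * Vw, where W is the stock of
    # financial wealth and Vw is the wealth velocity.  Figures are hypothetical,
    # with wealth expressed as a percentage of potential GDP.

    def nominal_spending(wealth, wealth_velocity):
        """Total expenditure per year implied by the stock of financial wealth."""
        return wealth * wealth_velocity

    # Starting point: wealth of 400% of GDP circulating at 25% per year.
    print(nominal_spending(400, 0.25))   # 100.0 -> spending equal to 100% of GDP

    # Deficits inject 100 units of wealth, but the recipients hold it idly, so the
    # velocity falls in step and total spending is unchanged: no inflation results.
    print(nominal_spending(500, 0.20))   # 100.0

    # Years later, conditions change and velocity reverts to its old value.  The
    # previously "quiet" stock now circulates, and spending jumps by 25%.
    print(nominal_spending(500, 0.25))   # 125.0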

Wealth Capacity: An Analogy from Thermodynamics

In thermodynamics, there is the concept of “heat capacity.”  The heat capacity of a substance specifies the amount that its temperature will increase when a given amount of heat is added to it.  If its temperature will only increase by a small amount in response to a given injection of heat, then we say that it has a high heat capacity.  If its temperature will increase by a large amount in response to a given injection of heat, then we say that it has a low heat capacity.

Water has a high heat capacity.  You can add a relatively large amount of heat to a unit of water, and yet its temperature will not increase by very much.  Iron, in contrast, has a low heat capacity.  If you add the same amount of heat to a unit of iron, its temperature will increase significantly.  Intuitively, we can think of the temperature increase as the heat “overflowing” from the iron, which is a poor store of heat.  The heat does not “overflow” from water to the same extent, because water is a good store of heat.

If the reader will allow us to be somewhat sloppy, we can extend the thermodynamic concept of “heat capacity” to economics, naming the analogous property “wealth capacity.” At a given interest rate, how much financial wealth can an economy store without overheating?  The answer, which is determined by the wealth velocity that will manifest at that interest rate, and the total productive capacity of the economy, specifies the economy’s wealth capacity.  Importantly, an economy’s wealth capacity is specified as a percentage of its potential GDP, which then incorporates productive capacity into the expression.  So, at a given interest rate, an economy might have a wealth capacity of 50% of potential GDP, or 100% of potential GDP, or 200% of potential GDP, and so on.

If an economy has a wealth capacity that exceeds its current quantity of wealth, then it can hold additional financial wealth, and therefore its government can accumulate debt without inflation occurring.  Conversely, if an economy has a wealth capacity equal to or less than its current quantity of financial wealth, then it will not be able to hold additional financial wealth, and therefore government debt accumulation, which involves the injection of financial wealth, will be inflationary.

To make the same point that we made with wealth velocity, wealth capacity is not static, but changes in response to changing macroeconomic conditions.  The risk, then, is that a large injection of wealth will be made, and the economy will appear to be able to absorb it, without inflation.  But over time, as macroeconomic conditions change, the wealth capacity may fall, such that the prior injection, which pushed up the stock of wealth, contributes to an inflation.  That’s why the question of the appropriate level of government debt in an economy is not simply a question of how much financial wealth it can store right now, but a question of how much financial wealth it will be able to store over the long-term, under the different conditions that it will come to face.

The Factors that Influence Wealth Velocity and Wealth Capacity

Though they are not perfect inverses, wealth velocity and wealth capacity are inversely related to each other.  All else equal, high wealth velocity is associated with low wealth capacity, and low wealth velocity is associated with high wealth capacity.

The following factors influence an economy’s wealth velocity and wealth capacity:

Confidence.  The value of the money that a country issues rests on the confidence that the money will serve as an effective store of value.  As that confidence breaks down, whether in the face of mismanagement or conflict, individuals will be less willing to hold the money, more inclined to spend it in a rush, invest it in real assets, or transfer it abroad.  Thus, as confidence in a country’s money breaks down, its wealth velocity will tend to rise, and its wealth capacity will tend to fall.  The same is true in the other direction.  As a country’s money becomes more credible, more like a reserve currency that the world trusts as a long-term store of value, individuals will be more willing to hold it.  The country’s wealth velocity will then fall, and its wealth capacity will rise.

Location of Homestead.  Do the owners of a country’s financial wealth–the stock of money and marketable securities contained in the country’s economy–live and spend the bulk of their money in that country?  Or do they live and spend the bulk of their money in some other country?  Are they holding wealth in that country because that country is where their responsibilities–financial and otherwise–are located?  Or are they holding wealth in that country because they see an opportunity for a “hot money” profit?  If the former, there will be a greater willingness on the part of the owners of the economy’s financial wealth to hold it, rather than deploy it into consumption or investment, given that the safety and security that comes with holding it will actually be relevant to the owners’ lives.  It follows that the economy will have a lower wealth velocity and a higher wealth capacity (h/t to @Chris_Arnade for this insightful observation).

This point is important, so let me give an example.  Suppose that I am a Japanese businessman who owns a significant quantity of financial wealth.  Regardless of what happens to the yen’s exchange rate to other currencies, psychologically, I measure the entire financial world in terms of yen.  I fund my lifestyle with yen, and all of my debts are in yen.  Having a substantial quantity of yen in savings therefore provides me with a special type of insurance.  It helps ensure that I stay rich, by my own measure of “richness.”  It helps ensure that I retain the ability to fund my lifestyle, and meet my debts.  The fact that there are many people like me, controlling large amounts of Japanese financial wealth, makes it easier for the Japanese government to inject large quantities of yen and yen-denominated securities into the Japanese economy, and find people willing to hold them rather than spend them, even at low rates of return, so that inflation stays low.

But if I am a wealthy Japanese businessman holding my financial wealth in the form of Uruguayan pesos, I do not get the same insurance.  Having a substantial quantity of Uruguayan pesos in reserve does not reduce the likelihood that I might one day lose the ability to fund my yen-denominated lifestyle, or pay my yen-denominated debts–at least not to the same extent.  And so if the owners of Uruguay’s financial wealth are all people like me, people speculating from abroad, the Uruguayan government will not be able to inject large quantities of new pesos and peso-denominated securities into the economy at low rates of return, and have people like me continue to hold them.  We will want to trade them for higher-yielding assets in the Uruguayan economy–that’s why we’re involved with Uruguay in the first place, to get a return.  If the prices of those assets get pushed up to unattractive levels in response to the wealth injection, we–or those wealthy foreigners that end up holding our pesos after we sell them–will opt to create new assets in the Uruguayan economy through investment, rather than hold Uruguayan money without compensation.  The result will be inflationary pressure.

Distribution of Financial Wealth.  An economy’s wealth capacity will tend to be higher, and its wealth velocity lower, if the distribution of financial wealth within it is narrow rather than broad.  The reason is obvious.  As an individual’s level of financial wealth increases, her propensity to put additional financial wealth that she comes upon into consumptive use goes down.  It follows that an economy in which the financial wealth is narrowly distributed among a small number of wealthy people will have a lower propensity to consume additional financial wealth, and therefore a higher financial wealth capacity and lower wealth velocity, than an economy where the financial wealth is broadly distributed among the entire population.

Target of the Financial Wealth Injection.  When a government deficit is used to inject financial wealth into an economy, into whose hands is that wealth injected?  Is it injected into the hands of wealth-lacking people who need it to fund their desired lifestyles, and are eager to spend it?  Or is it injected into the hands of wealth-saturated individuals who do not need it to fund their desired lifestyles, and who are not eager to spend it?  The velocity of the wealth–the speed at which it will circulate–will obviously be higher under the former than the latter.

Tendency of Financial Wealth to Collect in Spots.  When wealth is injected and spent, does it continue to move around the economy in a sustained cycle of spending, or does it eventually collect idly in a certain spot, where it gets hoarded?  If it continues to move around, then the wealth capacity will be low; if it tends to collect idly in a certain spot–e.g., in the corporate sector, where it gets hoarded as profit after the first expenditure–then the wealth capacity will be high.

Consumptiveness.  The consumptiveness of an economy refers to the extent to which the individuals that make up the economy are inclined to consume incremental income rather than hold it idly.  All else equal, an economy with higher consumptiveness will have a higher wealth velocity and a lower wealth capacity.  An economy’s consumptiveness is influenced by a number of factors, to include its history, its culture, its demographics, the prevailing sentiment among its consumers, and so on.

Investment Risk Appetite.  The simple fact that an economy likes to save does not mean that it will have a high wealth capacity.  For there are two ways to save.  One can save by holding money idly (or trading it for existing securities, in which case someone else, the seller of those securities, ends up holding it idly), or one can save by investing it in the creation of new assets–new projects, new technologies, new structures, and so on.  The distinction between saving by holding and saving by investing is important because saving by investing involves spending that puts demand on the economy’s existing labor and capital resources. It can therefore contribute to inflation and economic overheating, even as it increases the economy’s productive capacity.

The term “investment risk appetite” refers to the inclination of an economy to save by investing, rather than by holding.  An economy with high investment risk appetite will have a lower wealth capacity than an economy with low investment risk appetite.  As with consumptiveness, an economy’s investment risk appetite is influenced by a number of factors, to include its history, its culture, its demographics, the sentiment and prevailing outlook of its investors, and so on.

In contrast to consumption, the risk that investment will produce inflation is alleviated by the fact that investment adds new assets, new resources, new productivities that the economy can use to supply the additional consumption demand that will be created.  But investment does not deliver those assets, resources, and productivities immediately–there is a time delay.  Moreover, the investment may not be adequate, or appropriately targeted, to supply the additional consumption demand that will be created.  And so inflation and economic overheating are still possible.

The final factor that influences wealth velocity and wealth capacity is the interest rate, a factor that policymakers have direct control over.  Higher interest rates are associated with lower wealth velocity and higher wealth capacity, and lower interest rates are associated with higher wealth velocity and lower wealth capacity.  We discuss this relationship further in the next section.

Interest Rates: The Economy’s Inflation-Control Lever

If pressed, those that are concerned about the risks of large government debt accumulation will usually accept the point that governments can inject financial wealth–new money and debt securities–into the economy without creating inflation, provided that the recipients of that wealth choose to hold it idly rather than convert it into some kind of spending.  But they point out that the simple possibility that the injected financial wealth could be spent, and produce inflation, is reason enough not to make the injection.  To make the injection would be to give the recipients the power to spend, and therefore the power to consume at the expense of other savers, who would lose out in the resulting inflation.

The problem with this point is that economic participants already have the power to consume at the expense of savers.  They can accelerate their consumption, by spending more of what they earn, or by borrowing.  The acceleration will stimulate inflation, which will occur at the expense of those that have chosen to save.  So the inflation risk introduced by government debt accumulation, and associated private sector wealth injection, is a risk that already exists at the current level of government debt, and that would exist at any level of government debt.

For an economy that contains a given quantity of financial wealth, the amount of inflation that it experiences will be determined by the balance of “holding money idly” vs. “deploying money into consumption and investment” that takes place within it.  That balance can shift at any time.  Fortunately, governments have a tool that they can use to manage the balance, so as to control inflation.  That tool is the interest rate.  The interest rate is the expense that individuals must pay in order to borrow money to consume and invest. Conversely, the interest rate is the reward that individuals receive in exchange for holding money idly, rather than deploying it into consumption or investment.  When individuals hold money idly, they take it out of circulation, where it cannot contribute to inflation.

When an economy is suffering from too much activity, the government will set the interest rate at a high level, providing generous compensation to whoever willingly agrees to hold money idly.  When an economy is suffering from too little activity, the government will set the interest rate at a low level, removing the reward–or worse, imposing a punishment–on whoever chooses to hold money idly.

In terms of the risks of large government debt accumulation, the primary risk is that the debt will make it more difficult for policymakers to change interest rates in response to changing economic conditions–changing wealth velocities and wealth capacities.  When interest rates are increased in the presence of an overwhelmingly large government debt, the interest expense that the government incurs on that debt increases.  If the increased expense is funded with tax increases and spending cuts, the economy will suffer a destabilizing effect, both economically and politically.  If the increase is funded with additional debt, the result will be added inflationary pressure, because government debt accumulation entails additional private sector wealth injection.
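
To see why this matters quantitatively, consider a quick back-of-the-envelope calculation of my own, using the hypothetical 500% debt-to-GDP figure from the earlier equilibrium example: once the debt has repriced, each percentage point added to the policy rate adds annual interest expense equal to the debt-to-GDP ratio times that increment.

    # Hypothetical illustration: with debt at 500% of GDP, a 2-percentage-point rate
    # hike eventually adds interest expense equal to 10% of GDP per year, which must
    # be funded either with taxes and spending cuts or with yet more debt issuance.

    debt_to_gdp = 5.00      # 500% of GDP, as in the earlier equilibrium example
    rate_hike = 0.02        # a 2-percentage-point increase in the policy rate

    added_expense = debt_to_gdp * rate_hike
    print(f"Added interest expense: {added_expense:.0%} of GDP per year")   # 10% of GDP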

To come back to the example of Japan, right now, Japan is underutilizing its labor and capital resources, a fact demonstrated by its 0% inflation rate.  In terms of stimulus, Japan would benefit from a large fiscal deficit, run over the next several years, if not for longer. But conditions in the Japanese economy might one day change, to where the population’s propensity to consume and invest, rather than hold savings idle, meaningfully increases. If that propensity does change, and if Japan has an enormous government debt to finance when it does, then controlling inflation with interest rate increases could become difficult, if not impossible. I explore this scenario in a later section.

Asset Prices: A Second Inflation-Control Lever

It turns out that there is an additional channel through which interest rates can influence inflation.  Interest rates–specifically, the interest rate paid on cash (money and very short-term, low-risk debt securities)–affect the prices of all existing assets.  Rising interest rates make the return on cash more competitive with the return on existing assets, and therefore tend to cause the prices of those assets, in cash terms, to fall.  Conversely, falling interest rates make the return on cash less competitive with the return on existing assets, and therefore tend to cause the prices of those assets, in cash terms, to rise.
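To make the mechanism concrete, here's a minimal sketch with hypothetical numbers of my own choosing (a fixed $5 annual payment and an assumed constant 3% risk premium over cash), showing how the price of an existing asset falls as the cash rate rises:

```python
# Minimal sketch: the price of an existing asset that pays a fixed $5 per year forever,
# valued so that its yield equals the cash rate plus a constant 3% risk premium.
# All numbers here are hypothetical, chosen only to show the direction of the effect.

annual_payment = 5.0   # fixed cash flow the asset pays each year
risk_premium = 0.03    # extra yield investors demand over cash (assumed constant)

for cash_rate in [0.00, 0.02, 0.04, 0.06]:
    required_yield = cash_rate + risk_premium
    price = annual_payment / required_yield  # perpetuity pricing
    print(f"cash rate {cash_rate:.0%} -> required yield {required_yield:.0%}, price ${price:,.2f}")

# As the cash rate rises from 0% to 6%, the price falls from ~$167 to ~$56:
# a more competitive return on cash lowers the cash price of existing assets.
```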

Crucially, the quantity of financial wealth in an economy includes not just money proper, but all forms of financial wealth contained within it–debt securities, equity securities, real estate, collectibles, and so on.  When the prices of these assets rise, the stock of financial wealth in the economy rises–not only in a gross sense, but in a net sense, because the liabilities that the assets match to don’t increase in the same way.  The increase in the stock of financial wealth has the potential to lead to an increase in its flow–spending.

The pass-through from asset values to spending tends to be weak with respect to debt and equity securities, because those securities tend to be owned by small, wealthy segments of the population that have a low marginal propensity to spend, and because changes in the prices of the securities aren’t interpreted or trusted to be permanent. But in asset markets where ownership is more evenly distributed across the population, and where the upward price trend is interpreted to be more stable and reliable–in housing markets, for example–the pass-through can be significant.

Trapped Monetary Policy: How Things Can Go Wrong

The best way to illustrate the risk of large government debt accumulation is to use an extreme example.  So here we go.  Suppose that Japan has implemented our recommended policy of 2% fiscal inflation targeting. Suppose further that conditions in Japan are such that to maintain the economy on a 2% inflation target, the country needs to run a 20% deficit with interest rates at 0%.  Suppose finally that the long-term nominal growth rate under the given conditions will be 3%. Assuming that Japan starts with net government debt at its current value, roughly 134% of GDP, the debt would rise to roughly 500% by 2055 and 667% at equilibrium (20% / 3%), to be reached well over 100 years from now.
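To check the debt arithmetic, here's a minimal sketch of the path, under timing conventions I've chosen for illustration (the deficit each year equals 20% of that year's starting GDP, and nominal GDP grows 3% per year; other conventions shift the equilibrium slightly):

```python
# Minimal sketch of the debt path described above.  Assumptions (mine, for
# illustration): the deficit each year equals 20% of that year's starting GDP,
# nominal GDP grows 3% per year, and net debt starts at 134% of GDP in 2015.

deficit_ratio = 0.20    # deficit as a share of GDP
nominal_growth = 0.03   # nominal GDP growth rate
gdp = 1.0               # normalize starting GDP to 1
debt = 1.34             # starting net debt, 134% of GDP

for _ in range(40):              # 2015 through 2055
    debt += deficit_ratio * gdp  # add the year's deficit to the debt stock
    gdp *= 1 + nominal_growth    # grow the economy

print(f"debt-to-GDP after 40 years: {debt / gdp:.0%}")                     # ~500%
print(f"equilibrium debt-to-GDP:    {deficit_ratio / nominal_growth:.0%}")  # 20% / 3% ~ 667%
```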

Now, let’s fast forward to 2055.  By then, conditions in the Japanese economy may have changed.  After 40 years of sustained 2% inflation, the cultural propensity of Japanese consumers to spend rather than save, and of Japanese savers to invest in the real economy rather than hold yen idle, may have increased.  The demographic profile may have improved.  Much of the financial wealth injected by the deficits may have “leaked”, through channels of consumption, investment, inheritance, and charity, from high net worth segments of the economy, where it was pooling, to lower net worth segments of the economy, where the marginal propensity to spend it will be higher.

In practice, these changes, if they were to occur, would be expected to occur gradually, allowing the central bank time to adjust fiscal and monetary policy so as to accommodate them.  But to make the dynamic vivid, let’s assume, for the sake of argument, that the changes occur instantaneously, in the year 2055.  To summarize the scenario, Japan runs a 20% deficit for 40 years, accumulates a government debt worth 500% of GDP, and then suddenly, the macroeconomic backdrop changes.

To frame the dynamic in terms of wealth velocity, the changes, when they occur, will cause the wealth velocity in Japan to increase significantly.  The significant increase in wealth velocity will apply to a very large stock of financial wealth–500% of GDP–and will likely push the economy’s total spending to levels that will exceed its available labor and capital resources.  The result will be inflationary pressure.  To frame the dynamic in terms of wealth capacity, the changes, when they occur, will significantly reduce Japan’s capacity to store financial wealth. The country will no longer be able to hold money and government debt securities worth 500% of GDP at zero interest rates.  The result, again, will be inflationary pressure.

How will the government deal with this pressure?  Clearly, it will need to start by raising taxes and cutting spending, so as to bring its enormous 20% deficit down to something small.  But that will only remove future injections of financial wealth; it will not reverse the prior injections, the potential circulation of which represents the primary inflationary threat.  To reverse the prior injections, the government would have to run a surplus, and there is no way that surpluses sufficient to unwind more than 300% of GDP in added financial wealth could be run in any reasonable amount of time.

Ultimately, the only way to prevent the large stock of wealth from circulating and stirring up inflation would be through an increase in the interest rate.  By increasing the interest rate, the BOJ would give Japanese wealth owners the needed incentive to hold the large supply of yen cash and debt securities that will need to be held.  Without that incentive, the supply will be tossed around like a hot potato, fueling an inflation. (Note: the prices of the debt securities would fall in response to the interest rate increase, causing their yields to rise and making them more attractive to hold, and also lowering the total market value of financial wealth in the economy).

Let’s suppose that in order to manage the inflationary pressure, the BOJ would need to raise interest rates from 0% to 7%.  At an interest rate of 7% and a total debt stock of 500%, the government’s added debt expense would amount to 500% *  7% = 35% of GDP. That’s an enormous expense.  Where would the money to fund it come from?

The government could try to increase taxes or cut spending to make room to pay it, but the necessary tax increases and spending cuts would total 35% of GDP–an amount that would be extremely difficult to extract from the economy, and highly economically and politically destabilizing if a way to extract it were found.

It’s likely, then, that the Japanese government would have to borrow to pay the interest. But borrowing–by running a deficit–would entail the injection of a substantial quantity of new financial wealth–35% of GDP, if all of the interest were borrowed–into the already overheating private sector.  That injection–which would accrue to the holders of yen cash and debt securities in the form of  interest income–would represent an additional source of potentially unmanageable inflationary pressure, undermining the effort.

Now, to be fair, it’s possible that Japan could find some way to combine tax increases, spending cuts, and deficit borrowing to pay the interest expense.  But we can’t ignore the inflationary effect of the market’s likely behavioral response to the difficulty.  In practice, economic participants observing the country’s struggles would become increasingly averse to holding yen cash and yen-denominated debt securities.  This aversion would eventually become reflexively self-fulfilling.  Wealth velocity would then rise further, with wealth capacity falling further, adding further inflationary pressure.  Investors would seek to transfer wealth abroad, causing the yen to depreciate relative to other currencies. The depreciation would make Japanese goods more attractive to the world, again, adding further inflationary pressure.

We can see, then, how an eventual bout of high inflation might become unavoidable, even as the central bank does everything in its power to stop it.  The central bank’s preferred tool for combating inflation–the interest rate–would effectively be trapped by the enormous government debt.  Its use in the presence of that debt would exacerbate the inflation, or destabilize the economy, or both.

This type of scenario is not purely theoretical, but has actually played out in economic history, most notably in France in the 1920s.  In more benign contexts, there have been a number of examples of central banks that were forced to surrender to inflation, prevented by their heavily indebted sovereigns from setting interest rates at the levels needed to maintain control over it.  A well-known example is the United States in the years shortly after World War 2, where the Treasury effectively forced the Federal Reserve to hold interest rates near zero, despite double-digit inflation.

In each of these cases, the inflation proved to be a short, temporary occurrence, rather than an entrenched, long-lasting phenomenon.  Conveniently, the problem of the large government debt ended up correcting itself, because the inflation it provoked substantially reduced its value, both in real terms and relative to the nominal size of the economy and the tax base.  The outcome may have been unfair to those savers who ended up holding money at deeply negative real interest rates, but it did not entail any larger humanitarian harms.

Essentially all of the known historical cases in which large government debt has led to inflation have involved war.  One reason why we might expect that to be the case is that war temporarily disables substantial portions of the labor and capital stock of an economy, reducing its productive capacity.  But there’s an additional reason.  War is expensive: it requires the government to take on substantial quantities of debt.  Those quantities are taken on to fund an urgent activity, and so they are allowed to accumulate even as they contribute to excessive inflation.  Crucially, when the war ends, the accumulated stock of debt does not go away.  It is left as financial wealth in the private sector.  As people return home and begin life again, that wealth starts circulating in an inflationary manner–sometimes quickly.

Unlike in war, where government debt is taken on to fund an urgent activity, in fiscal inflation targeting, government debt is taken on in an effort to keep inflation on target, given structural weakness in aggregate demand.  Policymakers therefore have space to respond to early signs of rising inflationary pressure–signs that the structural weakness is abating.  If Japan were to implement fiscal inflation targeting, and needed to run a deficit worth 20% of GDP with rates at zero to achieve that target, the scenario would not play out as Japan running that deficit for 40 years without seeing any changes, and then suddenly seeing its wealth velocity rise dramatically and its wealth capacity fall dramatically.  Rather, the country would run high deficits while macroeconomic conditions gradually changed, with wealth velocity gradually rising and wealth capacity gradually falling.  The changes would show up in real-time measurements of inflation, and the central bank would respond by adjusting its fiscal stance–not in an emergency, after an unmanageable debt has already been accumulated, but gradually, as the process moves along.

Fiscal Inflation Targeting: A Cost-Benefit Analysis

With the potential inflationary risk of large government debt accumulation now specified–inflation via the mechanism explained in the prior section–we are in a position to weigh the costs of fiscal inflation targeting versus its benefits, and to outline the characteristics that would make an economy with weak aggregate demand and abnormally low inflation a good candidate for the policy.

First, the benefits.  Unlike monetary manifestations of the same concept, fiscal inflation targeting would actually work.  No more of the frustration of having to watch inflation consistently undershoot the central bank’s target (core inflation is below 2% in almost every developed economy in the world right now), and of having to bear with the central bank’s excuses for why that’s not a problem and its reasons for remaining optimistic that its targets will eventually be reached.  The central bank would no longer be limited to the use of a placebo, but would have an incredibly powerful tool at its disposal: the ability to inject financial wealth directly into the economy, targeting those segments of the economy where the injections would have the greatest effect.  This tool would give it the ability to forcefully defend its targets, an ability that it does not currently have.

Importantly, the central bank would be able to craft the injections around basic principles of fairness, distributing them broadly and evenly, so that everyone gets to partake in their benefits.  Contrast that with the current monetary approach to stimulus, quantitative easing, which does little more than inflate asset prices through a psychological effect, increasing the financial wealth of only those people who already own it, the people that are the least likely to need a boost.

The same ability would also allow the central bank to use monetary policy more efficiently for other purposes, such as the promotion of financial stability.  A central bank that had concerns about the financial stability risks of low interest rate policies–increases in private sector leverage, the emergence of asset bubbles, and so on–could run a tighter monetary policy alongside a looser fiscal policy, mitigating financial stability risks without causing the economy’s performance to fall below its potential.

The injections would not directly occupy any resources, as wasteful government spending might, but would instead leave it to the private sector to determine how resources are allocated.  Conducting the stimulus in this way would lead to more efficient labor and capital formation, maximizing the economy’s output over the long-run.

Now, the cost.  The cost is the risk that a debt will build up over time that is so large that it will obstruct the government’s ability to use monetary policy to control inflation as needed.  In the context of inflation targeting, this cost is mitigated by two factors:

  • The debt is being accumulated in response to structural economic weakness, rather than in response to some temporary urgency, such as war.  The likelihood of a rapid inflationary change in macroeconomic conditions–the kind of change, for example, that would occur when a war ends and when everyone returns home to start normal life again–is low.  If the economy’s wealth velocity and wealth capacity do change over time, they will tend to change gradually, affording the central bank space to respond by withdrawing its fiscal injections, or even reversing them by running surpluses. Crucially, with the central bank in control of fiscal policy, the political obstacles that usually get in the way of an appropriate response will not apply.  The central bank will be free to do whatever is needed, without having to worry about the impact on the next election.
  • The cost, even if it is incurred, is a brief, self-correcting event.  If the government’s debt prevents the central bank from raising interest rates as needed to control inflation, and if cyclical conditions improve, then the economy will simply have to endure a period of high inflation.  Conveniently, the period will reduce the value of the debt in real terms, and also relative to the nominal size of the economy and the tax base, which will substantially increase.

The cost-benefit analysis of fiscal inflation targeting is most attractive in the following type of economy:

  • An economy where the currency is issued by a government with a history of political and economic stability, backed by a disciplined and credible central bank.  Ideally, an economy that controls a reserve currency in which the rest of the world seeks to store its savings.
  • An economy whose stock of money and debt securities is held mostly in domestic hands, or in the hands of foreign holders that derive a special insurance benefit from holding the securities in lieu of securities issued by their own countries.
  •  An economy with a high level of wealth inequality, where wealth exhibits a tendency to pool in certain concentrated locations–for example, the corporate sector–when it tries to circulate.
  • An economy with low consumptiveness and low investment risk-appetite.
  • Most importantly, an economy with the above characteristics, where the characteristics have structural explanations, and are expected to remain in place for the long-term.

An economy that meets these criteria will have a high wealth capacity, and will retain that capacity over time, allowing for fiscal injections to be made that stimulate the economy and bring it to its inflation target over the short-term, without incurring undue risk of a larger inflation problem over the long-term.  Conveniently, the mature, developed, aging countries of the world that have the greatest need for ongoing fiscal stimulus–the U.S., Japan, the U.K., and the countries of the Eurozone, especially the creditor countries–exhibit most or all of these characteristics, and are therefore excellent candidates for the policy.

It’s impossible to conclusively know how large an economy’s wealth capacity is until it is reached.  And so, for all we know, a country like Japan might have the ability to hold money and debt securities worth 1000% of its potential GDP, or 2000%, or 3000%, or higher, with interest rates exactly where they are now–at zero–and not overheat.  The fact that the country has accumulated so much debt over time, and yet still struggles to stay out of deflation, suggests that the value is very high, much higher than economists assume, and definitely much higher than its current net debt of 134% of GDP.

The right approach for Japan, then, is to push the limits and find out.  The risk is an unlikely trapping of monetary policy that leads to a one-time inflation that eventually corrects itself; the reward is a likely discovery of a free lunch that changes the economic understanding of government debt forever, and that paves the way for a much-needed solution to the looming problem of demographic and secular stagnation, not only for Japan, but for the entire world that will eventually have to confront it.


The Trajectory of a Crash

It’s amazing to think that just last Monday, August 17th, the S&P 500 closed at 2102.  Today, it closed at 1868, falling 11.1% in 6 trading days.  The shocking speed of the decline has injected a level of fear into markets not seen since the fall of 2011, when the Eurozone debt crisis was reaching its apex.  Many traders have referenced 1987 as a paradigm for what might happen in a worst case scenario over the coming days and weeks, so I figured it would be interesting to explore where exactly a 1987 scenario would take us in terms of prices.

The chart below shows the hypothetical price trajectory of the S&P 500 over the next 2 years if the current market ends up performing an exact repeat of the 1987 crash, with Friday, October 2nd, 1987 as the crash starting point:

[Chart: hypothetical 2-year S&P 500 price trajectory replaying the 1987 crash, starting October 2nd, 1987]

The closing low would occur on Monday, October 19th, 2015–less than two months from now–at an S&P level of 1435, a 32% correction in full.  Fortunately, investors that hold their positions through the plunge would get their money back in short order, in less than two years.

The next chart shows the hypothetical price trajectory of the S&P 500 over the next 2 years if the current market ends up performing an exact repeat of the 1974 crash, with Wednesday, March 13th, 1974 as the starting point:

[Chart: hypothetical 2-year S&P 500 price trajectory replaying the 1974 crash, starting March 13th, 1974]

The closing low would occur on Tuesday, March 8th, 2016, at an S&P level of 1311, completing a 37% correction in full.  Again, investors that hold their positions through the plunge would get all of their money back within two years.  But that’s only true in nominal terms.  To get their money back in real, inflation-adjusted terms, with reinvested dividends included, they would have to wait until March of 2022.

The next chart shows the hypothetical price trajectory of the S&P 500 over the next 3 years if the current market ends up performing an exact repeat of the 1937 crash, with Wednesday, August 25th, 1937 as the starting point:

[Chart: hypothetical 3-year S&P 500 price trajectory replaying the 1937 crash, starting August 25th, 1937]

The closing low would occur on Tuesday, March 22nd, 2016, at an S&P level of 1144, for a 46% correction in full.  In nominal terms, with reinvested dividends included, investors would have to wait until March of 2021 to get their money back.  In real terms, they wouldn’t get their money back until January of 2023.

Finally, the big one.  This last chart shows the hypothetical price trajectory of the S&P 500 over the next 3 years if the current market ends up performing an exact repeat of the 1929 crash, with Thursday, October 17th, 1929 as the starting point:

[Chart: hypothetical 3-year S&P 500 price trajectory replaying the 1929 crash, starting October 17th, 1929]

The closing low would occur on Tuesday, May 8th, 2018, at an S&P level of … brace for it … 253, an 88% correction in full.  On a nominal total return basis, investors would not get their money back until October of 2030.  Interestingly, given the severe deflation of the period, that date would come much sooner in real terms–October of 2022.

Personally, I don’t expect a correction commensurate with any of these scenarios to play out. Even a 20% correction would surprise me.  But history teaches us that large downward price moves are, and always have been, real possibilities in a market, even when everyone has a story for why they are unlikely.


Profit Margins in a “Winner Take All” Economy

The following chart shows the aggregate net profit margin of the S&P 500 using earnings data updated through the 1st quarter of 2015 (75% complete):

[Chart: S&P 500 aggregate net profit margin, through Q1 2015]

With yet another quarter now on the books in which profit margins have remained steady at record highs, it’s becoming increasingly difficult for open-minded investors to reject the possibility that “this time is different”–i.e., the possibility that the observed profit margin increase relative to past averages is secular in nature, and that the mean reversion that many have been expecting simply isn’t going to happen.

If the profit margin increase is secular, what is driving it?  Analysts who write on the topic tend to cite two factors associated with the cost structure of the corporate sector: (1) weak labor bargaining power leading to reduced labor costs, and (2) low interest rates leading to reduced interest expense.  Because these factors are likely to remain in place going forward, analysts have argued that profit margins will remain elevated.

On the labor front, labor bargaining power has weakened substantially amid globalization, automation, and the demise of unions.  Labor costs–which consist primarily of wage and pension expenses–have fallen as a percentage of final sales from a previous long-term range of 62% to 64% down to a new low of roughly 56%, which translates into a profit margin boost of roughly 7%.  From BEA NIPA Table 1.14:

[Chart: labor costs as a percentage of final sales, BEA NIPA Table 1.14]

On the interest front, the picture is quite different.  Interest rates have fallen by more than 10% over the last 30 years, but interest expense hasn’t shown a proportionate drop.  As a percentage of final sales, the reduction in interest expense has amounted to a meager 2.5%.  Interestingly, current corporate interest expense is almost twice as high as it was in the 1960s, despite the fact that long-term corporate bond yields are lower today than they were then.  Again, from NIPA Table 1.14:

[Chart: corporate interest expense as a percentage of final sales, NIPA Table 1.14]

The reason that interest expense hasn’t fallen on par with the fall in interest rates is that corporate debt levels have grown substantially alongside the fall.  Recall that total interest expense depends not only on the interest rate paid, but also on the quantity of debt that the interest must be paid on.

The following complicated chart demonstrates the point graphically.  The blue line is the interest expense of nonfinancial corporations as a percentage of final sales.  The orange line is an approximation of that expense using nonfinancial corporate debt outstanding and Moody’s BAA yield as functional inputs.

[Chart: nonfinancial corporate interest expense as a percentage of final sales (blue), alongside an approximation built from debt outstanding and the Moody’s BAA yield (orange)]
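As a rough sketch of how such an approximation can be built (presumably something close to debt outstanding times the prevailing yield, expressed as a share of final sales), here's the calculation on hypothetical placeholder series, not the actual NIPA or Moody's data:

```python
# Minimal sketch of the approximation described above: interest expense is driven by
# both the interest rate and the stock of debt it is paid on.  The series below are
# hypothetical placeholders, not the actual NIPA or Moody's data.

years            = [1985, 1995, 2005, 2015]
debt_outstanding = [2.0,  4.0,  7.0, 10.0]   # nonfinancial corporate debt, $ trillions (illustrative)
baa_yield        = [0.12, 0.08, 0.06, 0.05]  # Moody's BAA corporate bond yield (illustrative)
final_sales      = [4.0,  7.0, 11.0, 15.0]   # final sales, $ trillions (illustrative)

for y, debt, rate, sales in zip(years, debt_outstanding, baa_yield, final_sales):
    approx_expense = debt * rate / sales     # approximate interest expense as % of final sales
    print(f"{y}: approximate interest expense = {approx_expense:.1%} of final sales")

# The yield falls by more than half between 1985 and 2015, but because the debt stock
# grows five-fold, the approximate expense falls considerably less than proportionately.
```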

In terms of the explanations themselves, reductions in various components of the corporate cost structure may have helped to catalyze the profit margin increase, but they do not explain how corporations have been able to hold on to the elevated profit margins in the face of competition.

On classical capitalist economic theory, reductions in corporate costs should not lead to sustained increases in profit margins.  The reason is simple.  With all else equal, increased profit margins imply increased returns on (newly) invested capital (ROIC).  Increased ROICs tend to provoke increased investment inflows into the sector or industry where the ROICs have increased.  These inflows add capacity and therefore intensify competition. They also put upward pressure on corporate costs–wages and interest rates.  The combined result is downward pressure on ROICs and therefore downward pressure on profit margins.  To quote the inventor of capitalism himself:

“The increase of stock, which raises wages, tends to lower profit. When the stocks of many rich merchants are turned into the same trade, their mutual competition naturally tends to lower its profit; and when there is a like increase of stock in all the different trades carried on in the same society, the same competition must produce the same effect in them all.” — Adam Smith, The Wealth of Nations, 1776, I.IX.2

It’s clear, then, that explanations that point to lower labor costs and lower interest expense cannot be the whole story.  They fail to explain how the mechanism of mean-reversion in profitability, a fundamental tenet of the way capitalist economies operate, could be circumvented for so long.

Ultimately, on classical economic theory, there are only two ways for profit margins to experience sustained increases over the long-term:

First, corporate agents can become more risk-averse.  If they become more risk-averse, then higher ROICs will be necessary to entice them to invest–that is, the reward to investment will have to increase to get them to come off the sidelines and take risk.  Investment is the basis for competition, and therefore if higher ROICs are necessary to entice corporate agents to invest, then higher ROICs will be necessary to entice them to compete with each other.  The result will be higher ROICs at the eventual competitive equilibrium.

The data, however, do not support the claim that risk-aversion has increased.  Corporate investment as a percentage of GDP, for example, is higher now than it was at the prior cycle high, and at roughly the same level as the highs of the cycles of the 1960s and early 1970s.

[Chart: corporate investment as a percentage of GDP]

A second mechanism through which ROICs can sustainably increase is through increases in barriers to entry.  Barriers to entry keep competition out.  As they get stronger, the players protected by them are able to successfully operate at increased levels of profitability, free from the threat of competition.

This brings us to the subject of the piece, the “Winner Take All” economy.  The point is difficult to quantify or conclusively prove, but it seems that the dramatic technological changes of the last 20 years have made credible competition in certain key sectors of our economy more difficult, and have allowed dominant best-in-breed companies–the $AAPLs, $GOOGs, $MSFTs, $FBs, and so on of the world–to command sustainably higher profit margins.

The current U.S. economy seems to have more genuine monopolies than the economies of old–more companies that face little to no competition.  The increase in monopoly businesses and monopoly products seems to be due, at least in part, to the massive distributional, network-creative and network-protective power of the internet, and also to the shift towards the production of non-physical things.  A first-mover with a strong intangible product can distribute that product to the entire world at little cost, protect it as intellectual property, and build a profitable user network around it that other corporations will have an increasingly difficult time competing with.

Think for a moment: how would one go about competing with the likes of an $AAPL, $GOOG, $MSFT or $FB–the iPhone, Google search, Windows/Office, or Facebook? Would it even be possible?  These companies have tried to enter into each other’s domains in the past, but they’ve never succeeded–in fact, they’ve never even come close.  Every competitive effort has turned out to be a hopeless waste of time and money–a Microsoft phone, a Facebook search, a Google Plus, and so on.  It’s no wonder, then, that these companies have been able to enjoy elevated profit margins–in excess of 20% on a net basis–that would have been unheard of 50 years ago.  The effect seems to extend, albeit to a lesser degree, to dominant non-technological companies that have been able to leverage modern technology to efficiently expand their customer bases, the pervasiveness and relevance of their brands, and the dominance of their market positions.

To be fair, there are a number of other, non-technological explanations that one can point to in accounting for the increase in barriers to entry.  Our economy, for example, has become increasingly complex from a regulatory perspective, and complex regulation tends to make new entry more difficult.  Additionally, over the last few decades, the corporate sector has shown an increased preference for deploying excess capital into mergers and acquisitions rather than new investment, which naturally tends to reduce competition.  The point, however, is that technology, with its creation of massive, ubiquitous companies that are all but impossible to compete with, appears to be the single biggest driver of the profit margin increase in aggregate.

The possibility that increased barriers to entry represent the primary causal factor behind the observed profit margin increase is supported by the fact that the increase has not been broad-based, as would be expected if it were a simple consequence of weak labor bargaining power, low interest rates, or some other generic factor associated with the corporate cost structure.  Rather, it is concentrated in specific sectors–especially the technology sector–and in specifically dominant individual large cap, blue-chip names.

Fortunately, we don’t have to accept the explanation on faith.  We can test it empirically, in actual data.  If a “Winner Take All” economy, with its associated barriers to entry–first-mover barriers, network barriers, patent barriers, size barriers, regulatory barriers, and so on–has allowed an increasingly concentrated group of dominant companies to earn substantially higher profit margins, then we should expect the following.  If we separate companies in the market into different tiers based on their profit margins, we should expect the higher tiers to have seen larger increases in their profit margins in recent decades than the lower tiers.  Increases in the profit margins of the higher tiers–at the expense of the lower tiers–would represent the “Winner” gradually “Taking All.”

I recently asked my favorite blogger, the brilliant Patrick O’Shaughnessy of O’Shaughnessy Asset Management (Twitter: @millennial_inv, Blog: http://www.investorfieldguide.com), to put the hypothesis to the test by separating companies in the market into different bins by profit margin, and then charting the aggregate profit margins of each bin.  If our explanation is correct, then the aggregate profit margins of the higher bins should have increased more over the last few decades than the aggregate profit margins of the lower bins.  Lo and behold, that’s exactly what the data shows.  The profit margin increase of the last 20 years has not been broad-based, shared by all bins, but has instead been concentrated in the highest profit-margin bins.  The companies in those bins have seen their profit margins explode, while the companies in the lower-tier bins have seen little if any increase–and for some bins, an outright reduction.
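For readers who want to run the same kind of test on their own data, here's a minimal sketch of the binning exercise.  It is not Patrick's actual methodology, and the DataFrame it expects is a hypothetical stand-in for real company-level data:

```python
# Minimal sketch of the binning test described above: rank companies into profit-margin
# quintiles within each year, then track each bin's aggregate margin over time.
# 'df' is assumed to have columns: year, company, net_income, sales -- a hypothetical
# stand-in for real company-level data, not the actual dataset behind the charts.
import pandas as pd

def margin_by_bin(df: pd.DataFrame, n_bins: int = 5) -> pd.DataFrame:
    df = df.copy()
    df["margin"] = df["net_income"] / df["sales"]
    # Assign each company to a margin quintile within its year (bin 4 = highest margins)
    df["bin"] = df.groupby("year")["margin"].transform(
        lambda m: pd.qcut(m, n_bins, labels=False, duplicates="drop")
    )
    # Aggregate margin of each bin = total net income / total sales of its members
    totals = df.groupby(["year", "bin"]).agg(
        net_income=("net_income", "sum"), sales=("sales", "sum")
    )
    return (totals["net_income"] / totals["sales"]).unstack("bin")

# If the "Winner Take All" story is right, the top bin's column should rise sharply from
# the early 1990s onward while the lower bins stay flat or fall.
```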

The following chart, taken from Patrick’s recent blog post, “The Rich Get Richer”, beautifully illustrates the point:

[Chart: aggregate profit margins by profit-margin bin, from Patrick O’Shaughnessy’s “The Rich Get Richer”]

Notice that the profit margins of the various bins moved roughly commensurately up until the late 1980s and early 1990s.  At that point, something happened.  The profit margins of the top bin proceeded to explode, rising by over 1000 basis points (bps).  The profit margins of the next two highest bins stayed roughly flat. And the profit margins of the two lowest bins actually fell–even as the labor and interest costs of companies in those bins were supposedly reduced.  Overall, the dispersion of profit margins increased dramatically–which is the hallmark sign of a “Winner Take All” economy.

The effect was particularly pronounced in the two sectors that we wrote about previously–technology and finance–which together make up more than 40% of S&P 500 earnings.

[Chart: profit margins by bin for the technology and finance sectors]

As the chart shows, the profit margins of the top bins in these sectors have increased by over 1000 bps.  The profit margins of the lowest two bins, in contrast, have either gone nowhere  or outright contracted.

For reference, here are individual graphs for all sectors:

[Charts: profit margins by bin for each individual sector]

As our account would predict, companies that are farther away from the “new economy”–for example, companies in the energy and material sectors that sell basic commodities and that are price takers, or companies in the utility sector whose profit margins are determined by the government–do not show the effect.  For the most part, the high profit margin bins in those sectors haven’t seen any more of a profit margin increase than the other bins.  The effect is instead concentrated in sectors where the dominant players have pricing power–technology, finance, consumer staples, consumer discretionary, health care, and to a lesser extent, industrials.

It remains an unresolved question as to whether and for how long the trend towards a “Winner Take All” economy will persist.  But open-minded investors should admit that it could persist for a very long time, if not forever–and that it could even extend further, with profit margins rising further.  At any rate, given that profit margins have stayed firmly elevated for such a long time without any signs of sustainably falling, and given that we now have a compelling explanation for why they would be expected to stay elevated over the long-term, bearishly-inclined investors should seriously consider the possibility that the “mean-reversion” that they’ve been patiently waiting for isn’t going to happen, at least not to the extent expected.


Capital Recycling at Elevated Valuations: A Historical Simulation

Those who expect U.S. equities to deliver poor returns going forward can cite two compelling reasons in defense of their expectation:

(1) Equity prices are significantly elevated relative to underlying earnings fundamentals.  The S&P 500’s trailing price-to-earnings ratio, for example, is 20.5 on a GAAP basis and 18.8 on an operating basis, more than a full standard deviation above the historical average of ~14 for GAAP, and ~13 for operating.

(2) Earnings, which make up the E in the P/E  ratio, are artificially high, having been pushed up by elevated levels of corporate profitability, which are anywhere from 30% to 70% above their historical averages, depending on the choice of measurement.

Removing the effects of changes in valuation, the average historical real total return for U.S. equities has been roughly 6% per year.  If U.S. equity P/E ratios and profitability levels were to fall back to their historical averages, this 6% return would get dragged down to roughly zero.  Over a 10 year horizon, the P/E ratio compression would subtract roughly 3.6% per year, since (13/18.8)^(1/10) - 1 = -3.6%.  The profitability compression, on the generous assumption that current profitability is only 30% above its natural level, would subtract another 2.6% per year, since (1/1.3)^(1/10) - 1 = -2.6%.
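For anyone who wants to verify those figures, here's a quick sketch of the arithmetic, annualizing each compression over a 10-year horizon and combining the drags additively, as in the text:

```python
# Quick check of the annualized drags cited above (combined additively, as in the text).
baseline_real_return = 0.06

pe_drag     = 1 - (13 / 18.8) ** (1 / 10)  # P/E compressing from 18.8 to ~13 over 10 years
profit_drag = 1 - (1 / 1.3) ** (1 / 10)    # profitability falling 30% back to its norm

print(f"P/E compression drag:  {pe_drag:.1%} per year")      # ~3.6%
print(f"profitability drag:    {profit_drag:.1%} per year")  # ~2.6%
print(f"remaining real return: {baseline_real_return - pe_drag - profit_drag:.1%}")  # roughly zero
```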

The problem, of course, is that it’s possible that “this time is different”–with respect to both P/E ratios and profitability levels.  It’s true that P/E ratios are substantially elevated relative to the long-term historical average, but that average might not represent the natural level for the current environment, which is characterized by:

  • Aggressive policymaker advocacy and monetary support for equity markets, rendered possible by an environment of persistently weak inflation.  This advocacy and support increases investor confidence and creates an environment in which there is no alternative (T.I.N.A) to equities for anyone that wants to earn a return (which is almost everyone).
  • Greater cultural affinity for equity investing, brought about, in part, by the historical lesson, now learned by virtually all, that equities are the best place to invest money for the long-term. Equities just don’t get cheap like they used to.  The market in general is too efficient, too adapted, too familiar with its own history to allow that to happen.

These arguments are sure to make the hairs on the backs of certain people’s necks stand up–“this time is different” is a very dangerous claim.  But clichés aside, the underlying claim might be true.  The fact that valuations have managed to stay historically elevated for so long–well over 20 years now–without showing any sign of retreating, increases the probability that the claim actually is true.

On the profitability front, the U.S. economy has evolved dramatically over the course of history.  It’s quite possible that in this evolution, barriers to competitive entry have emerged that didn’t previously exist–first-mover barriers, network ownership barriers, regulatory barriers, patent barriers, and so on.  In trying to understand how dominant, best-in-breed companies–the $MSFT’s, $GOOG’s, $FB’s, and $AAPL’s of the world–have been able to capture and hold on to absurdly high levels of profitability, in contravention of normal competitive forces, these barriers would seem to be an obvious culprit.

The fact that profitability levels have stayed high for over 20 years now, showing little inclination to sustainably retreat, gives support to this view.  Additional support is provided by the fact that the elevated profitability levels seem to be concentrated in very specific industries and capitalization categories–technology, finance, and large cap multinationals–rather than evenly distributed across the overall market.  That concentration points to a specific, durable causal explanation for their emergence.

Now, none of this is offered to suggest that valuations and profit margins won’t retreat going forward.  My own view is that they will retreat, and are already in the process of doing so. I just don’t think that they are likely to retreat all the way back to past averages. It seems to me that to expect such an outcome, one has to completely ignore the relevant differences that exist between the modern era and prior eras.

One thing we can be reasonably sure of, however, is that if U.S. equity valuations stay where they are for the long-term, then returns will suffer by a different mechanism.  That mechanism is what I’m now going to try to quantify.

Recall that the Total Return EPS index is an index that tells us what EPS would have been if all historical dividends that were actually paid out to shareholders had instead been diverted into share buybacks, where they would have stayed inside the equity.  Crucially, in constructing the Total Return EPS index, we conduct the share buybacks at hypothetical prices corresponding to the same valuation across history, rather than at the prices that were actually quoted in the market, which encompassed significantly different valuations at different points in time.

Because share buybacks are functionally identical to reinvested dividends in terms of their effects on total return, and because the Total Return EPS index assumes that all share buybacks are conducted at the same valuation, the index effectively tells us what the Total Return to investors would have been if the effect of changes in valuation had been completely removed.

To illustrate, suppose that the U.S. equity market had always traded at 19 times earnings.  If we had bought the market in January 1966, at 19 times earnings, reinvested all of our dividends at 19 times earnings, and then sold out 10 years later, in January 1976, at 19 times earnings, what would our return have been?  To get to the answer, we simply build a Total Return EPS index on the assumption that all share buybacks are conducted at 19 times earnings.  We then calculate the annualized rate of growth of that index from January 1966 to January 1976.  That rate of growth will be the hypothetical total return under the stipulated conditions of constant valuation.
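Sketched in code (with made-up per-share earnings and dividend figures standing in for the actual S&P 500 data, and leaning on the equivalence between buybacks and reinvested dividends noted above), the constant-valuation calculation looks like this:

```python
# Minimal sketch: the total return an investor would earn if the market always traded at
# one fixed P/E, with every dividend reinvested at that same valuation.  Because buybacks
# and reinvested dividends are functionally equivalent, reinvestment is used here as a
# stand-in for the buyback construction.  The earnings and dividend series are made up.

CONSTANT_PE = 19.0
earnings  = [5.00, 5.10, 5.25, 5.40, 5.60]  # per-share earnings by year (illustrative)
dividends = [2.50, 2.55, 2.60, 2.70, 2.80]  # per-share dividends by year (illustrative)

prices = [CONSTANT_PE * eps for eps in earnings]  # price each year at the constant valuation

wealth = 1.0
for t in range(1, len(prices)):
    # price return plus the dividend, reinvested at the constant-valuation price
    wealth *= (prices[t] + dividends[t]) / prices[t - 1]

annualized = wealth ** (1 / (len(prices) - 1)) - 1
print(f"hypothetical annualized total return at a constant P/E of 19: {annualized:.2%}")
```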

The market’s P/E ratio is substantially elevated on a historical basis.  In order for this elevation to not impose an eventual drag on returns, the elevation will have to persist indefinitely into the future.  But if it persists, then the dividends that corporations pay out to shareholders will be reinvested at historically expensive prices.  Because those dividends will be expensively reinvested, they will accrete at historically depressed rates of return, producing historically depressed total returns.  There is essentially no way to escape this conclusion.

For perspective, if, from 1871 to 2015, the U.S. stock market had always traded at its average historical valuation, shareholders would have earned an average rolling 10 year real total return of approximately 6.3% per year.  Of that return, 1.65% would have come from organic EPS growth, and the other 4.65% would have come from dividends reinvested into the market. But if those dividends had not been reinvested, and had instead been kept idle in a brokerage account collecting interest at the short-term rate, their return contribution would have fallen from 4.65% to well less than 1%, for a total return below 3%, less than half of the actual.  Right off the bat, then, we can appreciate the fact that the reinvestment of dividends, and the implied rate of return at which the reinvestment is conducted, matters to total return–big time.

To quantify the impact that varying dividend reinvestment valuations have on total return, I’m now going to run a series of simulations.  I’m going to build 26 different Total Return EPS indices for the period January 1871 to March 2015, with the share buybacks in each index conducted at Shiller CAPE levels ranging from 5 to 30 in unit increments.  For each Total Return EPS index, I’m going to calculate the average rolling 10 year real growth rate of the index across its history, which, you will recall, is the average rolling 10 year real total return that investors would have earned if valuations had stayed constant at the specified Shiller CAPE level.  I’m then going to chart the total returns as a function of the different valuations. The ensuing charts will give us a clear picture of how much total return we should expect to lose if valuations stay elevated at present levels.
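Here's a minimal sketch of that simulation loop in the same spirit, run at annual frequency on a made-up earnings and dividend history rather than the actual 1871 to 2015 data, so the outputs are illustrative only:

```python
# Minimal sketch of the simulation: one constant-valuation total return series per integer
# Shiller CAPE level from 5 to 30, then the average rolling 10-year annualized growth of
# each.  Annual frequency and the fabricated inputs are simplifications of my own.
import numpy as np

def avg_rolling_10yr_return(cape: float, real_eps: np.ndarray, real_divs: np.ndarray) -> float:
    # Price each year = the stipulated CAPE times trailing 10-year average real earnings
    smoothed = np.array([real_eps[max(0, t - 9):t + 1].mean() for t in range(len(real_eps))])
    prices = cape * smoothed
    # Total return index: price change plus dividends reinvested at those prices
    tr = np.cumprod(np.r_[1.0, (prices[1:] + real_divs[1:]) / prices[:-1]])
    rolling = (tr[10:] / tr[:-10]) ** (1 / 10) - 1  # rolling 10-year annualized returns
    return rolling.mean()

rng = np.random.default_rng(0)
eps = 5.0 * 1.02 ** np.arange(145) * (1 + 0.05 * rng.standard_normal(145))  # fake real EPS
divs = 0.5 * eps                                                            # fake real dividends

for cape in range(5, 31):
    print(f"CAPE {cape:2d}: average rolling 10-year return = {avg_rolling_10yr_return(cape, eps, divs):.2%}")
```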

Before I can do that, I need a non-controversial formulation of the Shiller CAPE.  To that end, I’m going to use the Total Return EPS CAPE (the Shiller CAPE adjusted for changes in dividend payout ratios).  I’m going to employ two specific versions of that CAPE: one built on GAAP earnings (which include the questionable goodwill writedowns of the post-2002 era, writedowns that resulted from the application of accounting standards that were not applied to prior eras and that therefore make for distorted historical comparisons), and one built with operating earnings substituted in after 1998 (which exclude the goodwill writedowns, but which might also exclude other types of justified accounting losses that would make a historical comparison to GAAP earnings unfair).

The two versions are shown below alongside the original Shiller CAPE.

[Chart: the Total Return EPS CAPE built on GAAP earnings, the Total Return EPS CAPE built on operating earnings after 1998, and the original Shiller CAPE]

As you can see, the Total Return EPS CAPE built on GAAP earnings (yellow, current value: 25.96) is not much different from the original Shiller CAPE (red, current value 27.52), which suggests that historical changes in dividend payout ratios haven’t appreciably affected the accuracy of the original.  However, the post-2002 writedowns make a big difference.  When removed, they bring the Total Return EPS CAPE (blue) down to a current value of 21.93.  Note that all of these CAPEs have been normalized so that their averages are equal to the average of the Original Shiller CAPE–14.19 on a harmonic basis.

The Total Return EPS CAPE built on GAAP earnings can serve as a reasonable upper bound for valuation.  It’s unlikely that the market is more expensive than indicated by that measure.  Similarly, the Total Return EPS CAPE built on operating earnings after 1998 can serve as a reasonable lower bound for valuation.  It’s unlikely that the market is cheaper than indicated by that measure, especially considering that the 10 year average earnings off of which the Total Return EPS CAPE is built incorporate the substantially elevated levels of corporate profitability associated with the 2005 to 2015 period.

Now, to the charts.  The following chart shows the hypothetical historical rolling average 10 year real total return that shareholders would have earned from January 1871 to March 2015 if valuations had stayed constant at each integer Shiller CAPE level from 5 through 30, with all dividends reinvested at those respective valuations.

[Chart: average rolling 10-year real total return, 1871 to 2015, at constant Shiller CAPE levels from 5 to 30]

Let’s start with the left portion of the chart.  If the Shiller CAPE had been equal to 5 throughout the entirety of U.S. market history, the average rolling 10 year real total return would have been north of 15%, more than double the actually realized value of roughly 6.3%.  Moving to the right, if the Shiller CAPE had been equal to 30 throughout the entirety of U.S. market history, the average rolling 10 year real total return would have been under 4%.  That’s with all other fundamentals held constant, the only differentiation coming from the different valuations at which the dividends are reinvested.  It goes without saying that the difference–more than 1100 real bps of return per year–matters.

To home in on the current situation, if the Total Return EPS CAPE formed on operating earnings (the lower bound) is the more accurate measure of valuation, and if, going forward, the market remains unperturbed at the current value of 21.93, then we should expect a future total return of 4.72%.  If the Total Return EPS CAPE formed on GAAP earnings (the upper bound) is the more accurate measure of valuation, and if, going forward, the market remains unperturbed at the current value of 25.96, then we should expect a future total return of 4.27%.

So there’s your range: 4.27% to 4.72%.  If valuation bulls win both the valuation argument and the profit margin argument, then that is the approximate return they should expect.  If they lose the valuation argument–if valuations fall back to the historical norm–then the drag described here will not apply.  But a new drag will be inserted: the drag of falling valuation, which pulls down returns by pulling down prices.

The following chart shows the same information as the previous chart, but with the returns expressed as deviations from the historical average rolling 10 year total return of 6.32%.

[Chart: the same returns expressed as deviations from the historical average rolling 10-year total return of 6.32%]

As you can see, a permanent CAPE of 5 would have added almost 9% to the actual average historical return.  A permanent CAPE of 30 would have subtracted almost 2.5% from the actual average historical return.  A permanent CAPE equal to the present value, somewhere between 21.93 and 25.96, would have subtracted anywhere from 1.60% to 2.05%.  That range represents a quarter to a third of the actual historical return.  So if market valuations stay where they are, on this “permanently elevated plateau”, investors should prepare to have anywhere from a quarter to a third of their future returns lopped off.

Now, valuation bulls might try to console themselves here by pointing out that reinvested dividends don’t matter as much to returns as they did in the past, since dividend payout ratios are substantially lower than they used to be.  Wrong.  The same dividends are still being paid out; they’re just being paid out in a different form: in the form of share buybacks and corporate M&A.  The buybacks and M&A activities are being conducted at the same elevated valuations that the dividends would have been reinvested at.  So the result is the same.

In fact, if anything, the depressing future impact of permanently elevated valuations is likely to be more severe in the current market than it would have been in past markets.  In past markets, corporations grew EPS by investing in organic business growth, which isn’t affected by the market’s valuation.  The current era, however, is characterized by a reduced emphasis on organic business growth, and an increased emphasis on EPS growth through share count reduction–what is pejoratively termed “capital recycling.”

Currently, almost 100% of S&P 500 EPS is being devoted to capital recycling–some combination of dividends, buybacks, and buyouts.  The growth that this recycling will produce will be entirely determined by the market’s valuation–nothing else will affect it.  And so if these simulations in historical data are telling us that we should mark down our future return expectations by 25% to 33% of the historical norm, then we should probably mark them down by an even greater amount, because the underlying allocation practice through which they will be driven down–capital recycling that occurs in lieu of organic business growth–is significantly more prevalent now than it was in prior eras.


A New-and-Improved Shiller CAPE: Solving the Dividend Payout Ratio Problem

A common criticism of Professor Robert Shiller’s famous CAPE measure of stock market valuation is that it fails to correct for the effects of secular changes in the dividend payout ratio.  Dividend payout ratios for U.S. companies are lower now than they used to be, with a greater share of U.S. corporate profit going to reinvestment.  For this reason, earnings per share (EPS) tends to grow faster than it did in prior eras.  But faster EPS growth pushes up the value of the Shiller CAPE, all else equal.  Distortions therefore emerge in the comparison between present values of the measure and past values.

To give credit where it’s due, the first people to point out this effect–at least as far as I know–were Professor Jeremy Siegel of Wharton Business School and his former student, David Bianco of Deutsche Bank.  Siegel, in specific, wrote about the problem as far back as late 2008, during the depths of the financial crisis, when the Shiller CAPE was steering investors away from a market that he considered to be extremely cheap (see “Jeremy Siegel on Why Equities are Dirt Cheap”, November 18, 2008, link here).

In a piece from 2013, I attempted to demonstrate the effect with two tables, shown below:

[Tables: 10-year earnings trajectories and Shiller CAPEs of two identical companies, one with a 75% dividend payout ratio and one with a 25% dividend payout ratio]

The tables portray the 10 year earnings trajectories and Shiller CAPE ratios of two identical companies that generate identical profits and that sell at identical trailing-twelve-month (ttm) P/E valuations. The first company, shown in the first table, pays out 75% of its profit in dividends and reinvests the other 25% into growth (in this case, share buybacks that grow the EPS by shrinking the S). The second company, shown in the second table, pays out 25% of its profit in dividends, and reinvests the other 75% into growth.

As you can see, even though these companies are identically valued in all relevant respects, they end up with significantly different Shiller CAPEs.  The reason for the difference is that the second company reinvests a greater share of its earnings into growth than the first company.  Its earnings therefore grow faster.  Because its earnings grow faster, the act of “averaging” them over a trailing 10 year period reduces them by a greater relative amount.  Measured against that trailing 10 year average, the company’s price, appropriately set in reference to its ttm earnings, therefore ends up looking more expensive.  But, in truth, it’s not more expensive–its valuation is exactly the same as that of the first company.
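A quick way to see the effect numerically is with a toy version of the tables above, built on assumptions of my own: both companies earn $10 per share to start, always trade at 15 times trailing earnings, and use their retained earnings for buybacks at that same multiple:

```python
# Toy illustration of the payout-ratio distortion: two identically valued companies, one
# paying out 75% of earnings and one paying out 25%, with retained earnings used for
# buybacks at the same 15x trailing P/E.  All numbers are hypothetical.

def shiller_cape(payout_ratio: float, pe: float = 15.0, eps0: float = 10.0, years: int = 10) -> float:
    eps, history = eps0, []
    for _ in range(years):
        history.append(eps)
        buyback_yield = (1 - payout_ratio) * eps / (pe * eps)  # retained earnings / price
        eps /= 1 - buyback_yield                               # EPS rises as the share count shrinks
    price = pe * history[-1]                      # both companies trade at 15x trailing earnings
    return price / (sum(history) / len(history))  # price over the 10-year average of EPS

print(f"75% payout company CAPE: {shiller_cape(0.75):.1f}")  # ~16
print(f"25% payout company CAPE: {shiller_cape(0.25):.1f}")  # ~19
# Same trailing P/E, same underlying business -- but the low-payout company shows a higher CAPE.
```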

The following chart illustrates the effect:

[Chart: Shiller CAPE trajectories of the high-payout and low-payout companies]

To summarize the relationship:

  • Lower Payout Ratio –> Higher Earnings Growth –> Higher CAPE, all else equal
  • Higher Payout Ratio –> Lower Earnings Growth –> Lower CAPE, all else equal

Now, how can we fix this problem?  A natural solution would be to reconstruct the CAPE on the basis of total return (which factors in dividends) rather than price (which does not). But that’s easier said than done.  How exactly does one build a CAPE ratio–or any P/E ratio–on the basis of total return?

Enter the Total Return EPS Index, explained here and here.  The Total Return EPS Index is a modified version of a normal EPS index that tells us, hypothetically, what EPS would have been, now and at all times in history, if the dividends that were paid out to shareholders had not been paid out, and had instead been diverted into share buybacks. Put differently, Total Return EPS tells us what earnings would have been if the dividend payout ratio had been 0% at all times.  In this way, it reduces all earnings data across all periods of history to the same common basis, allowing for accurate comparisons between any two points in time.

Crucially, in constructing the Total Return EPS, we assume that the buybacks are conducted at fair value prices, prices that correspond to the same valuation in all periods (equal to the historical average), rather than at market prices, which are erratic and often groundless.  To those readers who continue to e-mail in, expressing frustration with this assumption–don’t worry, you’re about to see why it’s important.

The following chart shows the Total Return EPS alongside the Regular EPS from 1871 to 2015.  In this chart and in all charts presented hereafter, the index is the S&P 500 (and its pre-1957 ancestry), the values are appropriately inflation-adjusted to February 2015 dollars, and no corrections are made for the effects of questionable accounting writedowns associated with the last two economic downturns:

[Chart: Total Return EPS alongside Regular EPS, S&P 500, 1871 to 2015]

Now, if all S&P 500 dividends had been diverted into share buybacks, then the price of the index would have increased accordingly. We therefore need a Total Return Price index–an index that shows what prices would have been on the “dividends become buybacks” assumption.

Calculating a Total Return Price index is straightforward.  We simply assume that the market would have applied the same P/E ratio to the Total Return EPS that it applied to the Regular EPS (and why would it have applied a different P/E ratio?). Multiplying each monthly Total Return EPS number by the market’s ttm P/E multiple in that month, we get the Total Return Price index.

In the chart below, we show the Total Return Price index for the S&P 500 alongside the Regular Price, from 1871 to 2015:

[Chart: Total Return Price alongside Regular Price, S&P 500, 1871 to 2015]

Generating a CAPE from these measures is similarly straightforward.  We divide the Total Return Price by the trailing 10 year average of the Total Return EPS.  The result: The Total Return EPS CAPE.
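In code, the construction amounts to a couple of lines of arithmetic on top of the Total Return EPS series.  The monthly series below are made-up stand-ins for the actual index data, so the output is illustrative only:

```python
# Minimal sketch of the Total Return EPS CAPE construction described above.
import numpy as np

months = 360
tr_eps = 50 * 1.005 ** np.arange(months)          # hypothetical Total Return EPS, monthly
ttm_pe = 15 + 3 * np.sin(np.arange(months) / 24)  # hypothetical market ttm P/E multiple

tr_price = ttm_pe * tr_eps                        # Total Return Price index
window = 120                                      # trailing 10 years of monthly observations
trailing_avg = np.array([tr_eps[t - window + 1:t + 1].mean() for t in range(window - 1, months)])
tr_eps_cape = tr_price[window - 1:] / trailing_avg

print(f"latest Total Return EPS CAPE (illustrative data): {tr_eps_cape[-1]:.1f}")
```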

Shiller himself proposed a different method for calculating a CAPE based on total return in a June 2014 paper entitled “Changing Times, Changing Valuations: A Historical Analysis of Sectors within the U.S. Stock Market: 1872 to 2013” (h/t James Montier). The instructions for the method are as follows: Use price and dividend information to build a Total Return Index. Then, scale up the earnings by a factor equal to the ratio between the Total Return Index and the Price Index.  Then, divide the Total Return Index by the trailing ten year average of the scaled-up earnings.  In a piece from August of last year, I tried to build a CAPE based on Total Return using yet another method (one that involves growing share counts), and arrived at a result identical to Shiller’s.  The technique and charts associated with that method are presented here.
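And here is my reading of those instructions in the same sketch form, again run on hypothetical monthly price, dividend, and earnings series rather than the actual data:

```python
# Sketch of the total-return CAPE method described in Shiller's 2014 paper, as I read the
# instructions above: (1) build a total return index from prices and dividends, (2) scale
# earnings up by the ratio of the total return index to the price index, (3) divide the
# total return index by the trailing 10-year average of the scaled-up earnings.
import numpy as np

months = 360
price    = 100 * 1.004 ** np.arange(months)  # hypothetical monthly price index
dividend = 0.003 * price                     # hypothetical monthly dividend payments
earnings = price / 16                        # hypothetical earnings (a ttm P/E near 16)

tr_index = price[0] * np.cumprod(np.r_[1.0, (price[1:] + dividend[1:]) / price[:-1]])
scaled_earnings = earnings * tr_index / price
window = 120
trailing_avg = np.array([scaled_earnings[t - window + 1:t + 1].mean()
                         for t in range(window - 1, months)])
total_return_cape = tr_index[window - 1:] / trailing_avg

print(f"latest total return CAPE (illustrative data): {total_return_cape[-1]:.1f}")
```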

It turns out that both of these methods produce results identical to the Total Return EPS CAPE method, with one small adjustment: that we conduct the buybacks that form the Total Return EPS at market prices, rather than at fair value prices as initially stipulated. The following chart shows the three types of Total Return CAPEs together.  As you can see, the lines overlap perfectly.

[Chart: the three Total Return CAPE variants, overlapping]

The three different versions of the CAPE overlap because they are ultimately doing the same thing mathematically, though in different ways.  Given that they are identical to each other, I’m going to focus only on the Total Return EPS version from here forward.  I’m going to refer to the version that conducts buybacks at fair value prices as “Total Return EPS (Fair Value) CAPE”, and the version that conducts buybacks at market prices as “Total Return EPS (Market) CAPE.”  I’m going to refer to Shiller’s original CAPE simply as “Shiller CAPE.”

The following chart shows the Total Return EPS (Market) CAPE alongside the Shiller CAPE, with the values of the former normalized so that the two CAPEs have the same historical average (allowing for a direct comparison between the numbers).

[Chart: Total Return EPS (Market) CAPE vs. Shiller CAPE, normalized to the same historical average]

(Note: in prior pieces, I had been comparing P/E ratios to their geometric means. This is suboptimal. The optimal mean for a P/E ratio time series is the harmonic mean, which is essentially what you get when you take an average of the earnings yields–the P/E ratios inverted–and then invert that average.  So, from here forward, in the context of P/E ratios, I will be using harmonic means only.) (h/t and #FF to @econompic, @naufalsanaullah, @GestaltU_BPG)
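In code terms, the harmonic mean is a one-liner (a trivial sketch, assuming a pandas Series pe of P/E observations):

```python
import pandas as pd

def harmonic_mean_pe(pe):
    """Average the earnings yields (1/PE), then invert the average."""
    return 1.0 / (1.0 / pe).mean()

harmonic_mean_pe(pd.Series([10.0, 20.0, 40.0]))   # ~17.1, vs. an arithmetic mean of ~23.3
```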

The current value of the Shiller CAPE is 27.5, which is 93% above its historical average (harmonic) of 14.2.  The current value of the Total Return EPS (Market) CAPE is 30.3, which is 71% above its historical average (harmonic) of 17.8.  Normalized to matching historical averages, the current value of the Total Return EPS (Market) CAPE comes out to 24.2.

At current S&P 500 levels, then, we end up with 27.5 for the Shiller CAPE, and 24.2 for the Total Return EPS (Market) CAPE, each relative to a historical average of 14.2. Evidently, the difference between the two types of CAPEs is significant, worth 12%, or 250 current S&P points.
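The normalization behind these figures is nothing more than a rescaling by the ratio of the two historical averages:

```python
shiller_cape_now, shiller_avg = 27.5, 14.2
tr_market_cape_now, tr_avg = 30.3, 17.8

normalized_tr_cape = tr_market_cape_now * (shiller_avg / tr_avg)      # ~24.2
gap = (shiller_cape_now - normalized_tr_cape) / shiller_cape_now      # ~12%
```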

But there’s a mistake in this construction.  To find it, let’s take a closer look at the chart:

[Chart: Total Return EPS (Market) CAPE vs. Shiller CAPE, closer view]

From the early 1990s onward, the Total Return EPS (Market) CAPE (the red line) is significantly below the Shiller CAPE (the blue line), suggesting that the Shiller CAPE is overstating the market’s expensiveness, and that the Total Return EPS (Market) CAPE is correcting the overstatement by pulling the metric back down.

What is driving the Shiller CAPE’s apparent overstatement of the market’s expensiveness? The obvious answer would seem to be the historically low dividend payout ratio in place from the early 1990s onward.  All else equal, low dividend payout ratios push the Shiller CAPE up, via the increased growth effect described earlier.

But look closely.  Whenever the market is expensive for an extended period of time, the subsequent Total Return EPS (Market) CAPE (the red line) ends up lower than the Shiller CAPE (the blue line), by an amount seemingly proportionate to the degree and duration of the expensiveness.  Note that this is true even in periods when the dividend payout ratio was high, e.g., the periods circled in black: the early 1900s, the late 1920s, and the late 1960s.  If the dividend payout ratio were the true explanation for the deviations between the Shiller CAPE and the Total Return EPS (Market) CAPE, then we would not get that result.  We would get the opposite result: the high dividend payout ratio seen during those periods would depress the Shiller CAPE relative to the more accurate total return measures; it would not push the Shiller CAPE up, as seems to be happening.

The converse is also true.  Whenever the market is cheap for an extended period of time, the subsequent Total Return EPS (Market) CAPE (the red line) ends up higher than the subsequent Shiller CAPE (the blue line), by an amount seemingly proportionate to the degree and duration of the cheapness.  We see this, for example, in the periods circled in green: the early 1920s and the early 1930s through the end of the 1940s.  The deviation between the two measures is spatially small in those periods, but that’s only because the numbers themselves are small–single digits.  On a percentage basis, the deviation is sizeable.

The following chart clarifies:

[Chart: deviations between the two CAPEs during the circled cheap and expensive periods]

So what’s actually happening here?  Answer: valuation–not the dividend payout ratio–is driving the deviation.  In periods where the market was cheap in the 10 years preceding the calculation, the Total Return EPS (Market) CAPE comes out above the Shiller CAPE. In periods where the market was expensive in the 10 years preceding the calculation, the Total Return EPS (Market) CAPE comes out below the Shiller CAPE.  The degree above or below ends up being a function of how cheap or expensive the market was, on average.

The following chart conclusively demonstrates this relationship:

[Chart: percentage deviation between the two CAPEs (green) vs. average valuation over the preceding 10 years (pink)]

The bright green line is the difference between the Total Return EPS (Market) CAPE and the Shiller CAPE as a percentage of the Shiller CAPE.  When the bright green line is positive, it means that the red line in the previous chart was higher than the blue line; when negative, vice-versa.  The pink line is a measure of how cheap or expensive the market was over the preceding 10 years, on average and relative to the historical average. When the pink line is positive, it means that the market was cheap; when negative, expensive.  The two lines track each other almost perfectly, indicating that the valuation in the preceding years–and not the payout ratio–is driving the deviation between the two measures.
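For completeness, here is roughly how the two lines can be constructed.  The pink line’s exact recipe is the part to treat as my own choice, not a quotation of the original; shiller_cape and tr_market_cape are the two CAPE series, with the latter already normalized to the former’s historical average:

```python
# Bright green line: percentage deviation between the two measures.
deviation = (tr_market_cape - shiller_cape) / shiller_cape

# Pink line (my guess at the construction): average cheapness of the market
# over the preceding 10 years, relative to the historical (harmonic) average.
cape_avg = 1.0 / (1.0 / shiller_cape).mean()
prior_cheapness = (cape_avg / shiller_cape - 1.0).rolling(120).mean()
```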

What is causing this weird effect?  You already know.  The share buybacks associated with the Total Return EPS (Market) CAPE are being conducted at market prices, rather than at fair value prices.  The same is true for the dividend reinvestments associated with Shiller’s proposed Total Return CAPE and with the version I presented in August of last year; those reinvestments are being conducted at market prices.  That’s wrong.

When share buybacks (or dividend reinvestments) are conducted at market prices, then periods of prior expensiveness produce lower Total Return EPS growth (because the dividend money is invested at unattractive valuations that offer low implied returns).  And, mathematically, what does low growth do to a CAPE, all else equal?  Pull it down.  Past periods of market expensiveness therefore pull the Total Return EPS (Market) CAPE down below the Shiller CAPE, as observed.

Conversely, periods of prior cheapness produce higher Total Return EPS growth (because the dividend money is reinvested at attractive valuations that offer high implied returns).  And what does high growth do to a CAPE, all else equal?  Push it up.  Past periods of market cheapness therefore push the Total Return EPS (Market) CAPE up above the Shiller CAPE, as observed.

Looking at the period from the early 1990s onward, we assumed that the problem was with the Shiller CAPE (the blue line), that the low dividend payout ratio during the period was pushing it up, causing it to overstate the market’s expensiveness.  But, in fact, the problem was with our Total Return EPS (Market) CAPE (the red line).  The very high valuation in the post-1990s period is depressing Total Return EPS (Market) growth (the expensiveness of the share buybacks and dividend reinvestments shrinks their contribution), pulling down on the Total Return EPS (Market) CAPE, and causing it to understate the market’s  expensiveness.

The elimination of this distortion is yet another reason why the buybacks and dividend reinvestments that form the Total Return EPS (or any Total Return Index used in valuation measurements) have to be conducted at fair value prices, rather than at market prices.  Conducting the buybacks and dividend reinvestments at fair value prices ensures that they provide the same accretion to the index across all periods of history, rather than highly variable accretion that inconsistently pushes up or down on the measure.

Now, a number of readers have written in expressing disagreement with this point.  To them, I would ask a simple question: does it matter to the current market’s valuation what the market’s valuation happened to be in the distant past?

Suppose, for example, that in 2009, investors had become absolutely paralyzed with fear, and had sold the market’s valuation down to a CAPE of 1–an S&P level of, say, 50. Suppose further that the earnings and the underlying fundamentals had remained unchanged, and that investors had exacted the pummeling for reasons that were entirely irrational. Suppose finally that investors kept the market at that depressed CAPE of 1 for two years, and that they then regained their senses, pushing the market back up to where it is today, in a glorious rally.  In the presence of these hypothetical changes to the past, what would happen to the current value of a Total Return EPS CAPE that reinvests at market prices?  Answer: it would go up wildly, dramatically, enormously, because the intervening dividends that form the Total Return index would have been invested at obscenely low valuations during the period, producing radically outsized total return growth.  What does high growth do to a CAPE? Push it up, so the CAPE would rise–by a large amount.

Is that a desirable result?  Do we want a measure whose current assessment of valuation is inextricably entangled in the market’s prior historical valuations, such that the measure would judge the valuations of two markets with identical fundamentals and identical prices to be significantly different, simply because one of them happened to have traded more cheaply or expensively in the past?  Obviously not.  That’s why we have to conduct the buybacks and reinvestments that make up the Total Return EPS at fair value.

The general rule is as follows.  When we’re using a Total Return index to model actual investor performance–what an individual who invested in the market would have earned, in reality, with the dividend reinvestment option checked off–we need to conduct the hypothetical reinvestments that make up the Total Return index at market prices.  But when we’re using a Total Return index to measure valuation–how a market’s price compares with its fundamentals–then we need to conduct the hypothetical reinvestments at fair value prices.

The following chart shows the Total Return EPS CAPE properly constructed on the assumption that the buybacks and reinvestments occur at fair value prices:

[Chart: Total Return EPS (Fair Value) CAPE vs. Shiller CAPE, 1871 to 2015]

As you can see, the deviation between the two measures comes out to be much smaller. Normalized to the same historical average, the current value of the Total Return EPS (Fair Value) CAPE ends up being 25.9, versus 27.5 for the original Shiller CAPE.  The difference between the total return and the original measures comes out at 5.7%, a little over 100 current S&P points (versus 12% and 250 points earlier).

Surprisingly, then, properly reinvesting the dividends at the same valuation across history more than cuts the deviation in half, to the point where it can almost be ignored.  As far as the CAPE is concerned, when it comes to the kinds of changes that have occurred in the dividend payout ratio over the last 144 years, there appears to be little effect on the accuracy of Shiller’s original version.  The entire exercise was therefore unnecessary. Admittedly, this was not the result that I was anticipating, and certainly not the result that I was hoping to see.  But it is what it is.

It turns out that Shiller was right to reject the dividend payout ratio argument in his famous 2011 debate with Siegel and Bianco:

“Mr. Shiller did his own calculation about the impact of declining dividends on earnings growth and concluded that it is marginal at best, not meriting any adjustment.” — “Is the Market Overvalued?”, Wall Street Journal, April 9th, 2011.

If the subsequent foray into Total Return space caused him to change that view, then he should change it back.  He was right to begin with.  His critics on that point, myself included, were the ones that were wrong.

Now, this is not to suggest that we shouldn’t prefer to use the Total Return version of the CAPE over Shiller’s original version.  We should always prefer to make our analyses as accurate as possible, and the Total Return version of the CAPE is unquestionably the more accurate version.  Moreover, even though the changes in the dividend payout ratio seen in the U.S. equity space over the last 144 years have not been large enough to significantly impact the accuracy of the original version of the CAPE, the differences between the payout ratios of different countries–India and Austria, to use an extreme example–might still be large enough to make a meaningful difference.  Since the Shiller CAPE is the preferred method for accurately comparing different countries on a valuation basis, it only makes sense to shift to the more accurate Total Return version.  Fortunately, that version is simple and intuitive to build using Total Return EPS.

Admittedly, there is some circularity here.  In building the Total Return EPS Index on the assumption of fair value buybacks, we used the Shiller CAPE as the basis for estimating fair value.  If the Shiller CAPE is inaccurate as a measure of fair value, then our Total Return EPS index will be inaccurate, and therefore our Total Return CAPE, which is built on that index, will be inaccurate.  Fortunately, in this case, there’s no problem (otherwise I wouldn’t have done it this way). When you run the numbers, you find that the choice of valuation measure makes little difference to the final product, as long as a roughly consistent measure is used.  You can build the Total Return EPS Index using whatever roughly consistent measure you want–the Total Return CAPE will not come back appreciably different from Shiller’s original. What drove the deviations in the earlier charts were not small differences in the valuations at which dividends were reinvested, but large differences–for example, the difference associated with reinvesting dividends at market prices from 1942 to 1952, and then from 1997 to 2007, at prices corresponding to three times the valuation.

Now, there are other ways of adjusting for the impact of changing dividend payout ratios. Bianco, for example, has a specific technique for modifying past EPS values. As he explains:

“The Bianco PE is based on equity time value adjusted (ETVA) EPS.  We raise past period EPS by a nominal cost of equity estimate less the dividend yield for that period.”

I cannot speak confidently to the accuracy of Bianco’s technique because I do not have access to its details.  But if the method produces a result substantially different from the Total Return EPS CAPE (which it appears to do), then I would think that it would have to be wrong.  When it comes to changing dividend payout ratios, the Total Return EPS CAPE is airtight.  It treats all periods of history absolutely equally in all conceivable respects, perfectly reducing them to a common basis of 0% (payout).  Because it reinvests the dividends at fair value (the historical average valuation), every reinvested dividend in every period accretes at roughly the same rate, which corresponds to the actual average rate at which the market has historically accreted gross of dividends (approximately 6% real).

If our new-and-improved version of the CAPE is appropriately correcting the dividend payout ratio distortions contained in the original version, then the deviation between our new-and-improved version and the original version should be a clean function of that ratio (rather than a function of other irrelevant factors, such as past valuation).  When the dividend payout ratio is low, our new-and-improved version should end up below the original version, given that the original version will have overstated the valuation.  When the dividend payout ratio is high, our new-and-improved version should end up above the original version, given that the original version will have understated the valuation.

Lo and behold, when we chart the deviation between the two versions of the CAPE alongside the dividend payout ratio, that is exactly what we see: a near-perfect correlation (91%), across the full 134 year historical period.

[Chart: deviation between the new CAPE and the original (blue) vs. trailing Shiller dividend payout ratio (red)]

The blue line shows the difference between our new-and-improved version of the CAPE and the original version.  The red line shows the trailing Shiller dividend payout ratio, which is the 10 year average of real dividends per share (DPS) divided by the 10 year average of real EPS.  We use a Shillerized version of the dividend payout ratio to remove noise associated with recessions–especially the most recent one, where earnings temporarily plunged almost to zero, causing the payout ratio to temporarily spike to a value north of 300%.
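In the notation of the earlier sketches, the Shillerized payout ratio is just a ratio of two ten-year moving averages:

```python
# Trailing 'Shillerized' payout ratio: 10-yr avg real DPS over 10-yr avg real EPS.
shiller_payout = real_dps.rolling(120).mean() / real_eps.rolling(120).mean()
```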

The fact that the two lines overlap almost perfectly indicates that the deviation between our new-and-improved version and the original version is a function of the factor–the dividend payout ratio–that is causing the inaccuracy in the original version, rather than some other questionable factor.  That is exactly what we want to see.  It is proof positive that our new-and-improved version is correcting the distortion in question, and not introducing or exploiting other distortions (that, conveniently, would make the current market look cheaper).

Now, to be clear, the secular decline in the dividend payout ratio seen across the span of U.S. market history has not substantially affected the accuracy of the original Shiller CAPE.  However, it has substantially affected the trend growth rate of EPS.  So, though it may not be imperative that we use the Total Return version of the CAPE when measuring valuation, it is absolutely imperative that we use the Total Return version of EPS when analyzing earnings trends and projecting out future earnings growth.

We are left with the question: if the distortions associated with the dividend payout ratio are not significant, then why does the Shiller CAPE show the U.S. equity market to be so expensive relative to history?  We can point to three explanations.

  • First, on its face, the market just is historically expensive–even on a non-Shiller P/E measurement.  Using reported EPS, the simple trailing twelve month P/E ratio is roughly 20.5, which is 53% above its historical average (harmonic) of 13.4.  Using the operating EPS published by S&P, the simple trailing twelve month P/E ratio is 18.8, which is 40% above that average.
  • Second, the accounting writedowns associated with the 2008-2009 recession are artificially weighing down the trailing average 10 year EPS number off of which the Shiller CAPE is calculated.  Prior to 2014, this effect was more significant than it is at present, given that the 2001-2003 recession also saw significant accounting writedowns.  The trailing 10 year average for the years up to 2014 therefore got hit with a double-whammy.  That’s why the increase in the Shiller CAPE in recent years has not been as significant as the increase in market prices (since December 2012, the CAPE is up roughly 30%, but prices are up roughly 50%).  2014 saw the 2001-2003 recession fully drop out of the average, reducing the CAPE’s prior overstatement.
  • Third, as the chart below shows, real EPS growth over the last two decades–on both a regular and a Total Return basis–has been meaningfully above the respective historical averages, driven by substantial expansion in profit margins.  Recall that high growth produces a high CAPE, all else equal.

[Chart: real EPS and Total Return EPS growth over the last two decades vs. historical averages, alongside profit margins]

These last two factors–the effects of accounting writedowns and the effects of profit margin expansion–will gradually drop out of the Shiller CAPE (unless you expect another 2008-type recession with commensurate writedowns, or continued profit margin expansion, from these record levels).  As they drop out, the valuation signal coming from the Shiller CAPE will converge with the signal given by the simple ttm P/E ratio–a convergence that is already happening.

We conclude with the question that all of this exists to answer: Is the market expensive? Yes, and returns are likely to be below the historical average, pulled down by a number of different mechanisms.  Should the market be expensive?  “Should” is not an appropriate word to use in markets.  What matters is that there are secular, sustainable forces behind the market’s expensiveness–to name a few: low real interest rates, a lack of alternative investment opportunities (TINA), aggressive policymaker support, and improved market efficiency yielding a reduced equity risk premium (difference between equity returns and fixed income returns).  Unlike in prior eras of history, the secret of “stocks for the long run” is now well known–thoroughly studied by academics all over the world, and seared into the brain of every investor that sets foot on Wall Street.  For this reason, absent extreme levels of cyclically-induced fear, investors simply aren’t going to foolishly sell equities at bargain prices when there’s nowhere else to go–as they did, for example, in the 1940s and 1950s, when they had limited history and limited studied knowledge on which to rely.

As for the future, the interest-rate-related forces that are pushing up on valuations will get pulled out from under the market if and when inflationary pressures tie the Fed’s hands–i.e., force the Fed to impose a higher real interest rate on the economy.  For all we know, that may never happen.  Similarly, on a cyclically-adjusted basis, the equity risk premium may never again return to what it was in prior periods, as secrets cannot be taken back.


Using Total Return EPS to Decompose Historical S&P 500 Performance: Charts from 1871 to 2015

[Chart: 10-year decomposition of S&P 500 total return, 1871 to 2015]

In this piece, I’m going to do five things:

  • First, I’m going to clarify the purpose of Total Return EPS, what it’s trying to accomplish.  In a single sentence, the purpose of Total Return EPS is to convert dividends into EPS so that the fundamental sources of return can be added together into one single term whose past growth rate can be analyzed and whose likely future growth rate can be projected.
  • Second, I’m going to explain why the trend growth rate of Total Return EPS for the S&P 500 (~6%) is roughly equal to the historical average return on equity for the U.S. corporate sector (~6%).  The explanation will include a proposed theory for why return on equity generally reverts to the mean, and also for why it may not revert to its prior mean in the present environment.
  • Third, I’m going to address a question that a significant number of readers have asked: why does Total Return EPS assume that buybacks occur at fair value, rather than at market prices?
  • Fourth, I’m going to show how actual total return can be “decomposed”–i.e., separated out–into three contributing components: (1) Total Return EPS growth, which consists of regular EPS growth plus the return from reinvested dividends (or hypothetical share buybacks), (2) the return contribution from the change in valuation–in this case, the change that occurs in the ttm P/E ratio from purchase to sale, and (3) the return contribution from interim deviations from fair value–a neglected source of return that arises from the valuations at which dividends are reinvested (or at which shares are hypothetically repurchased), and therefore the rate at which they accrete.
  • Fifth, I’m going to present charts of these components for the S&P 500 from 1871 to 2015, on time horizons of 10, 20, 30, 40, 50, 60, and 70 years.

Three Options: Dividends, Expansion, Repurchases

When the corporate sector earns profit, it can do one of two things: distribute the profit to shareholders as dividends, or reinvest the profit.

  • When it distributes the profit to shareholders as dividends, the shareholders get a direct return–a direct deposit of money into their accounts.
  • When it reinvests the profit, the shareholders get an indirect return–“growth.” The profits earned in future periods, and the future dividends that can be paid from them, increase in size.  In an efficient market, this increase coincides with an increase in the market prices of shares, allowing shareholders to realize a return by selling.

Looking closer at the second option, the corporate sector can reinvest profit in one of two ways: by using it to fund business expansion, or by using it to repurchase equity (or debt). Both options produce growth in earnings per share (EPS).

  • When the corporate sector uses profit to fund business expansion, it adds new capital that it can use to produce and sell additional output to the economy, from which additional income can be earned.  It grows the EPS by growing the E.
  • When the corporate sector uses profit to repurchase equity–for example, by buying back shares on the open market and then cancelling them–it grows the EPS by shrinking the S.  (Note: The corporate sector can also use profit to repurchase or retire debt.  We can view this option as roughly equivalent to the repurchase of equity. Both options entail a reduction in the number of outstanding claims on the corporate sector, rather than an increase in the size of the corporate sector’s operations).

What we have, then, are three destinations for corporate profit: (1) the payment of dividends, (2) investment in business expansion, and (3) the repurchase of equity (or debt).  The first option entails a direct return, a direct deposit of money into shareholder pockets.  The second and third options entail an indirect return, achieved through growth in EPS.

EPS Growth: In Search of a Trend

What we want to know is the “trend” (or “normal”) rate of growth of EPS.  Knowing that trend rate would allow us to roughly estimate the likely future trajectory of EPS, given its position relative to trend.

To illustrate, suppose that the trend rate of EPS growth is 4% per year, but that EPS over the last several years hasn’t grown at all, or worse, has fallen substantially.  We would then expect future EPS growth to be higher than the trend rate, higher than 4%, as EPS catches up.  We would expect there to have been some kind of stunting process–say, a depression in profit margins–that explains the underperformance relative to trend, and that entails the potential for future outperformance, to be unleashed in an eventual mean-reversion.

The problem, of course, is that when we look at the historical data, we do not find a stable, reliable trend growth rate in EPS.  Instead, we find a trend growth rate that has increased substantially over time.  The following table shows averages of rolling 10 year annualized real EPS growth rates for the S&P 500 for the periods 1871 to 1930, 1930 to 1990, and 1990 to 2015, with each period beginning and ending in January:

[Table: average rolling 10-year annualized real EPS growth rates, 1871 to 1930, 1930 to 1990, 1990 to 2015]

As you can see in the table, the average rolling growth rate seen from 1990 to 2015 is four times the rate seen 100 years before it.  And note that this rate is the growth rate of GAAP EPS.  It includes the effects of the questionable accounting writedowns that took place in 2003 and 2009.  If we use a corrected version of EPS that excludes those writedowns, the rolling average growth rate for the period increases to 4.76%–more than six times the rate achieved 100 years before it.
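For reference, rolling growth rates of this kind can be computed along the following lines (a sketch; real_eps is a hypothetical monthly Series of real EPS, and the window convention, windows ending within each span, is my assumption):

```python
def rolling_annualized_growth(series, years=10):
    """Annualized real growth over the trailing `years`, at each monthly date."""
    months = years * 12
    return (series / series.shift(months)) ** (1.0 / years) - 1.0

avg_1990_2015 = rolling_annualized_growth(real_eps).loc["1990":"2015"].mean()
```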

The reason for the increase in the trend growth rate of EPS is no mystery.  EPS growth varies inversely with the share of profit that is paid out as dividends.  That share has fallen over time.  The following table shows average payout ratios for the periods in question:

[Table: average dividend payout ratios, 1871 to 1930, 1930 to 1990, 1990 to 2015]

Now, there’s a legitimate question to ask here.  How much EPS growth should a given reduction in the dividend payout ratio actually produce? Is the observed increase in the growth rate, from 0.72% to 3.16%, commensurate with the observed decrease in the average payout ratio, from 69% to 52%?  Sadly, there’s no way to know.  Because we have no way to know, we can’t be sure that the reduced payout ratio is the only factor, or even the main factor, driving the increased growth.  For all we know, there could be some other factor driving it, a factor that will substantially impact growth going forward, either positively or negatively.

Return on Equity and Fundamental Total Return

We can distinguish between two sources of shareholder total return.  The first source is “fundamental”, and arises from the payment of dividends and the growth of EPS (or some other relevant fundamental).  The second source is “nonfundamental”, and arises from changes that occur in the valuations of assets between the time of purchase and the time of sale.  If you buy an asset and it pays you a dividend, or its price goes up in response to growth in relevant fundamentals–sales, earnings, net asset values, and so on–that is a “fundamental” return.  If you buy an asset and its price goes up independently of any type of growth in fundamentals–somebody just offers to pay you a higher price for the asset, because they want it more than the next person–that is a “nonfundamental” return.

The item that follows a consistent trend over time is not EPS growth per se, but the fundamental total return that accrues to shareholders–the return that dividends and EPS growth combine to produce.  Let me now explain why that return follows a consistent trend.  Bear with me.

The fundamental total return that accrues to shareholders is a function of the return that corporations generate on their equity, on the amount of capital that was invested to form them.  That return, after all, has to go to someone; it goes to the shareholders, those that made the investment, that put the capital in.

Now, return on equity (ROE) is mean-reverting.  When ROE is high in a given sector or industry, new investment flocks in, seeking to capture the high return.  The new investment leads to excess capacity, increased competition, weakened pricing power, and a reduction in profit that pulls the ROE for the sector or industry back down. When ROE is low in a given sector or industry, new investment stops happening.  The reduction in investment leads to an eventual undercapacity, reduced competition, increased pricing power for the remaining firms, and an increase in profit that pushes the ROE for the sector or industry back up–assuming, of course, that the goods and services being produced are actually wanted by the economy.  If they are not wanted, then the ROE for the industry or sector will go to zero, which is where it belongs for those that make unwanted things.

We’re currently seeing a textbook case of this process play out in the energy sector.  The economy needs a certain amount of oil.  The prior market price of oil–$75+–reflected the marginal cost of producing that amount, plus the extra “oomph” that speculation probably added.  But then efficient new drilling techniques were developed.  The strong profits that these techniques could earn with oil at $75+ led to an investment boom.  The investment boom eventually created an overcapacity that has pushed the price of oil down and that has dramatically lowered the return.  New investment has therefore dried up–and will stay dried up–until an undercapacity develops that increases the price enough to make the return attractive again.

This process of mean-reversion functions at its fiercest in the energy sector, where the good being sold is a pure commodity, and where there are few barriers or “moats” to block out competition and new entry.  But it applies in a general sense to all sectors and industries, and to the aggregate corporate sector as well.

(Caveat: If an economy evolves in a way that entails an increase in the number of barriers and “moats” in place to block out competition and new entry–i.e., in a way that makes it harder for new capital to partake in the high returns that existing capital might be enjoying–then the “mean” that the return on equity reverts to might increase accordingly. It remains an open question as to whether the new technology economy, with its tendency to produce winner-take-all scenarios in which the first mover is forever protected from competition–think $MSFT, $FB, $GOOG, $AAPL, and so on–has provoked such an increase.  I suspect that there is at least some of that effect at play in the much-discussed increase in ROEs and profit margins that we’ve seen take place over the last 20 years.)

Now, because the fundamental total return to shareholders is a function of the ROE, and because the ROE is mean-reverting, the fundamental total return to shareholders–paid out to them in dividends and growth–is similarly mean-reverting. Its mean-reverting nature is the reason that it follows a reliable trend over time.

Some might find this point hard to grasp–it’s admittedly hard to explain. To get a better feel for it, just think about the fundamental return that accrues to energy sector shareholders–shareholders in companies like $XOM and $CVX.  Can you see how the process that produces mean-reversion in energy sector ROEs would also produce mean-reversion in the fundamental return that $XOM and $CVX shareholders receive over time?  The same operating environment that allowed those companies to generate outsized fundamental returns–outsized EPS growth and outsized dividends–when oil was $75+ is what pulled in all of the new investment that fueled the current overcapacity, the squeeze to find buyers of all of the output, that is now pushing those same fundamental returns back down, hedges notwithstanding.

If there had been no new oil to drill, then that would have been a very powerful “moat”, and the high returns that these companies enjoyed might have been sustainable.  But when there is a new discovery that opens up ample new supply with the promise of a high return to anyone with capital that wants to make an investment in it, the high returns–to the new entrants and the existing players–simply will not last.

Ideally, then, we would ditch the effort to find the trend growth rate in EPS, and would instead focus on finding the trend in the metric that actually follows a trend–fundamental total return.  The problem, of course, is that fundamental total return is tied up in two distinct types of terms: EPS growth and dividends.  To properly analyze the trend in that return over time, we need a way to convert the terms into the same type of term, so that they can be added together to produce a single term, a single index.

The optimal way to solve the problem is to convert the dividends into a type of additional EPS, and then add the additional EPS to the actual EPS.  Then, we will end up with one single term that grows over time at a consistent trend rate, whose position relative to trend we can examine and make informed future projections based on.

That is precisely what the technique in the prior piece tries to do.  It tries to convert dividends into additional EPS by hypothetically assuming that dividends are diverted into share buybacks.  It then adds the additional EPS from the hypothetical share buybacks to the EPS that actually occurred, so as to form the unified, all-in-one term being sought: Total Return EPS.

The Equivalence of Reinvested Dividends and Share Buybacks

Now, from a total return perspective, it doesn’t matter whether a corporation chooses to distribute its profit as dividends, or use its profit to buy back shares.

  • If it pays out dividends, the dividends will be reinvested (that’s at least the assumption that “total return” indices hypothetically make).  The reinvestments will cause the number of shares that each shareholder owns to grow.
  • If it buys back shares, its outstanding share count will shrink, and therefore its earnings per share (EPS) will grow.  Mathematically, the growth in its EPS will roughly equal the growth in the number of shares that the shareholder would have come to own via dividend reinvestment.  If the market is efficient–meaning that it properly prices value–then the shareholder will end up no better or worse off, at least on a pre-tax basis (after tax, of course, is a different story).

Another way to express the point: When a corporation buys back shares with money that would otherwise have gone to dividends, it is effectively doing the dividend reinvestments for the shareholders.  It is accumulating shares in their names, as opposed to paying money out to them for them to accumulate shares on their own, independently of the company.  In truth, the two are not perfectly equivalent–share buybacks are actually slightly more accretive than reinvested dividends, for mathematical reasons that are too tedious to try to explain.  But they are close enough.
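A toy example makes the near-equivalence concrete (all numbers invented for illustration):

```python
# Toy example: $10 of EPS, a $100 share price (fair value), $5 of cash to return.
eps, price, cash = 10.0, 100.0, 5.0

# Option 1: pay the $5 as a dividend, which the shareholder reinvests at $100.
shares = 1.0 + cash / price              # 1.05 shares owned
look_through_eps = shares * eps          # $10.50 of earnings now attributable to the holder

# Option 2: spend the $5 buying back stock at $100, shrinking the float by 5%.
eps_after_buyback = eps / (1.0 - cash / price)   # ~$10.53 per remaining share
```

The two outcomes land within a few cents of each other; the small edge for the buyback is the “slightly more accretive” effect just mentioned, since 1/(1 - x) is a shade larger than 1 + x.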

For any equity market, then, we are free to interchange dividends and share buybacks at will.  We can rebuild total return indices on the assumption that all dividends are hypothetically replaced with share buybacks, or that all share buybacks are hypothetically replaced with dividends–the replacements, if properly constructed, will have no perceptible effect on the total return.

The technique used to build the Total Return EPS index exploits this convenient equivalence.  It assumes, hypothetically, that for all of history, all dividend money that actually got paid was not actually paid, and was instead used to buy back shares.

When we examine Total Return EPS over history, we find that it does follow a reliable trend, as theory would suggest.  Consider the following table, which shows average rolling 10 year EPS and Total Return EPS growth rates over the periods identified in the previous table.

[Table: average rolling 10-year EPS and Total Return EPS growth rates, by period]

With Total Return EPS, we see a far more consistent growth rate.  That growth rate is roughly on par with the corporate sector’s average historical return on equity, some number close to 6%, as theory would again suggest (FRED).

[Chart: historical return on equity for the U.S. corporate sector (FRED)]

Buybacks: Why Fair Value Prices?

The hypothetical buybacks that are used to form the Total Return EPS are assumed to occur not at market prices, but at fair value prices–prices that correspond to an average valuation across history.  A number of readers have tweeted and e-mailed in, asking why we make this assumption.  Why not assume that the buybacks occur at market prices instead, and save the confusion?

The reason is simple.  We’re trying to build an index that captures the fundamental total return that corporations generate for their shareholders through the profits they earn, which they deliver to their shareholders in the form of dividends and EPS growth.  If we were to conduct the buybacks at market prices, then that return would fluctuate based on the market’s valuation–a nonfundamental factor that has nothing to do with those profits.

In conducting the hypothetical buybacks, we are effectively converting dividends into a type of EPS (that gets added to the regular EPS to form the Total Return EPS).  That is, instead of paying out the dividends, we are using them to shrink the S, which effectively adds more EPS (makes EPS bigger by reducing the denominator).

Now, the valuation at which the buybacks occur represents the effective rate of conversion between dividends and EPS.  A low buyback valuation will convert dividends into a large amount of additional EPS, as the dividends will buy back a large number of shares. Conversely, a high buyback valuation will convert dividends into only a small amount of additional EPS, as the dividends will buy back only a small number of shares.
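A quick worked illustration of that conversion rate, with invented numbers:

```python
# $1.00 of dividends per share, against $1.00 of EPS:
for buyback_pe in (10, 30):
    fraction_retired = 1.00 / buyback_pe            # dividend / (P/E x EPS)
    eps_after = 1.00 / (1.0 - fraction_retired)
    print(buyback_pe, round(eps_after, 3))          # P/E 10 -> 1.111; P/E 30 -> 1.034
```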

To capture the true fundamental total return, and not add nonfundamental noise associated with where market prices just so happen to be, we need to ensure that the same rate of conversion between dividends and EPS is applied in all periods.  We need to ensure that each dividend, adjusted for size, adds the same amount of relative EPS as every other, regardless of when it happens to be paid.  That’s why we assume that all buybacks occur at the same valuation: “fair value”, which, in the previous piece, we defined in terms of the historical average of the Shiller CAPE.

The assumption that the buybacks that underlie Total Return EPS occur at fair value may seem trivial and unimportant, but it makes a meaningful difference.  This difference will become particularly significant when we try to use Total Return EPS to build a new-and-improved Shiller CAPE.  If we use market valuations for the buybacks in Total Return EPS, the new-and-improved Shiller CAPE will end up skewed, giving an unnecessarily inaccurate picture of the market’s true valuation.  I plan to discuss the point in more detail in a later piece on the Shiller CAPE.

An Important Clarification: The Buybacks are Hypothetical

Let me now make an important clarification, based on some of the questions I’ve received. Regular S&P 500 EPS has continually grown throughout history.  Its growth has been driven by both business expansion (real economic investment that adds capital and increases output–EPS growth driven by growth in the E) and the repurchase of equity (buybacks, acquisitions, mergers, and so on–EPS growth achieved by shrinkage of the S).

The Total Return EPS doesn’t modify any of that growth.  Rather, what Total Return EPS does is add the additional growth that would have been produced if the dividends that were paid out had instead been used, completely hypothetically, to buy back shares (or fund acquisitions, mergers, and so on).

The following schematic makes the point more clear:

[Schematic: regular EPS growth plus the additional growth from hypothetical buybacks, forming Total Return EPS]

Some readers have asked: in constructing Total Return EPS, why do you assume that the buybacks occur at fair value prices, when, in reality, corporations like Apple and IBM are buying back their shares at market prices?   This question misses the point.  When I talk about conducting buybacks at fair value prices, I’m not referring to those buybacks, the ones that actually happened in reality, or that are happening now.  Their effects have already shown up, or will show up, in regular EPS growth.  The buybacks that I’m referring to, the ones associated with the construction of Total Return EPS, are hypothetical buybacks–buybacks that didn’t actually happen, that aren’t happening, but that we assume happened or are happening, in lieu of dividends, so as to convert the dividend return into a type of EPS growth.

Interim Valuation: A Neglected Driver of Returns

Our assumption that the hypothetical buybacks occur at fair value highlights a crucial fact about returns that often gets missed.  Valuations matter to returns not only in relation to terminal prices–the price at which you buy and the price at which you sell–but also in relation to interim prices–the prices at which your dividends get reinvested (or, in this context, at which your CEO buys back shares in your name).  As we will later see, this effect is not small, not negligible, even though we might intuitively expect it to be.

In a future piece, I’m going to explore the impact further.  For a quick teaser, consider the following surprising result.  From 1871 to 2015, the actual annualized Total Return for the S&P 500–including the return from changes in valuation–was 6.89%. If, from 1871 to 2015, everything had been kept the same, except that interim prices had been permanently pushed up to a Shiller CAPE equal to the current value of 27.5, with the dividends reinvested at those high prices, rather than at the much cheaper prices that were actually realized, the total return would have been only 4.78%.  That’s more than 200 bps–almost a third of the historical total return–lost to this mechanism.

Remember this fact the next time you find yourself assuming that a policymaker-coddled market that always stays elevated, that never crashes or corrects, would somehow be a good thing for buy-and-hold investors.  It would not be.  The winners in such a market would actually be the impatient, weak-willed, market-timing-prone people who sell to buy-and-hold investors, when those investors go to reinvest their dividends (or when corporations go to buy back shares, which is all they seem to want to do these days).  Those people would never again have to sell at unfair prices, never again have to foot the bill for the bargains that buy-and-hold investors–the Warren Buffetts of the world–have historically enjoyed.

In the next section, I’m going to present the theory that underlies the decompositions that will follow at the end of the piece, so that others can reproduce the results themselves.  If you’re not interested, feel free to fast forward to the end, where the charts are presented and discussed.  To briefly summarize, I’m going to arrive at the following two equations:

(6) Total Return EPS Growth = EPS Growth + Return Contribution from Dividends Reinvested at Fair Value

(7) Total Return = Total Return EPS Growth + Return Contribution from Change in P/E Ratio + Return Contribution from Interim Deviations from Fair Value

Along the way, I’m going to explain what each term means, and how each term is calculated from the data.

Decomposing Equity Total Returns: The Theory

In his 1981 magnum opus, Robert Shiller eloquently delineated the fundamental components of equity total return:

“Once we know the terminal price and intervening dividends, we have specified all that investors care about.” — Robert Shiller, “Do Stock Prices Move Too Much to be Justified by Subsequent Changes in Dividends?”, 1981 

We can translate this point loosely as follows:

(1) Total Return = Price Growth + Return Contribution from Reinvested Dividends

We can express price growth in terms of growth in EPS (a fundamental that gets decided by economic processes) and the return contribution from the change in the P/E ratio (a value that gets decided based on the brute forces of supply and demand mixed together in equilibrium with investor beliefs about what is a fair, appropriate, justified, responsible, sufficiently-rewarding price to pay).

(2) Price Growth = EPS Growth + Return Contribution from Change in P/E Ratio

Substituting (2) into (1) we get:

(3) Total Return = EPS Growth + Return Contribution from Change in P/E Ratio + Return Contribution from Reinvested Dividends

Now, let’s look at this last term, Return Contribution from Reinvested Dividends.  We can express this term as the combination of (a) the Return Contribution from Dividends Reinvested at Fair Value and (b) the Return Contribution from Interim Deviations from Fair Value.  The return contribution from reinvested dividends is the return that would have accrued if they had been reinvested at fair value, plus the “extra” return (positive or negative) that has arisen from the fact that, in reality, they were not actually reinvested at fair value, but were reinvested at higher or lower valuations, producing a lower or higher return.

We end up with:

(4) Return Contribution from Reinvested Dividends = Return Contribution from Dividends Reinvested at Fair Value + Return Contribution from Interim Deviations from Fair Value

Combining (3) and (4) we get a total return equation with four components:

(5) Total Return = EPS Growth + Return Contribution from Dividends Reinvested at Fair Value + Return Contribution from Change in P/E Ratio + Return Contribution from Interim Deviations from Fair Value

Now, to substitute in Total Return EPS, we recall that Share Buybacks and Reinvested Dividends are the same thing.  This means:

(6) Total Return EPS Growth = EPS Growth + Return Contribution from Dividends Reinvested at Fair Value

Inserting (6) into (5) we get:

(7) Total Return = Total Return EPS Growth + Return Contribution from Change in P/E Ratio + Return Contribution from Interim Deviations from Fair Value

These two equations, (6) and (7), are the equations that we are going to visually plot. Before we can do that, however, we need to find a way to quantify the terms in each equation.

We do that as follows:

    • (Regular) EPS Growth: Trivial.  We calculate the annualized % change between the starting and finishing values of (regular) EPS.
    • Total Return EPS Growth: Again, trivial.  We calculate the annualized % change between the starting and finishing values of Total Return EPS.  Directions for how to build the Total Return EPS index can be found in the previous piece.
    • Return Contribution from Dividends Reinvested at Fair Value: We take the difference between Total Return EPS Growth and Regular EPS Growth.  This difference equals the contribution from reinvested dividends (or, alternatively, the contribution from share buybacks–they are the same thing).
    • Return Contribution from Change in P/E Ratio: We take the difference between price growth and EPS Growth.  This difference just is the return contribution from the change in the P/E ratio.

Now, to get the final term, the Return Contribution from Interim Deviations from Fair Value, we need to build a new index.  Call that index the “Total Return EPS with Purchases at Market Prices” index.  This index is identical to the Total Return EPS index, except that the buybacks are conducted (or the dividends reinvested) at market prices rather than at fair value prices.

    • Return Contribution from Interim Deviations from Fair Value: Take the difference between the annualized growth of “Total Return EPS with Purchases at Market Prices” and the annualized growth of Total Return EPS.  This difference just is the added return that comes from buying back shares (or reinvesting dividends) at market valuations that do not always average to fair value.
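Putting the definitions together, the whole decomposition can be sketched in a few lines of Python.  The function and variable names are mine; the inputs are hypothetical monthly real Series for price, EPS, Total Return EPS built with fair-value buybacks, and Total Return EPS built with market-price buybacks, with start and end given as pandas Timestamps present in the index:

```python
def annualized(ratio, years):
    """Convert a gross growth ratio over `years` into an annualized rate."""
    return ratio ** (1.0 / years) - 1.0

def decompose(price, eps, tr_eps_fair, tr_eps_market, start, end):
    """Sketch of equations (6) and (7) over the window [start, end]."""
    years = (end - start).days / 365.25

    eps_growth    = annualized(eps[end] / eps[start], years)
    tr_eps_growth = annualized(tr_eps_fair[end] / tr_eps_fair[start], years)
    price_growth  = annualized(price[end] / price[start], years)

    divs_at_fair  = tr_eps_growth - eps_growth     # eq. (6), rearranged
    pe_change     = price_growth - eps_growth      # contribution from the change in P/E
    interim       = annualized(tr_eps_market[end] / tr_eps_market[start], years) - tr_eps_growth

    total_return  = tr_eps_growth + pe_change + interim   # eq. (7); per the piece, this sum
    return {"eps_growth": eps_growth,                     # matches an independently built
            "dividends_at_fair_value": divs_at_fair,      # total return index within a few bps
            "pe_change": pe_change,
            "interim_deviations": interim,
            "total_return": total_return}
```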

Charting the Decomposition

In the following charts, I’m going to decompose–i.e., separate out–the historical S&P 500 total return into three contributing components: Total Return EPS Growth (purple), Return Contribution from Change in P/E Ratio (orange), and Return Contribution from Interim Deviations from Fair Value (blue).  I’m going to further decompose Total Return EPS Growth into two components: Return from Reinvested Dividends (identical to Return from Hypothetical Share Buybacks) (green) and Regular EPS Growth (yellow). Recessionary periods for the U.S. economy will be shaded in gray.

The decompositions will be conducted on the returns at time horizons of 10, 20, 30, 40, 50, 60, and 70 years, from 1871 to 2015.  For each time horizon, there will be two separate charts (miniatures shown below), with the first chart decomposing the total return, and the second chart decomposing the Total Return EPS.  As with all numbers in this piece, the growth rates and returns are real, properly adjusted for inflation.

10 Years

[Chart: decomposition of 10-year S&P 500 total returns, 1871 to 2015]

A brief discussion on how to read the chart.  The x-axis has two dates.  The upper is the starting date for a period, the lower is the ending date–in this case, 10 years later.

Consider the slice of the chart that begins with 1989 and ends with 1999.  I’ve boxed it in red below:

[Chart: 10-year decomposition with the 1989 to 1999 slice boxed in red]

The purple, the Total Return EPS growth, was roughly on par with the historical average, around 6%. What this means is that from 1989 to 1999, the sum of the return from dividend reinvestments (or hypothetical share buybacks–same thing) at fair value and the return from regular EPS growth amounted to 6% per year.

The orange, the Contribution from the Change in P/E ratio, was enormously positive, adding more than 10% to the return.  Of course, that’s consistent with what we remember. In 1989, valuations were reasonable; in 1999, they were in a bubble.  The transition from normal valuations to bubble valuations produced phenomenal returns for shareholders. In hindsight, of course, nothing was actually “produced”–returns were simply pulled forward from the future, stolen from those that bought in at the end.

The blue, the Contribution from Interim Deviations from Fair Value, was actually negative, subtracting approximately 1% from the return.  This also checks with what we remember. From 1989 to 1999, valuations were substantially above average.  There were only a few very mild corrections that took place–certainly nothing resembling a crash.  For the most part, the market just went straight up.  The above average valuations depressed the return from reinvested dividends relative to the alternative of a market at fair value (which is what Total Return EPS is indexed to).

Notice that as we move to the right in the chart, towards starting dates in the early 1990s, the Contribution from Interim Deviations from Fair Value gets even more negative, approaching -2% per year.  To understand why, recall that the market in the late 1980s and early 1990s was actually valued fairly attractively.  When we move to the right, those years drop out, and get replaced by the acute phase of the tech bubble, when the market was radically expensive.

The thin black line is the actual total return, which almost exactly equals, within a few bps, the sum of the contributors, as it should.  Note that I’m calculating the actual total return not by summing the contributors, but by building an entirely separate total return index, using the normal methods for doing so.  The chart can therefore be taken as empirical proof that the decomposition is analytically correct–the numbers, calculated by separate methods, add up perfectly, as they should.

The chart of the decomposition of Total Return EPS, shown below, follows the same structure:

[Chart: decomposition of 10-year Total Return EPS growth, 1871 to 2015]

The chief thing to notice in the Total Return EPS chart is how the mix of the return has shifted from green (reinvested dividends, or alternatively, hypothetical share buybacks) to yellow (EPS growth).  This shift will become more clear and compelling as we move to longer time horizons, where the interfering cyclical noise will get smoothed out.

20 Years

[Chart: decomposition of 20-year total returns]

[Chart: decomposition of 20-year Total Return EPS growth]

30 Years

[Chart: decomposition of 30-year total returns]

[Chart: decomposition of 30-year Total Return EPS growth]

40 Years

[Chart: decomposition of 40-year total returns]

[Chart: decomposition of 40-year Total Return EPS growth]

50 Years

[Chart: decomposition of 50-year total returns]

[Chart: decomposition of 50-year Total Return EPS growth]

60 Years

[Chart: decomposition of 60-year total returns]

[Chart: decomposition of 60-year Total Return EPS growth]

70 Years

[Chart: decomposition of 70-year total returns]

[Chart: decomposition of 70-year Total Return EPS growth]

Conclusions

On longer time horizons, we see certain patterns crystallize.  The Total Return EPS, shown in purple, converges on a trend growth rate slightly below 6% annualized.  The green–the dividend (buyback) return–shrinks, while the yellow–the return from regular EPS growth–expands, keeping the sum of the two–Total Return EPS growth–on trend.

The shift from green to yellow is the shift in corporate preference visualized–away from dividends and towards growth.  When we try to conduct trend analysis on regular EPS–the yellow–we inevitably miss this shift, and therefore arrive at faulty conclusions.  What we need to analyze instead is the black line, the sum, the Total Return EPS, which has held to its trend comparatively well over the long-term.

In earlier periods of the charts, frequent market cheapness contributed meaningfully to the return.  But over time, as the market has become more efficient, less prone to violent downturns and crashes, that contribution has faded.  Notice that the blue–the contribution from interim deviations from fair value–is much thinner now than it used to be.  In charts of shorter time horizons (10 or 20 years, for example), it has even gone negative.  The shift to a negative contribution reflects the secular increase that has occurred in the market’s valuation, the valuation at which dividends are reinvested.  If, going forward, the market successfully holds steady at its currently elevated valuation, successfully avoiding the pull of downturns and crashes, then the blue will stay negative, and total returns will underperform the historical average accordingly–simply by that mechanism, never mind the others.

Investors need to understand that they can’t have it both ways: they will have to either accept historical levels of volatility, which will allow them to reinvest their dividends at cheap prices every so often (and allow their CEOs to buy back shares and acquire companies at those same prices), or they will have to accept lower than normal historical returns.  The growing corporate preference for buybacks (and acquisitions, and mergers) as a low-risk, tax-efficient alternative to risky capital expenditure will only exacerbate this impact.

At present, nearly 100% of current S&P 500 EPS is being used to fund dividends and buybacks–a trend that looks set to continue.  Going forward, interim valuations–which will influence the returns that those dividends and buybacks produce–are therefore likely to be even more impactful than they were in the past.  If valuations remain where they currently are–at levels that would qualify as historically expensive even on the uncertain assumption that profitability will remain at record highs–future returns are likely to suffer accordingly.
