Thoughts on alpha decay and the obsolescence of simple systems

I’ve been scratching my head over a few posts on the Quandl Blog about alpha decay. I commented on two of them, “Alternative Data – The Newest Trend in Financial Data” and “The Unbearable Transience of Alpha”, in a recent blog post.

Specifically, one line best encapsulates my concern: “Professional allocators will not pay hedge fund fees for the execution of strategies that are on the first year curriculum of any Masters of Finance program.” It makes sense that if it were easy, anyone could do it. But, according to the efficient market hypothesis, if everyone does it, then “it” ceases to be. Tammer would suggest that people who are still using things that everyone knows about and does (i.e., P123 users who rely on fundamental data and tired old value investing formulas) are basically guaranteeing their own mediocrity. He proposes “alternative data” as one way of maintaining an edge.

This idea is not all that “fringy”:
On episode 117 of the Investor’s Podcast, Bill Miller contended that conventional value screening methods increasingly return value traps (i.e., low valuation ratios that deserve to be low). And in a recent interview with Barry Ritholtz, Howard Marks remarked that typical investors will achieve typical performance – to be different, one must think differently.

I’ve accepted the idea that the alpha content of fundamental and other conventional data is decaying. As a result, I should rely more heavily on recent performance (via Bayesian inference) to infer future performance – but not so heavily that I forget the past. I’ve also accepted that I should make my personal investing system as unique and as “irreplicable” as possible. But I still can’t shake the notion that maybe I am a decade or so late to the game – that despite my best efforts, future performance will be mediocre because I am using data that everyone knows about and has access to. Still, I remain hopeful that the somewhat more proprietary nature of Compustat’s Financial Statement Balancing System is more resilient to alpha decay than commodity fundamental data.
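To make the “recent, but not only recent” weighting concrete, here is a minimal sketch of what I mean by Bayesian inference here – a normal-normal update that shrinks a long-history prior toward recent evidence. All names and numbers are mine and purely illustrative:

```python
import numpy as np

def posterior_alpha(prior_mean, prior_var, recent_alphas, obs_var):
    """Normal-normal Bayesian update: blend a long-run prior with recent evidence.

    prior_mean, prior_var -- belief about annual alpha built from the long history
    recent_alphas         -- array of recent annual alpha observations
    obs_var               -- assumed variance of a single year's observed alpha
    """
    n = len(recent_alphas)
    prior_prec = 1.0 / prior_var          # precision = 1 / variance
    data_prec = n / obs_var
    post_mean = (prior_prec * prior_mean + data_prec * np.mean(recent_alphas)) \
                / (prior_prec + data_prec)
    post_var = 1.0 / (prior_prec + data_prec)
    return post_mean, post_var

# Long history says ~4% alpha; the last three years say ~1%.
mean, var = posterior_alpha(0.04, 0.02**2, np.array([0.01, 0.02, 0.00]), 0.05**2)
print(f"posterior alpha estimate: {mean:.2%} +/- {np.sqrt(var):.2%}")
```

The posterior lands around 3% – pulled toward the recent evidence without discarding the past.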

While the alpha-decay hypothesis makes sense, it contrasts with one of my basic philosophies: Keep It Simple, Stupid (KISS). KISS is a crude way of saying that elegance and simplicity almost always beat complexity and opacity. Simplicity works because simple things are easy to understand – it decreases the likelihood of over-fitting and conflation. But how should one reconcile the alpha-decay thesis with the KISS philosophy?

Each time I attempt to answer this question, I just come up with more questions, each more difficult to answer than the last:

Does it really mean that complicated systems are less susceptible to alpha-decay because they are difficult to replicate and unlikely to be replicated?

How much more efficient is the market now as a result of the proliferation of algorithmic trading than it was in the past? How much more efficient can the market become?

At what point does active investing no longer pay off?

On the other hand, will the mega-trend towards passive management preserve or even grow market inefficiencies in the future?

Is it possible to achieve a golden balance between simplicity and uniqueness? One that pairs ideas understood to be fundamentally sound with a complex implementation… one that is both technically within my ability to automate and yet still resilient to the forces currently assaulting the moat of my alpha.

I’d love your thoughts…

Excellent questions and no easy answers. After all, the same considerations that make it necessary to be original in investing ideas and strategies make it necessary to be original in answering those “meta questions”.

Interesting questions:

I actually wonder if the proliferation of fancy algorithms is actually making it easier for simple strategies to work. After all, what self-respecting MBA/MS/PhD in who-knows-what is going to feel comfortable or satisfied doing something so embarrassingly simplistic as paying a good price for shares of a good company?

Ditto for technological improvement. Data flies around faster and faster, so that now a day’s holding period can, in many cases, seem very long term. But the speed of human behavior is essentially unchanged. Customers buy when they buy, not when the fastest data transmission systems wish they would. Humans evaluate what they get from the field at human speed even if the raw info sits there longer waiting to be digested. And investors still understand and interpret at human speed. Little wonder, then, that I often find weekly rebalancing to be the least effective and don’t use it absent a specific investment case designed specifically to work with noise.

Complexity for the sake of showing off is a prescription for trouble. But so, too, is simplicity for the sake of simplicity. I suggest an agnostic attitude here: make the model as complex or as simple as the logic of your ideas mandates. In my opinion, the key is what it takes to truly express your idea – to get rid of situations that pass the letter of the law but violate its spirit. When I get complex, that’s always the reason for it. And you even mentioned an example, the classic value trap. Simple = low P/E. Complex = low P/E for shares of a company that doesn’t deserve a low P/E because the business is better than Mr. Market realizes.
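A minimal sketch of that distinction in screening terms, assuming a DataFrame with hypothetical column names of my choosing (`pe`, `roe`, `sales_growth`, `industry`):

```python
import pandas as pd

def cheap_but_not_a_trap(df):
    """Simple = cheap (low P/E vs. industry peers).
    Complex = cheap AND the business looks better than the market credits:
    here, above-median ROE for the industry and growing sales."""
    cheap = df["pe"] < df.groupby("industry")["pe"].transform("median")
    quality = (
        (df["roe"] > df.groupby("industry")["roe"].transform("median"))
        & (df["sales_growth"] > 0)
    )
    return df[cheap & quality]
```

The quality conditions are stand-ins; the point is that the extra complexity exists only to express the idea “this low P/E is undeserved.”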

Be careful about the use of fancy terminology that merely covers up for the fact that someone doesn’t really understand finance. The notion of a decaying value alpha is a good idea. Nobody who truly understands financial theory would think for a moment to buy a stock just because its value metrics are low. (I think the “value trap” expression may be older than I am.)

As to recent vs. long-ago testing, I’ve often spoken in favor of giving weight to recent data because the external environment always evolves. I can’t imagine any model being a forever thing. Financial theory is what it is, but the world in which it plays out changes, and we should always be alert to the need to adapt. One example: I don’t use estimates data nearly as much as I did 15 or so years ago because the Sell Side has experienced heavy structural degradation since then. That’s not to say I don’t use it at all; my Cherrypicking the Blue Chips Designer model uses it and still looks good. But I’m aware that it’s at least middle-aged by model standards (live since 4/13), and I continue to watch for inevitable senior moments, although so far so good. (That it focuses on the S&P 500 – the most heavily followed universe, the one where the Sell Side works hardest – may be contributing to a longer life span.)

yorama,

Thanks for your thoughts.

Marc,

I appreciate your insights and willingness to share. I really liked that you pointed out that sell-side estimates work better for highly followed companies. I’ve found earnings estimates to be highly predictive for large, integrated oil companies – but absolutely worthless for smaller independent oil and gas companies.

Interestingly, I’ve also seen how size (as a proxy for coverage?) inverts some of my core beliefs about markets. For example, I’ve found that momentum (52-week total return) works better for large-cap stocks than it does for small caps – in the microcaps, the relationship is actually inverted! I often wonder if there are hidden correlations between size and other predictive measures that get lost in the mix of equally weighting positions. Moreover, I wonder how some of my models would look if positions were capitalization- or score-weighted.
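For anyone who wants to check this on their own data, here is a minimal sketch of the size/momentum interaction – all column names (`mktcap`, `mom_52w`, `fwd_ret`) are hypothetical placeholders:

```python
import pandas as pd

def momentum_spread_by_size(df):
    """Split one cross-section into size terciles, then compare forward
    returns of high- vs. low-momentum stocks within each tercile.

    Assumed columns: 'mktcap', 'mom_52w' (trailing 52-week total return),
    'fwd_ret' (next-period return).
    """
    df = df.copy()
    df["size_bucket"] = pd.qcut(df["mktcap"], 3, labels=["small", "mid", "large"])
    spreads = {}
    for bucket, grp in df.groupby("size_bucket", observed=True):
        winners = grp[grp["mom_52w"] >= grp["mom_52w"].quantile(0.8)]
        losers = grp[grp["mom_52w"] <= grp["mom_52w"].quantile(0.2)]
        spreads[bucket] = winners["fwd_ret"].mean() - losers["fwd_ret"].mean()
    return pd.Series(spreads, name="momentum_spread")
```

If the inversion holds, the spread should come out positive for the large bucket and negative for the small one.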

[quote]
Does it really mean that complicated systems are less susceptible to alpha-decay because they are difficult to replicate and unlikely to be replicated?
[/quote]
Makes sense to me. The rolling outperformance of “simple” value vs. blend has dropped over the past fifty years, as you can see on portfoliovisualizer.com.

[quote]
How much more efficient is the market now as a result of the proliferation of algorithmic trading than it was in the past? How much more efficient can[will] the market become?
[/quote]
I don’t really care about theoretical market efficiency, and I assume you don’t either. We care about making money. So allow me the liberty of rewording the question as follows:
Q. Will I be able to beat the market using a given system?
A. It depends on how much money flows into the system, and how much it can handle. A microcap, highly focused, high-turnover trading system will degrade in performance much faster and to a much greater degree than a diversified, low-turnover, mega-cap ETF system.

[quote]
At what point does active investing no longer pay off?
[/quote]
When it no longer pays off. For most investors, that point was reached long ago.

[quote]
On the other hand, will the mega-trend towards passive management preserve or even grow market inefficiencies in the future?
[/quote]
My guess? It depends.

Well-designed index funds (which have no perceptible trading slippage) have zero effect on market efficiency but a negative effect on market liquidity – as long as there is no panic buying or selling.

To give you an example: if 75% of the stock market were passively indexed, trading volume would go down by 75%, but the index funds would have zero effect on prices as long as those funds don’t buy or sell. During a market crash, however, as some index fund holders panic, their selling pressures the market overall, and all stocks will go down even if the trouble is confined to a single sector.

The thing is, though, that many passive ETFs are actively traded. This trading causes inefficiencies. An entire industry may go up just because of a single positive earnings surprise in that industry, even if the other stocks are not affected.

Some active strategies may cease to work, but that depends on the economic rationale for why the strategy works in the first place, not necessarily on complexity versus simplicity.

Some strategies are compensation for bearing risk; the equity risk premium is certainly well known, and has not disappeared, nor will it.

Other strategies are due to behavioral biases, which should continue to work as long as human beings direct investment decisions, and we’re still a long way off from robots taking over the world.

Arbitrage or informational advantages are probably the most at risk to cease working, eventually.

Einstein said it best:

“Everything must be made as simple as possible, but no simpler.”

Las Vegas was built on a tiny house edge and continues to make millions every year, even though every player knows about it. The “house edge” of a good P123 system is significantly higher. And, as always in life, the biggest risk is not taking any risk at all.
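A quick back-of-the-envelope simulation of the house-edge point – the 51/49 odds and bet sizes are made up, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical edge: win 1% with probability 0.51, lose 1% otherwise.
n_bets, n_paths = 1000, 10000
outcomes = rng.choice([1.01, 0.99], size=(n_paths, n_bets), p=[0.51, 0.49])
final_wealth = outcomes.prod(axis=1)

print(f"median terminal wealth: {np.median(final_wealth):.2f}")
print(f"paths that ended ahead: {(final_wealth > 1).mean():.1%}")
```

Even a two-point edge per bet, applied a thousand times, leaves the median path comfortably ahead – which is the whole casino business model.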

Chipper6,

Thanks, as always, for your input – always appreciated. Also, thanks for the link to https://www.portfoliovisualizer.com. It looks like they perfected Modern Portfolio Theory.

Quadz42,

If you don’t mind, I might try to extrapolate from your words:

Strategies that simply compensate for risk are classically defined as “beta”. Are strategies that exploit behavioral biases (due to perceived risk) just another form of beta? What about alpha? Is alpha – the idea that one can predict – just a fanciful and fleeting phenomenon? If so, then it seems preferable to stick with the exploitation of perceived risk, which, when you add up the dollars and cents, is indistinguishable from alpha.

<<Tammer would suggest that people who are still using things that everyone knows about and does (i.e., P123 users who rely on fundamental data and tired old value investing formulas) are basically guaranteeing their own mediocrity.>>

Principles of value investing have been known for decades, and yet value continues to perform:
http://gdsinvestments.com/wp-content/uploads/2015/07/The-Superinvestors-of-Graham-and-Doddsville-by-Warren-Buffett.pdf

Even Fama and French think there is a “value” factor.

When there are no more bargains in the market, then I’ll suspect the end is near.

Till then… See ya with my “simple” value approach.

<<Principles of value investing have been known for decades, and yet value continues to perform:
http://gdsinvestments.com/wp-content/uploads/...lle-by-Warren-Buffett.pdf >>

I just re-read the beginning of Buffett’s article. He literally begins by asking, “Is value investing out of date?”

I don’t know when Buffett wrote the article, but it seems that every decade or so people debate this question.

As long as there are SNAPs, TSLAs and short-termist greed, I think value investing is safe.

Great article, AnlamK. I think the analogy to epidemiology is relevant: if seemingly randomly selected people catch a rare flu strain, it’s endemic; but if, upon further inspection, most of them visited the same town, it’s an epidemic. Over the last 50 or so years, most of the best investors went to the same schools, had the same mentors, and thought similarly about investing. That can’t be dumb luck. As for the others, it is impossible to distinguish skill from luck.

So it appears that all the naysayers who have “cried wolf” – declared the end of value – have been proven wrong. But I will add a caveat: “That the alarmists have regularly and mistakenly cried ‘wolf!’ does not a priori imply that the woods are safe” (Neumayer 2000).

Agreed. Most in the industry would consider style factors “smart beta”. If people openly discuss these factors (e.g., value, momentum, quality) in forums such as P123, then they are not secrets.

But for most investors, simply building a diversified portfolio of well-understood beta exposures, and sticking with it, would work well.

Seeking true alpha can still be worth it, but it is difficult, often expensive, and takes time and expertise to monitor.

It looks like all the money in small-cap value funds has eliminated their edge over small-cap blend since 2006.


[Chart attachment: “Five year rolling outperformance.png”]

I’m coming a little late to this forum, but here are my two cents.

I’ve done dozens of correlation studies of various strategies, and I’ve come to the conclusion that recent performance (under five or six years) is not a very good indicator of future performance. Ten years is optimal. I’ve also concluded that backtesting should be done with a larger basket of stocks than you’re actually going to invest in, and that looking not just at returns but at their variability and volatility is a good thing.
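Here’s a minimal sketch of the kind of correlation study I mean, assuming a DataFrame of monthly strategy returns (one column per strategy); the function name and layout are mine:

```python
import pandas as pd

def lookback_predictiveness(returns, lookback_years, horizon_years=5,
                            periods_per_year=12):
    """Rank-correlate each strategy's trailing performance with its
    subsequent performance, for a given lookback window.

    returns -- DataFrame, rows = months, columns = strategies.
    """
    lb = lookback_years * periods_per_year
    hz = horizon_years * periods_per_year
    past = (1 + returns.iloc[-(lb + hz):-hz]).prod() - 1   # trailing window
    future = (1 + returns.iloc[-hz:]).prod() - 1           # subsequent window
    return past.corr(future, method="spearman")
```

Comparing, say, `lookback_predictiveness(df, 5)` against `lookback_predictiveness(df, 10)` is the sort of test behind my claim that ten years beats five or six.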

I agree that investing systems should be unique. And P123 users do have a much better data set than anyone else.

That’s an easy one: throw out the KISS philosophy. Every great investor in history has tried to look at possible investments from as many angles as possible. That’s the only way to make money in this game. I personally use about thirty different ratios when evaluating my investments. You probably need to consider various growth, quality, sentiment, size, and value factors. You might want to consider sectors, how long it’s been since the last earnings statement, and/or technical indicators. As many things as possible that can affect the future price of a stock should be taken into account. Obviously, no system can take everything into account, but as long as you’re using factors that make financial sense, the more complex your system is, the safer your investments are, and the less likely you are to fall into a value trap.
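In P123 terms, this multi-angle approach amounts to a composite ranking across many factors. A minimal sketch, with all names assumed; each factor column is pre-oriented so that higher = more attractive:

```python
import pandas as pd

def composite_rank(df, factor_cols, weights=None):
    """Combine many factor ranks into one score (higher = better)."""
    ranks = df[factor_cols].rank(pct=True)          # percentile rank per factor
    if weights is None:
        weights = {col: 1.0 for col in factor_cols}
    w = pd.Series(weights)
    return (ranks * w).sum(axis=1) / w.sum()
```

With thirty-odd sensible factors, no single crowded ratio dominates the score – which is exactly the value-trap protection I’m describing.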

Absolutely. Combine this with innovation. For example, ROE%TTM and EV/EBITDA are so overused that they offer no edge. EV/UFCF (unlevered free cash flow) is almost unused, and offers an edge; breaking ROE down into its components–asset turnover, profit margin, and assets to equity–gives one better results than using ROE itself. A factor I’d never considered before–the ratio of net operating assets to total assets–seems to work better than the (overused) current ratio when looking at a company’s balance sheet. And so on.
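For the ROE breakdown, here is the standard DuPont identity in code form – the figures are invented, purely to show the arithmetic:

```python
def dupont_roe(net_income, revenue, total_assets, shareholder_equity):
    """DuPont decomposition:
    ROE = profit margin * asset turnover * equity multiplier."""
    profit_margin = net_income / revenue
    asset_turnover = revenue / total_assets
    equity_multiplier = total_assets / shareholder_equity
    roe = profit_margin * asset_turnover * equity_multiplier
    return roe, profit_margin, asset_turnover, equity_multiplier

# Hypothetical figures, in $M:
roe, margin, turnover, leverage = dupont_roe(120, 1000, 800, 400)
print(f"ROE = {roe:.1%} = {margin:.1%} margin x {turnover:.2f} turnover x {leverage:.2f} leverage")
```

Ranking on the components separately tells you whether a high ROE comes from margins, efficiency, or leverage – three very different stories.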

  • Yuval

People have shown that alpha decays. Some call it reversion-to-the-mean. I think Yuval showed it to be about a 19% decay (toward the mean), or a correlation of about 0.81, in one of his posts – if I am not mistaken. That will not stop – whether we can find reasons for it or not.

I like Yuval’s study. Best would be a study that looked at both in-sample and out-of-sample performance. Yuval will probably help us on that; I will only have to wait eight years. Until then, the most interesting thing for me was the clear quantification of the reversion-to-the-mean.

The only question is whether it will stop working the other way: will the prices of poorly performing companies (with low value ratios such as P/E) continue to revert toward the mean in a positive way that we can take advantage of?

I think it could stop working as well as we would like – but not completely. You might as well wonder whether the second law of thermodynamics will stop working.

@YuvalTaylor,

I totally agree. “I” should rationally make a system as complex as possible, but only under the condition that “I” still understand it. (Replace “I” with the pronoun of your choice.)

Also, can you post a link to the mean-reversion study that Jrinne mentioned?

@Jrinne,

Physics is to natural laws as markets are to human behavior (right?). Whereas physical laws are assumed to be constant, one might assume that human behavior changes with respect to the information that is made available.

A final question for the audience: given that sharing our own alpha contributes to our own alpha-decay, are there situations in which selective sharing within a closed system (i.e., a cabal) results in a net increase in one’s alpha (i.e., increases information arbitrage faster than information diffusion)?

I’m afraid I’ve never actually studied the rate of alpha decay or the mean reversion of factors, and Jim may have misinterpreted one of my studies, which are mostly concerned with comparing performance of strategies across different time periods. Depending on the variables, the correlations vary wildly, but in general, I’ve found that using alpha gives slightly better results than using plain returns, and that even better results are often obtained when taking into account the standard deviation of alpha. And I’ve also found that the single most correlative factor between the same strategy over two time periods is beta, but that beta does not correlate with absolute returns unless you take alpha into account too.
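To illustrate the kind of comparison I mean, here is a minimal sketch that extracts alpha, beta, and the variability of alpha from two return series – the function and the made-up monthly numbers are purely illustrative:

```python
import numpy as np

def alpha_stats(strategy_rets, market_rets):
    """OLS of strategy returns on market returns.
    Returns (alpha, beta, residual std of alpha) per period."""
    beta, alpha = np.polyfit(market_rets, strategy_rets, 1)
    residuals = strategy_rets - (alpha + beta * market_rets)
    return alpha, beta, residuals.std(ddof=2)

# Hypothetical monthly returns for a ten-year window:
rng = np.random.default_rng(1)
mkt = rng.normal(0.008, 0.04, 120)
strat = 0.002 + 1.1 * mkt + rng.normal(0, 0.02, 120)
print(alpha_stats(strat, mkt))
```

Correlating these statistics for the same strategy across two windows is how I compare time periods.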

Which is exactly the type of thing reversion-to-the-mean addresses.

Yuval: Whatever the intent of your study, there can be no question that it showed reversion-to-the-mean (or regression toward the mean).

The truth is, there will always be reversion-to-the-mean in any correlation study that measures the same thing over two different time periods, unless the correlation is exactly one. Period.
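To make the arithmetic explicit – a standard result for standardized scores (mean 0, variance 1) measured in two periods with correlation $\rho$:

$$\mathbb{E}\left[z_2 \mid z_1\right] = \rho\, z_1$$

Whenever $|\rho| < 1$, the expected second-period score sits closer to the mean than the first, and the shrinkage is exactly $1 - \rho$; a correlation of about 0.81 is precisely the roughly 19% decay mentioned above.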

Fun example: it may be responsible for the Sports Illustrated Cover Jinx, as you probably know. Performance excellent enough to deserve the cover of Sports Illustrated in one period will probably revert toward the mean in the next. The person on the cover tends not to do as well in the future.

Daniel Kahneman devotes a chapter to this in Thinking, Fast and Slow. His example concerns the performance of pilots over two different time periods.

Yuval, I am not critical of any of your studies and in fact I continue to praise them. You can focus on whatever you want in your studies. And I will use different examples in the future, if you wish.

But no reversion-to-the-mean? Really?

@Primus. If you have a correlation involving a measurable metric of human behavior across different time periods, it will revert to the mean. The shortest miniskirts are likely to get a little longer the next year. It’s not guaranteed from one year to the next, but they will not get shorter forever.

I cannot lose on this assertion if the data fits the assumptions required to do a regression or correlation in the first place.

The reason it is like the second law of thermodynamics is that it is built into the math of correlations. If you are betting on the direction of hemlines, it doesn’t hurt to talk to some fashion designers. But regression toward the mean generally occurs whether you have the inside information or not.

@All. I should not try to take credit for this idea – whatever you think of it. With regard to value ratios, I first heard the idea from Barry Ritholtz, but the idea of reversion-to-the-mean for stocks came before him. None of this is new, and it is not my original idea.

-Jim

Jim -

Maybe we mean different things by reversion to the mean. I’m of the opinion that a strategy can outperform the market to different degrees over different time periods but in general outperform it, with perhaps a few exceptions now and then. That, to me, is not reversion to the mean. To me, reversion to the mean implies that a strategy that outperforms the market in one period will underperform it in another, and over a very long period will be basically average.

To take one example, I don’t think there’s ever been a period of over ten years when buying the top 20% of the stocks in each industry in terms of price-to-sales wouldn’t have beaten the market to some extent. Is there any reversion to the mean involved there?

  • Yuval