Small value underperforming

Small value, which has historically outperformed the market (see Fama and French 1993), has significantly underperformed the market for quite some time now. It has underperformed for the past three years, and since 2014 it has outperformed in only one year (2016). This is quite surprising, since it flies in the face of academic research.

Buying bad companies at good prices is also not a good idea, so I add quality to that small value mix in my own models and portfolio. My model has very good historical returns, but even it has underperformed significantly over the past year. There are bound to be bad years every so often.

I think it would be foolish to bet against good companies at good prices, especially now that they’ve been underperforming. But seeing as the market is near all-time highs, with a good possibility of an economic slowdown in the next couple of years, it is troubling to keep buying more with no results. I’m a stubborn believer in continuing to buy shares of a good company as its price drops, which is what I’m seeing.

What I’m curious about is whether I will see a turnaround in small value, or whether it will continue to underperform. What do you all think?

Do you have statistical evidence to demonstrate that small value has been underperforming? There are a few designer models on p123 which invest in small value stocks, and they have done pretty well over the last few years.

Seven,

The SP1500 Pure Value is flat over the last 2 years, and it has been in a drawdown since Sept 2018. I mention this because it is the benchmark for some of my ports, as it has the highest correlation of all of the benchmarks I have tried (for these ports).

-Jim

For what it’s worth, I wrote an article called “The Problem with Small-Cap Value Indexes” about fifteen months ago, which you can read here: https://seekingalpha.com/article/4148763-problem-small-cap-value-indexes or here: https://backland.typepad.com/investigations/2018/02/the-problem-with-small-cap-value-indexes.html . . . Small caps in general have vastly underperformed large caps in the last year. But as for value, it depends on how you define it. The way the value indexes define it is, in my opinion, all wrong, and the worst offender is Russell.

Same thing happened every year from 1995 through 1999. From 1/1/95 thru 12/31/99, OEX gained 270%, SPX gained 220% and the Russell gained 102%.

So this can last for a while.

You’ll notice that the 5-year Russell underperformance occurred right after Fama published his paper. Ouch.

QVAL, an ETF managed by the authors of Quantitative Value, has underperformed since inception and is about 25% below its January 2018 high. But its top holdings are large caps.

Are any factors or styles increasing in alpha?

Great question.

There’s one factor, yes: dividend growth. The chart below was generated as follows: I graphed the difference between the one-year returns of the top and bottom quintiles of the Russell 3000 in terms of Div%ChgA, measured monthly (using percentile N/As ranked neutral, with companies ranked against others in the same sector). You’ll see a general upward trend in this factor from 2004 to now. Companies that are increasing their dividends are now outperforming companies that are decreasing them by a good measure, and that really wasn’t the case between 2004 and 2011.
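For anyone who wants to reproduce this kind of chart outside P123, the underlying computation is just a top-minus-bottom quintile spread tracked over time. Here's a minimal sketch in pandas; the quintile return series are synthetic stand-ins (the drift and volatility numbers are made up), since the real inputs would come from a ranking-system performance download.

```python
import numpy as np
import pandas as pd

# Synthetic monthly one-year returns for the top and bottom quintiles.
# In practice these would come from a ranking-system performance download.
dates = pd.date_range("2004-01-01", periods=186, freq="MS")
rng = np.random.default_rng(0)
n = len(dates)
top = pd.Series(0.08 + 0.03 * np.arange(n) / n + rng.normal(0, 0.02, n), index=dates)
bottom = pd.Series(0.06 + rng.normal(0, 0.02, n), index=dates)

spread = top - bottom               # monthly top-minus-bottom spread
trend = spread.rolling(12).mean()   # 12-month smoothing to see the drift

# A simple linear fit quantifies whether the factor's spread is trending up
slope = np.polyfit(np.arange(n), spread.values, 1)[0]
print(f"average spread {spread.mean():.3f}, trend slope {slope:.6f} per month")
```

Plotting `trend` over `dates` gives the kind of chart described above; a positive fitted slope is what an "increasing in alpha" factor would show.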

I think it will be very hard to find other factors that are increasing in alpha right now.


Thanks, Yuval. That’s very helpful.

Do you think it is possible for P123 to make a tool or dashboard to do something like this? I would like to rank factors by the time-series slopes and intercepts of their excess returns. Excess returns may be defined as the difference between low and high quantiles, quantiles versus the benchmark, alpha, or even rolling Sharpe.

It’s not hard to do in Excel. Take a ranking system with a lot of factors and assign 100% to one. Save, then press “performance,” use 5 rank buckets, and under “chart type” check “performance.” You can then download the performance of each bucket. Let’s say you’re rebalancing every four weeks. You can create a new column on the right with the formula =G27/G14-C27/C14 and copy that down. That will give you rolling one-year returns of the top bucket minus rolling one-year returns of the bottom bucket (13 four-week rows is roughly one year). Then try another factor at 100% and see what that gives you. You can then use the =SLOPE and =INTERCEPT functions to get slopes and intercepts for various periods.
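The same workflow translates directly to Python if you'd rather not work in Excel. This is only a sketch of the arithmetic described above: the bucket NAV series are synthetic, standing in for the downloaded bucket performance data.

```python
import numpy as np
import pandas as pd

# Synthetic NAV series for the bottom and top rank buckets,
# rebalanced every 4 weeks (so 13 rows is roughly one year).
rng = np.random.default_rng(1)
n = 130
nav = pd.DataFrame({
    "bottom": np.cumprod(1 + rng.normal(0.002, 0.02, n)),
    "top":    np.cumprod(1 + rng.normal(0.006, 0.02, n)),
})

# Equivalent of the Excel formula G27/G14 - C27/C14 copied down:
# rolling one-year growth of the top bucket minus that of the bottom bucket.
excess = (nav["top"] / nav["top"].shift(13)
          - nav["bottom"] / nav["bottom"].shift(13)).dropna()

# Equivalent of =SLOPE and =INTERCEPT over the whole period
x = np.arange(len(excess))
slope, intercept = np.polyfit(x, excess.values, 1)
print(f"slope {slope:.6f}, intercept {intercept:.4f}")
```

Restricting `x` and `excess` to sub-ranges gives the slopes and intercepts "for various periods" mentioned above.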

Please correct me if I’m wrong, but your chart doesn’t necessarily illustrate alpha. Alpha is a risk-adjusted performance measure against a benchmark, so you would need to consider both a benchmark and the holdings’ beta to determine alpha. In other words, the upward trend could disappear once risk is considered.
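This distinction can be made concrete with a quick CAPM-style regression: alpha is the intercept left over after controlling for beta exposure to a benchmark. A sketch with synthetic monthly returns (the risk-free rate is ignored for simplicity, and all numbers are made up):

```python
import numpy as np

# Synthetic monthly returns: the strategy has beta 1.2 to the benchmark
# and a true alpha of 0.2% per month.
rng = np.random.default_rng(2)
bench = rng.normal(0.008, 0.04, 120)
strat = 0.002 + 1.2 * bench + rng.normal(0, 0.01, 120)

# Regress strategy on benchmark: slope = beta, intercept = alpha.
beta, alpha = np.polyfit(bench, strat, 1)

# A raw return spread would credit the strategy with beta-driven gains;
# the regression intercept strips those out.
print(f"beta {beta:.2f}, alpha {alpha:.4%} per month")
```

A raw top-minus-bottom spread can trend upward simply because the top bucket carries more beta, which is exactly the concern raised above.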

To plot alpha, I usually use the Strategy Rolling Test w/ the most appropriate benchmark I can find.

Walter

I do not think I was using Yuval’s spreadsheet download the way I should.

The data needs to be worked with a little and NOT in the way I was doing it.

Thanks Yuval. Munged correctly, this is a very useful tool!!!

I would delete the images if I could. Since I cannot, I want to mention that this is Olikea’s Public Ranking System performance test.

-Jim




This is the income-seeking baby-boomer effect. Demography suggests it should still work for a while.

Excellent point, and I apologize for making a flawed claim. - YT

This discussion has the potential to get at the heart of the frequentist vs. recentist debate.

Do I want something that is statistically proven to work, or would I prefer something that worked recently? Ideally, I get both. But this is the real world: concessions must be made.

It also gets back to my question as to whether P123 can produce tools to automate this analysis, because, more often than not, I produce rank histograms that have an attractive upward slope, only to find that the alpha has all but been depleted. E.g., lots of graphs like Yuval’s.

I am not so concerned how alpha is measured (I simply mean here as predictive power), but I am concerned by the fact that arbitrage mechanisms are working against me in real time. Of course I can manually download the returns data and do some analytics in an offline program, but I’d rather there be tools to assist with that process at the source.

Would really love to see this automated. I can’t remember where I used it before…Bloomberg or something.

Five buckets of returns, then high minus low plotted as a time series. You can select the time frame and universe (SP 500, R1K, R2K, and R3K would suffice).

This should be low-hanging fruit and easy to produce. But for factor research it’s absolutely essential. Do this for 300 factors, allow it to be run across a custom universe (perhaps a modified microcap universe), and you have absolute gold. It doesn’t take the reasoning or logic out of the process, but it does quickly show what’s working and what’s not, and which factors are increasing in net returns and which are not.

My apologies for botching the data wrangling above so badly. But this can (pretty easily) be used to get a t-score too, as is done in this paper: http://www.nber.org/2018LTAM/hou.pdf
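For reference, the t-score in question is just the mean monthly long-short spread divided by its standard error. A minimal sketch with made-up spread numbers:

```python
import numpy as np

# Synthetic monthly top-minus-bottom spreads: 9 years, averaging 0.4%/month.
rng = np.random.default_rng(3)
spread = rng.normal(0.004, 0.03, 108)

# t-score of the mean spread: mean divided by its standard error
t = spread.mean() / (spread.std(ddof=1) / np.sqrt(len(spread)))
print(f"t-score: {t:.2f}")
```

The threshold of 3 mentioned below is the stricter hurdle the replication literature argues for (rather than the traditional 2) to account for the sheer number of factors being tested.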

Getting the t-score can be enlightening in light of what David is saying here.

I signed on to P123 in 2013, so I am making a habit of cross-validating what I do by optimizing my system from 2005 through 2013 (inclusive). I then run the system out-of-sample from 2014 until now.

One ranking system does well up until 2013, with a t-score above 3 using the method in the paper. But from 2014 until now it is not significant at all.

Anyway, this is meant to echo what David is saying. But there may be other things going on too. Maybe I just over-optimized the system. Maybe I put enough things together, enough times, that I got something that looked significant by chance. Maybe it is as David suggests: frequentist vs. recentist. Heck, maybe it is an efficient market after all (going forward).

In any case, automated or not, this is a great tool for answering some of these questions (thanks Yuval). And David you have some interesting thoughts on this topic.

-Jim

This is an instance where a P123 API framework, which could provide things like factor rank high/low bucket differentials, would be an amazing feature. Whipping up my own personal factor performance dashboard in a tool like Microsoft Power BI would take negligible effort if I had direct access to the data; it’s the constant manual simulation extractions needed to feed regularly updated data that are the pain point for users. If this pain point is eased, the user community could do a lot of the heavy lifting themselves and keep P123 staff bandwidth open for other projects.

https://www.portfolio123.com/mvnforum/viewthread_thread,11673#67310

This is definitely not “absolute gold,” in my opinion.

One huge problem is that there are a huge number of factors for which the middle buckets perform the best, and the difference between the first and fifth buckets will tell you nothing.

The second huge problem is that subtraction (top bucket minus bottom bucket), as a mathematical operation, bears absolutely no resemblance to anything having to do with investing. If you annualize two returns and subtract one from the other, you’ll get a completely different result than if you don’t annualize them or if you “monthilize” them. When it comes to returns, subtraction is basically garbage. The only sensible way to get the result you want is to go short the bottom bucket and long the top bucket SIMULTANEOUSLY and measure the resultant returns (which will definitely be very different from subtracting one return from the other, due to the interactions of the two portfolios). And even though this is “sensible,” it too has no basis in reality, because it’s extremely impractical to short one-fifth of a universe, and nobody actually does.

The third huge problem is that looking at which factors are increasing in net returns and which are not will not tell you a damn thing about which factors will work in the near future. I cannot imagine any reason why a measure of a factor working or not working, if you could come up with one, would follow a smooth curve rather than a sudden jump, or why the curve, if there was one, wouldn’t go up or down unexpectedly.

The fourth huge problem is that factors NEVER perform in isolation, no matter what Fama and French might think. There’s no such thing as a factor that isn’t extremely connected to other factors.
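The point above, that the size of a "spread" depends entirely on the horizon over which you subtract, is easy to verify with a toy example (the return numbers are hypothetical):

```python
# Hypothetical one-year returns for the top and bottom buckets.
top_1yr, bottom_1yr = 0.30, 0.10

# Subtracting annual returns gives one answer...
annual_spread = top_1yr - bottom_1yr  # 0.20

# ...but converting to monthly returns, subtracting, and compounding
# the monthly spread back up to a year gives a different one.
monthly_top = (1 + top_1yr) ** (1 / 12) - 1
monthly_bottom = (1 + bottom_1yr) ** (1 / 12) - 1
compounded_spread = (1 + (monthly_top - monthly_bottom)) ** 12 - 1

print(f"annual subtraction: {annual_spread:.4f}")
print(f"monthly subtraction, compounded: {compounded_spread:.4f}")
```

The two answers differ by more than a percentage point, and neither corresponds to the return of an actual long-short portfolio, which depends on rebalancing and the interaction of the two legs.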

I performed my little exercise for Primus’s benefit because I was curious about the question he asked, because I’ve noticed an improvement in dividend growth’s utility as a factor, and because the “top bucket minus the bottom bucket” is the way the Fama-French folks like to measure factor strength. But I shouldn’t have. I don’t believe in it at all–it was a purely “academic” exercise. EMH, CAPM, and the Fama-French five-factor model are all based on complete garbage math. How academics who should know better have gotten away with this crap for so long I can’t understand.

I’m not saying that performing bucket tests on single factors is useless. It can provide you with a lot of perspective. But you have to consider this performance as a whole (not just the top and bottom buckets) and as part of a whole (the vast interplay of factors over time).

What you’re asking is for Portfolio123 to interpret results for you. If you want to interpret the results using your Excel spreadsheets, be my guest. But I don’t think we should be in the business of doing it for you, no matter how “easy” it may be for us to do so.

I am sympathetic to Yuval’s general point on this. Although I might quibble with some of his preferences over which factors are best (or how to look at them).

We had literally been told that the secret lies in the Alpha, then maybe in the Omega. “The Alpha and the Omega is the answer.” Pun intended. But everyone has their religion, which should be respected.

Ultimately, the API, even if it requires an SP500 license, is the answer.

In the meantime, to the extent possible, Excel downloads that can be manipulated are very helpful.

-Jim