Here is my problem with Microcaps

I am still struggling with whether to continue my pursuit of a microcap strategy. I have come to the conclusion that if I do, microcaps should only be a small % of my overall equity allocation, but I am leaning toward no allocation at all. Below is an example of why.

I start with a simple microcap universe with the below parameters:
1.) AvgDailyTot(60)>(100000)
2.) Universe(MasterLP)=0
3.) mktcap>50 and mktcap<500
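For anyone who wants to sanity-check the same filter outside P123, here is a minimal pandas sketch. The DataFrame and column names (stocks, avg_daily_total_60, is_master_lp, mktcap_mm) are hypothetical placeholders rather than P123 fields, with market cap in $M:

```python
import pandas as pd

def microcap_universe(stocks: pd.DataFrame) -> pd.DataFrame:
    """Apply the three universe rules to a hypothetical security table."""
    mask = (
        (stocks["avg_daily_total_60"] > 100_000)  # rule 1: 60-day avg daily dollar volume
        & (~stocks["is_master_lp"])               # rule 2: exclude master limited partnerships
        & (stocks["mktcap_mm"] > 50)              # rule 3: market cap above $50M...
        & (stocks["mktcap_mm"] < 500)             # ...and below $500M
    )
    return stocks[mask]
```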

I then borrowed Yuval’s public largecap ranking. I know it says largecap, but it seems to be a good example of a well-thought-out ranking system that looks at a company from multiple angles.

With this, I get the following screen backtest results for the last 10 years, using a 4-week rebalance and 0.5% slippage. Looks great!

https://www.portfolio123.com/app/screen/summary/254195?mt=1

Then I simply change the universe to the standard P123 Micro Universe:

https://www.portfolio123.com/app/screen/summary/254194?st=1&mt=1

It then completely falls apart.

I have repeated this with several well-balanced ranking systems, with the same result.

This leads me to believe that the dispersion of returns in the microcap space is just too great to rely on. Microcap strategies seem too sensitive to small changes in parameters to be trusted with a significant portion of a portfolio (at least mine).
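One way to put a number on that claim would be to compare the cross-sectional spread of holding-period returns in the two universes. A rough sketch, assuming a hypothetical long-format table of per-stock, per-period returns (none of these names are P123 functions):

```python
import pandas as pd

def return_dispersion(returns: pd.DataFrame) -> pd.DataFrame:
    """Average cross-sectional spread of stock returns for each universe.

    Assumes columns: universe, period, ticker, ret (the stock's return
    over the 4-week holding period).
    """
    # Spread of stock returns within each universe and rebalance period...
    per_period = returns.groupby(["universe", "period"])["ret"].std()
    # ...then the average of those per-period spreads, per universe.
    return per_period.groupby("universe").mean().to_frame("avg_dispersion")
```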

Just putting this out there to see what others think.




I’m afraid the P123 “Micro Cap” universe only has 73 microcaps in it, if one goes by the conventional definition of a microcap as a stock between $50 and $300 million. The other 860 stocks are small caps and mid caps, if one goes by the conventional market-cap limits.

To understand the composition of the P123 “Micro Cap” universe, go to https://www.portfolio123.com/doc/doc_detail.jsp?factor=Universe and click on “Micro Cap.”

Back when this was first implemented, these universes produced the results shown in the table. But that was many, many years ago.

If I apply a MktCap < 500 filter to the P123 Micro universe, I get around 600 stocks, and that universe yields the results below. My point is that there shouldn’t be that large a drop in performance from picking a universe with slightly larger-cap stocks:


charles123,
Try this.
Start with the public R7_Filip’s Super Value 76.0 RS.
Then read everything Yuval has ever written related to investing: blogs, forum posts, SA articles, etc.
Try every idea presented. Keep what works, discard what does not.

I haven’t looked at your screen, so I don’t know its design, but I am going to add my two cents here. Most value-based metrics have a weakness that shows up around the earnings call: the company’s results are known to the public, but the new financials haven’t yet percolated through the P123 database. This means there is a transitory period where value metrics don’t make a lot of sense. A company that makes a bad announcement will see its share price drop significantly, but the new earnings or sales figures don’t show up for a while, making the value metric look great in the meantime.

I mention this in this thread because the database for largecaps is updated quite fast, within a day or a few days, whereas smallcaps and microcaps can take weeks to be updated. This is why value/momentum is popular among smallcap investors: the momentum part of the strategy ensures the value metric isn’t better than it should be simply because the share price has fallen.
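To illustrate, the blend described above might look something like the sketch below; the column names (ep for earnings yield, ret_6m for six-month price return) are placeholders for whatever value and momentum factors you actually use, not P123 factors:

```python
import pandas as pd

def value_momentum_rank(stocks: pd.DataFrame) -> pd.Series:
    """Equal-weight blend of a value rank and a momentum rank (0 to 1)."""
    value_rank = stocks["ep"].rank(pct=True)
    momentum_rank = stocks["ret_6m"].rank(pct=True)
    # A stock whose price just collapsed on bad news scores poorly on momentum
    # even while its not-yet-updated value metric still looks attractive.
    return 0.5 * value_rank + 0.5 * momentum_rank
```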

I personally would like to see the great minds here at P123 find a better solution to cover the gap between the earnings call and when the database is updated. On a somewhat related note, I have my portfolios set to exclude preliminary results, yet when it comes to rebalancing I am forced to make decisions based on incomplete statements. So I have a question for Yuval / P123: is there a way to arrange things so that I don’t have incomplete statements in portfolio rebalancing and simulations?

Compustat has always taken quite a bit of time processing the announcements of microcaps and nanocaps, but such is not the case with FactSet. FactSet updates their data with figures from the latest announcement extraordinarily quickly. Ever since we made the switch, I haven’t had any of the long delays I used to get with Compustat.

As for rebalancing, I simply add “and StaleStmt = 0” to my sell rules and include “StaleStmt = 0” in my universe/buy rules. You could do something similar with CompleteStmt. I’m not sure if that answers your question–if not, try me again . . .
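In rough pseudo-Python, the same arrangement looks something like this sketch (the boolean fields stale_stmt and complete_stmt are hypothetical stand-ins for StaleStmt and CompleteStmt, not actual API calls):

```python
import pandas as pd

def eligible_buys(candidates: pd.DataFrame) -> pd.DataFrame:
    """Universe/buy side: only consider names whose statements are fresh and complete."""
    return candidates[(~candidates["stale_stmt"]) & (candidates["complete_stmt"])]

def should_sell(sell_signal: bool, stale_stmt: bool) -> bool:
    """Sell side: mirror adding "and StaleStmt = 0", suppressing sells triggered on stale data."""
    return sell_signal and not stale_stmt
```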

Thanks Yuval - What does it mean when preliminary results are excluded but the statement is incomplete? Is this another transitory condition? How long does it usually take for “incomplete” to be resolved?

CompleteStmt = 0 means that the company has announced its results but the data provider has not yet reported the results from the filing associated with that announcement. So it’ll show up after an earnings announcement and before we get the data provider’s version of the financials in the filing. It doesn’t change whether preliminary results are excluded or included.

Yuval - does this mean that partial statement updates do not occur, i.e. it’s all or nothing? “Incomplete” suggests that there is a partial update but pieces are missing…

For companies that make an announcement before filing, some fundamentals are updated, but according to the announcement rather than according to the filing. That’s the “prelim” that you’re excluding. Once the statement has been filed with the SEC, it would be rather unusual for some items to be updated but not others.

I think I understand now. The Incomplete flag signifies that prelim figures are available but the final results are not. Which means that if I exclude preliminary results, I will be looking at the previous final results while the Incomplete flag is set. Is that right?
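In pseudo-code terms, my understanding is something like this (all names hypothetical, not P123 fields):

```python
def statement_in_use(complete_stmt: bool, exclude_prelim: bool,
                     current_stmt: dict, previous_final_stmt: dict) -> dict:
    """Which set of fundamentals a rule sees under the exclude-prelim setting."""
    if complete_stmt:
        # The filing has been processed, so the current statement is final.
        return current_stmt
    # CompleteStmt = 0: only preliminary figures are in the database so far.
    return previous_final_stmt if exclude_prelim else current_stmt
```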

Yes, that’s right.