secret sauce

I’ve been resisting doing this for a long time, but in the hopes that others will follow suit, I’m going to share the main ingredients of my secret sauce. There are a lot of other ingredients too, but these will give you a good start.

Ranking system, in no particular order:

Forward earnings yield: current year’s estimated EPS divided by price.

Operating income growth, most recent quarter compared to same quarter last year.

EPS growth, next quarter (estimated) compared to the same quarter last year, and most recent quarter compared to the same quarter last year.

Accruals: the ratio of net income minus cash flow from operations to total assets, with lower values better.

Market cap: the lower the better.

Sales acceleration, from P123’s basic growth ranking system.

Unlevered free cash flow (operating cash flow minus capex plus after-tax interest expense) to EV.

Gross profit to EV.

Sales stability: look for one-year and/or three-year sales growth to be close to the fiftieth to seventieth percentile for the sector, and for the absolute deviation of quarterly sales to be as close to zero as possible.

Sales yield: look at both the current year’s estimated sales and TTM sales divided by fully diluted market cap.

Share turnover: volume divided by float, the lower the better.

Sectors: rank them according to how well your factors work for each one (my highest ranks go to materials and staples and my lowest to financials; see below for exclusions).

Margins: look for high profit and gross margins compared to other companies in the same industry, especially gross margins.

Volume: the lower, the better, but it’s also important to favor stocks with recently increasing volume.

Sentiment: I use P123’s basic sentiment ranking system—it’s terrific.

Universe rules: don’t invest in stocks from countries with high corruption; avoid stocks with high dayslate numbers; don’t invest in MLPs, other limited partnerships, or royalty trusts; exclude utilities and REITs; and don’t buy or sell a stock if it has a stale statement.

The only buy and sell rules I use are based on rank position. Rank position is also the #1 consideration in rebalancing, but I also buy lower amounts of low-liquidity and/or high-transaction-cost stocks.
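For readers who want to see the arithmetic behind a few of the factors above (forward earnings yield, accruals, unlevered free cash flow to EV, and gross profit to EV), here is a minimal Python sketch. The input dictionary and its values are hypothetical; on P123 these would come from the platform's own data fields.

```python
# Hypothetical per-company fundamentals ($ millions, except price and EPS).
company = {
    "price": 20.0,
    "est_eps_cur_yr": 1.60,   # current year's consensus EPS estimate
    "net_income": 45.0,
    "cash_from_ops": 60.0,
    "total_assets": 500.0,
    "capex": 15.0,
    "after_tax_interest": 4.0,
    "gross_profit": 120.0,
    "enterprise_value": 800.0,
}

def forward_earnings_yield(c):
    # Current year's estimated EPS divided by price (higher is better).
    return c["est_eps_cur_yr"] / c["price"]

def accruals(c):
    # (Net income - cash flow from operations) / total assets (lower is better).
    return (c["net_income"] - c["cash_from_ops"]) / c["total_assets"]

def ufcf_to_ev(c):
    # Unlevered FCF = operating cash flow - capex + after-tax interest expense.
    ufcf = c["cash_from_ops"] - c["capex"] + c["after_tax_interest"]
    return ufcf / c["enterprise_value"]

def gross_profit_to_ev(c):
    return c["gross_profit"] / c["enterprise_value"]
```

Each function returns a raw value; in a ranking system these would then be converted to percentile ranks within the universe.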


Thank you very much for sharing this. It is indeed very hard to share one’s secret sauce, and the way you’ve shared it with the community is much appreciated.

thank you

See further below for the ranking system I use in my trading (basically Olikea stuff).

Furthermore, my buy rules ensure that I only buy small caps with very low liquidity that had positive earnings in the most recent quarter.
I never buy China stocks.
The system I trade has 100 positions and no stop loss.
Weekly rebalance, with a RankSell below 60, so turnover is very low.
Trading these kinds of stocks, I use GTC bracket orders around the price (3% below the price when buying, 3% above the price when selling) in order to realize almost no, or even positive, slippage.
I backtest with only 0.3% slippage, since I know I have achieved positive slippage in real time.
2018 starting good, up 5% :slight_smile:

vma(150)/vma(300)
ema(50)/ema(100)
avgdailytot(20)/avgdailytot(120)
Sharpe(120,1)
(mktcap + DbtTotQ - (CashPSQ * ShsOutMR)) / Eval(EBITDAq>0,EBITDAq,NA)
(mktcap + DbtTotQ - (CashPSQ * ShsOutMR)) / Eval(FCFQ>0,FCFQ,NA)
(mktcap + DbtTotQ - (CashPSQ * ShsOutMR)) / Eval(NextFYEPSMean>0,NextFYEPSMean*ShsOutMR,NA)
FCFQ/AstTotQ
SIRatio
SI%Float
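A rough Python translation of the EV-based ratios above may help readers parse them. The `Eval(denominator>0, denominator, NA)` pattern returns NA when the denominator is non-positive, so the stock drops out of that factor’s ranking; this sketch mimics that with `math.nan`. All input numbers are hypothetical.

```python
import math

NA = math.nan  # stand-in for P123's NA: excluded from ranking

def enterprise_value(mktcap, total_debt, cash_per_share, shares_out):
    # mktcap + DbtTotQ - (CashPSQ * ShsOutMR)
    return mktcap + total_debt - cash_per_share * shares_out

def ev_ratio(ev, denom):
    # Mirror Eval(denom > 0, denom, NA): only rank on positive denominators.
    return ev / denom if denom > 0 else NA

ev = enterprise_value(mktcap=900.0, total_debt=150.0,
                      cash_per_share=2.0, shares_out=25.0)
ev_to_ebitda = ev_ratio(ev, 100.0)   # positive EBITDA -> a real ratio
ev_to_fcf    = ev_ratio(ev, -10.0)   # negative FCF -> NA, drops out of the rank
```

The guard matters because a "cheap" EV multiple over a negative denominator would otherwise rank a money-losing company as the cheapest stock in the universe.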

Yuval, you’ve likely put in far more work on this than I have, so I’m not sure what more I can add to what you’ve done, but here are some comments:

  • I find short interest and low daily price volatility useful factors.
  • I struggled to find anything better than the default P123 sentiment, so I use it. It even works well for small caps, which I wasn’t expecting given the lack of coverage, but it helps. I use it in combination with a few varying calcs utilizing earnings revision data. I’m a bit wary of sentiment and earnings estimates because of the economic changes in the analyst community, as well as their ability to be “gamed,” since it’s known that sentiment changes drive stock price changes - but it’s in there at the moment. I do wonder whether a hedge fund somewhere wouldn’t set up a few research houses, start issuing earnings and ratings revisions over time, and set up trades to harvest the investment community’s reliance on sentiment changes. If I had a few billion, that would seem a fairly obvious way to move stock prices. I don’t get into the really tiny capitalization stuff yet, so I’m not sure how that sentiment dynamic would change in that world, but I’d think it disadvantages companies in the rankings that don’t have analyst coverage.
  • In liquidity, I find a combination of both low $ volume and low share count useful. Not sure why the latter matters, but it surprised me a bit.
  • I couldn’t find much benefit from quality factors, but do include asset turnover and various margin measures, but in a conflicted sort of way that rewards stability. I have difficulty finding a place for the typical quality factor like the “return on” calcs I think most of us think about when we think quality. My pet theory is there’s high demand for high quality companies, so quality in the “return on” sense seems fairly efficiently priced based on forward returns. There seems less here than the discussions around quality would indicate imho.
  • For value, they mostly all seem to work, but I landed on the ones you mention, UFCF and GP, as well as FCF yield. In some situations I have found earnings yield to be valuable in combination with accruals. I did not end up using Price/Sales (or sales yield, as you call it). I think these value multiples historically go in and out of favor over time, and it’s possible I’m just picking up on recent things that have been working. I listened to a respected investor (nameless because I’m not sure who - but I remember I thought highly of him from the discussion) saying he thought low interest rates have helped stocks with lots of free cash flow, and that rising rates might change that dynamic. I don’t understand the reasoning and don’t have enough theory to know what’s behind that statement (unless maybe it’s about investors’ search for dividend yield?), but I’ve seen long-term studies showing that value metrics can have long periods of better or worse performance as they come in and out of favor. So I figure a suite of value factors may be advisable, so long as the model knows that cheap is usually better than expensive, and that maybe I have to keep an eye on this one.
  • I haven’t been able to find a place for momentum or technicals. I’ve tried, and it’s straightforward to find things that work well independently, but so far whatever I test tends to disappear in the mix, or add undesirable variance/volatility.
  • I haven’t used Sales acceleration like you mention - possibly due to core elements in the ranking system that emphasize stability. (As we discussed, I added a factor related to stddev of quarterly sales stability that you mentioned in a post and it helped my model vs. the sales stability metrics I was using.)
  • I tried and was unable to incorporate debt, leverage, or financial strength (other than one model I worked on that as an intentional starting point sought out companies with very high debt levels). Many qualitative investors place high importance on these factors, but so far I’ve had great difficulty turning those discussions into quantitative rules. There’s probably something there, I just haven’t found it.
  • I think there’s likely some useful information in insider activity. When the P123 field for % insider ownership is repaired, I expect to look into that more. What’s there currently makes me think there’s useful info (and maybe counterintuitive info re: insider sales), but it’s kind of messy to work with - and the % insider ownership seems important for gauging management buy-in and the magnitude of the trade sizes seen. I’d expect this field might be very useful in small companies with limited analyst coverage. Lots of small-cap investors speak of the importance of having management incentives aligned with shareholders via large chunks of insider ownership.

Anyhow, those are some thoughts - probably nothing you haven’t already looked into or considered - but I just wanted to contribute here. I understand folks wanting to keep most of the good secret-sauce stuff private, though. You work hard to get some edge, and especially in the small companies you’re investing in, you don’t want to create excess demand for the stocks you’re looking to buy. Ultimately, though, anyone working through the data is probably going to find a lot of the same things discussed above. It’s surprising to me that many of these advantages still persist given all the resources, brainpower, and computing power available to wring efficiency into pricing. I’m still thinking about why some of these opportunities currently exist, or why they should continue to exist. It’s not at all apparent, imho.

I’m loving the tell all, folks.

I really cannot add a whole lot to the conversation regarding individual factors. Most systems’ core factors are often very similar; the main differences seem to be how one measures the variables, and the level of sophistication in normalizing those variables. For example, most comprehensive systems will look at a profit-to-price ratio. On the simplest level, this is GAAP EPS to price. More sophisticated implementations will attempt to normalize earnings, normalize the comparison with peers, and/or create historical precedents for “normal.” Given my inability to add much to that conversation, I thought it might be more worth your while if I discussed a few findings which I think are value added to asset price research:

A. Value Convergence vs. Price Divergence Theorems

A bottom up approach to valuation incorporates all known (public and private) information in order to identify a thing’s true value. The motivation to conduct this intricate approach is based on an idea which I call the “Value Convergence Theorem” (VCT), which supposes that a thing’s price converges with its underlying intrinsic value as markets digest known information. Implicit in this idea is the slow churn of the price discovery process; it takes time for markets to properly interpret what is known. In this way, VCT is consistent with the weak form of the EMH, which does not contradict the idea that one may identify profitable investment opportunities through fundamental research. A DCF is the standard method by which to evaluate intrinsic value. VCT is typically the realm of investors.

A top down approach to valuation is based on the “Price Divergence Theorem” (PDT), or the idea that price temporarily differs from value when new information has not been incorporated or due to market disequilibria. That value already equals price presupposes that markets incorporate known (public) information in a fairly rapid and efficient manner; this is most consistent with the semi-strong form of the EMH. Thus, profitable investment opportunities are available to those who can most rapidly digest and respond to new public information (or those who have access to privileged private information). Traders, who tend to be proponents of PDT, will seek out unexpected changes in the baseline through news developments, by gathering privileged or “whisper” information, by responding to emergent correlations, by front-running sentiment (i.e., “buy the rumor, sell the news”), or by acting as counterparty or liquidity provider to unusually large volumes of supply or demand (i.e., “fade the rally”). Due to the more rapid decision-making cycle, the consequences of misinterpreting new information in a single case are far less severe than for in-depth fundamental analyses.

Note that proponents of PDT are not necessarily opposed to VCT, yet realize that – since price usually equals value – it requires far less effort to simply react to new developments. Given that markets are even weak-form efficient, the level of effort required to conduct a bottom up valuation usually far exceeds the potential rewards since the most likely outcome (e.g., for reading 200 pages of the latest 10-k, 50 pages of the latest 10-Q, tens of thousands more words from news releases and analyst reports, and countless hours of modeling) is a failure to find a robust discrepancy between price and value. As a fundamentalist, I can attest that it takes a lot of research to gain some intuition about the information which is versus is not incorporated into a stock’s price currently and over time.

I think both camps have merit. In reality, the price discovery process incorporates market participants who are motivated by any and all combinations of the value convergence and price divergence theorems. Thus it makes sense for investors to look at stock prices from both perspectives in order to address the potential for a price-value disconnect. Originally, I fell strongly into the VCT camp after reading Graham and Graham-Dodd. However, I have learned from experience that deep dive value investing is a painful and usually unrewarding process.

B. Individual stock price momentum is a result of industry pressures.

I.e., normalized for peer co-movement, stock price momentum does not exist (i.e., has no ability to anticipate future equity returns). That documented momentum anomalies are actually due to external factors should be much more palatable to proponents of EMH, who since the 1990s have been baffled by its mere existence. It is not nearly as problematic to have a peer group move together for an extended period of time due to cyclical or secular pressures.

Note, however, that EMH theorists never seemed to have a problem with mean reversion.
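To make point B concrete, here is a hypothetical Python sketch of the normalization being described: subtract each industry’s median return from its members’ raw returns, leaving only the stock-specific component (which, per the claim above, carries no predictive signal). Tickers, industries, and return figures are all made up.

```python
from statistics import median

# Hypothetical six-month returns, keyed by (ticker, industry).
returns = {
    ("AAA", "steel"): 0.30,
    ("BBB", "steel"): 0.25,
    ("CCC", "steel"): 0.20,
    ("XXX", "banks"): -0.05,
    ("YYY", "banks"): 0.00,
}

def industry_adjusted_momentum(returns):
    # Group raw returns by industry, then subtract each industry's median
    # to strip out peer co-movement.
    by_industry = {}
    for (ticker, industry), r in returns.items():
        by_industry.setdefault(industry, []).append(r)
    medians = {ind: median(rs) for ind, rs in by_industry.items()}
    return {t: r - medians[ind] for (t, ind), r in returns.items()}

adj = industry_adjusted_momentum(returns)
```

If the claim holds, sorting stocks on `adj` should show no return predictability, while sorting on the raw industry medians reproduces the documented momentum effect.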

C. Inefficiency is a prime candidate for exploring and exploiting factor interaction

Inefficiency amplifies the magnitude of factor signals. Low normalized analyst coverage, volume/liquidity, institutional ownership, and short interest are indicative that the market under-appreciates known factors. Normalized in this sense adjusts the amount of coverage for the size of the company, the number of shares outstanding, and the amount of liquidity.

For example, if a stock which appears to be “cheap” also has lots of analyst coverage, lots of institutional ownership, and heavy short interest, then said stock is most likely a value trap. Likewise, if those same factors which lent the appearance of cheapness were present in an under-followed company, then the likelihood of inefficient pricing would be much higher.

Note the interaction between these variables; weighted factorization (as is the norm) ignores the interaction, since the inefficiency itself is neither bullish nor bearish on its own.
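One way to sketch this interaction in code is to use the neglect proxies as a gate that scales the value signal, rather than as a ranked factor in their own right. All thresholds and weights below are hypothetical illustrations, not tested parameters.

```python
def interaction_score(value_rank, analyst_count, inst_ownership, short_pct_float):
    """Hypothetical inefficiency gate: the neglect proxies are neither bullish
    nor bearish alone; they only scale how much we trust the value rank."""
    # Crude neglect score in [0, 1]: higher = less market attention.
    neglect = (
        (1.0 if analyst_count < 3 else 0.0)
        + (1.0 if inst_ownership < 0.20 else 0.0)
        + (1.0 if short_pct_float < 0.02 else 0.0)
    ) / 3.0
    # Center the value rank at 50 and scale its deviation: neglected names
    # keep their full value signal, crowded names are shrunk toward neutral.
    weight = 0.5 + 0.5 * neglect
    return 50.0 + (value_rank - 50.0) * weight

# Same "cheap" value rank of 90, very different attention profiles.
crowded   = interaction_score(90, analyst_count=15, inst_ownership=0.80,
                              short_pct_float=0.12)   # likely value trap
neglected = interaction_score(90, analyst_count=1,  inst_ownership=0.05,
                              short_pct_float=0.01)   # likely mispriced
```

This is multiplicative rather than additive, which is exactly what a fixed-weight ranking system cannot express: the neglect variables contribute nothing by themselves, yet decide how much the value signal counts.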


Obviously, the EMH plays a big part in my thinking. I know it’s anathema to some of you (ahem… Yuval), but for me it’s the gorilla in the corner of the room… 1,000 lbs and still growing. I once thought that some day the move to passive investing would allow more inefficiency to creep back in. But now that I think about it again, I believe that the efficiency gains in active investing due to data proliferation and AI will continue to erode the amount of alpha available to peons such as myself.


Me too.

I love quality factors, but I do think one has to focus on those that de-emphasize earnings for the reason you mention.

Yes, it’s best to keep technicals as added spices to your secret sauce. They’re not essential.

Well, by using EV-based value ratios, you ARE incorporating debt, leverage, and financial strength into your models! The problem with most of the conventional formulae is that they don’t take into account the fact that the cost of equity is higher than the cost of debt. Enterprise Value considers both, and also takes into account that the cost of cash is zero.
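A small worked example of that point: two hypothetical firms with identical market caps and operating income have the same price-based multiple, yet very different EV multiples, which is how the leverage information enters an EV-based ratio. All figures are invented for illustration.

```python
def ev(mktcap, debt, cash):
    # Enterprise value: equity holders and debt holders both have a claim,
    # while cash offsets the purchase price.
    return mktcap + debt - cash

# Two hypothetical firms: same market cap and EBIT, different balance sheets.
lean    = {"mktcap": 500.0, "debt": 0.0,   "cash": 100.0, "ebit": 50.0}
levered = {"mktcap": 500.0, "debt": 400.0, "cash": 0.0,   "ebit": 50.0}

def ev_to_ebit(f):
    return ev(f["mktcap"], f["debt"], f["cash"]) / f["ebit"]

lean_multiple    = ev_to_ebit(lean)     # debt-free firm looks cheaper
levered_multiple = ev_to_ebit(levered)  # same equity price, pricier enterprise
```

A mktcap/EBIT screen would treat the two as identically priced; the EV multiple more than doubles for the leveraged firm, so high debt is penalized without any explicit financial-strength factor.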

It’s easy. Most investors are looking for shortcuts. They don’t want to put in the hard work of coming up with and finding new reasons why companies fail or succeed. So they rely on factors that are tried and true. Or they follow the herd and buy Yelp and Netflix–or Riot Blockchain and Tesla.

Yuval, thanks for sharing. Here is a thread along the same lines (for those who may have missed it): https://www.portfolio123.com/mvnforum/viewthread_thread,10703#59646

Thanks for sharing, everyone. I’m fairly new at this, and I mostly look at the CAD market, but one of the things I’ve found useful is putting in conditional formulas, as a lot of companies have NA values. I try to pick something similar to rank them on. However, I wish there were an option to just discard that factor and normalize the others for such companies…

Another thing I’ve found useful is comparing capex to cash flow and sales and punishing companies that have poor ratios, but this may only work in a materials-rich index like Canada’s. It helps to use multiyear averages as well.
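A minimal sketch of that idea, using hypothetical three-year histories: average capex over several years before dividing by average operating cash flow and average sales, so one lumpy spending year doesn’t dominate the ratios being penalized.

```python
def capex_ratios(capex_hist, cfo_hist, sales_hist):
    # Multiyear averages smooth out lumpy capital spending.
    n = len(capex_hist)
    avg_capex = sum(capex_hist) / n
    avg_cfo = sum(cfo_hist) / n
    avg_sales = sum(sales_hist) / n
    return {
        "capex_to_cfo": avg_capex / avg_cfo,      # higher = more capital-hungry
        "capex_to_sales": avg_capex / avg_sales,  # higher = worse, per the post
    }

# Hypothetical three-year history ($ millions).
ratios = capex_ratios(capex_hist=[30, 40, 50],
                      cfo_hist=[60, 70, 80],
                      sales_hist=[300, 320, 340])
```

In a ranking system these two ratios would be ranked with lower values scoring better, which is the “punishing poor ratios” part.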

Nisser,

Maybe something like this might work:

eval(FactorB = NA, NodeRank("FactorA"), FactorB)

But that would just give twice the weight to the other factor, no? Ideally I’d want it to just use the value of that factor and spread it among the other ones, i.e. exactly what the normalize button does.

Isn’t that what setting NAs to neutral in the ranking system does?

That gives it an average rank, which is not the same as discounting it altogether. That would also screw up some of my other factors that use the eval function. As an example it would rank companies with low FCF lower than ones that don’t have free cash at all, which is not what I’d want.

Marko,
I usually use a universe that I have attempted to “clean up” somewhat, filtering out companies for which specific values to be ranked are NA, because the ranking system hopefully works better with them removed. I cannot guarantee that is the prudent thing to do, but it is one way to consider.

You can also try NAs negative, or using an Eval statement to apply a bottom of the list value in place of NA. In the end, you will need to assure yourself that the approach used is logical and appropriate. And expect that some good candidates will be dropped.

I see where you are going with this. I have the same feeling about many of my ranking systems. Why not just throw out the NAs and renormalize the ranking system’s weights? That seems more sensible than giving NAs an “average” rank.

As a workaround, you could nest the noderank(s) under a conditional rank. The noderank would reference a composite rank of many factors. For example, a ranking system which partitions analyst factors from other fundamental factors might invoke a conditional rule to use analyst factors only if/when there is analyst coverage. Otherwise (if no analyst coverage), the noderank(s) would spread the weights of the ranking among the other fundamental factors.

A similar schema could be used for stocks with and without CompuStat coverage (e.g., in order to look at technicals only).

This potential workaround might have the intended effect of discounting the NAs while “renormalizing” the weights among previous nodes.
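The drop-and-renormalize behavior being requested in this exchange can be sketched in a few lines of Python: ignore NA factors entirely and rescale the surviving weights, instead of assigning NAs a neutral rank. The factor ranks and weights below are hypothetical.

```python
import math

NA = math.nan

def weighted_rank(factor_ranks, weights):
    """Combine per-factor ranks (0-100) into one score, dropping NA factors
    and renormalizing the remaining weights -- rather than assigning NAs
    a neutral (average) rank."""
    kept = [(r, w) for r, w in zip(factor_ranks, weights) if not math.isnan(r)]
    if not kept:
        return NA
    total_w = sum(w for _, w in kept)
    return sum(r * w / total_w for r, w in kept)

# A stock with full data vs. one missing its analyst-sentiment rank.
full    = weighted_rank([80.0, 60.0, 40.0], weights=[0.5, 0.3, 0.2])
missing = weighted_rank([80.0, NA,   40.0], weights=[0.5, 0.3, 0.2])
```

Note that the second stock’s score is driven entirely by the factors it actually has data for, which is what the “renormalize” feature request amounts to.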

Why not create a feature request for a “re-normalize” ranking method which acts as you describe?

Thanks to all contributors to this thread. It’s really difficult to find people willing to share their good ideas or results. I hope all this will help people like me who are struggling to make money while riding up and down the “Russian mountain.” I have to digest all this, but for now, thanks in advance. A particular thanks to yuvaltaylor, who started this thread.
I’d also be pleased to see the graphs of some of your best live portfolios over the last (e.g.) 5 years, just to see how high the mountain is that I have to climb. :slight_smile:

I haven’t been doing this that long (been on the site less than a year), so take all my comments with a significant and unproven grain of salt. Consider what I’ve said as my learnings after about 7 months or so of study. I am investing live money in some of the system now though, so my fingers are crossed it holds up :wink:

Hey primus, just wanted to say I really appreciated this insight. Good thoughts here and interesting/different ways of thinking about it instead of just small/illiquid.

I have an employer retirement broker account that lets me choose my own ETFs and stocks, but it is very restrictive on trading: I’m only allowed a few trades a year. So I’ve just been buying $VMOT, which is Wes Gray/Alpha Architect’s value-momentum ETF, essentially aggregating their four ETFs (International Momentum, International Value, US Value, and US Momentum Quantitative) under one wrapper. This is the closest I can come to replicating what I’m doing in P123 with one commercial third-party product that I can buy and hold without trading. I’m familiar with Wes Gray’s work and approach, as he’s written several books and is fairly transparent, but of course his exact methodology is proprietary.

My own P123 books are a combination of value- and momentum-based ports. It’s uncanny how closely my P123 ports and $VMOT correlate. One may go up or down more than the other - generally my P123 book does better than $VMOT, maybe because I’m able to buy smaller caps and trade more often - but they almost always go up or down together, even at odds with the overall market. Value-momentum does its own thing and often doesn’t reflect what the broader market is doing. It’s striking how many of us are all skinning the same value-and-momentum cat in different ways; ultimately, success will be driven by who is able to stick with it over the long haul and who can most reduce the friction costs of slippage and taxes. That’s just my two cents.

ImanRoshi,

$VMOT, which is Wes Gray/Alpha Architect’s value momentum ETF, essentially aggregating their four

  1. International Momentum,
  2. International Value,
  3. US Value,
  4. US Momentum Quantitative
    ETFs under one wrapper.

Please provide the ETF symbols for the above four funds.

Thank you for sharing knowledge and help.
Kumar :sunglasses:

I got it.

Thanks
Kumar :slight_smile: