New "Override Rank Method" feature

Our main method for multi-factor ranking is by percentiles: sorting each factor based on an ascending or descending setting, assigning a percentile rank, and combining the factor ranks using the desired weights. This is a simple, powerful way to do away with a lot of the complexity that comes with more statistical methods like z-scores (we do have a “Normal Distribution” Rank Method, but it’s still experimental; see the end of this post). Outliers are not such a big deal with percentile ranking. What is important is how NA’s are treated. We treat NA’s in two different ways, as described in this document:

www.portfolio123.com/doc/RankingNAs.pdf
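To make the mechanics concrete, here is a minimal sketch in Python/pandas of the general idea. This is an illustration, not P123’s actual implementation (factor_rank and composite_rank are made-up names), and it assumes “neutral impact” means NA’s receive a mid-range rank of 50 while “negative impact” means they receive the worst rank; the precise conventions are described in the PDF above.

import numpy as np
import pandas as pd

def factor_rank(values, higher_is_better=True, na_handling="negative"):
    # Percentile-rank a single factor on a 0-100 scale.
    s = pd.Series(values, dtype="float64")
    pct = s.rank(ascending=higher_is_better, pct=True) * 100
    # NA's get the worst rank ("negative" impact) or mid-range ("neutral").
    fill = 0.0 if na_handling == "negative" else 50.0
    return pct.fillna(fill)

def composite_rank(factors, weights, na_handling="negative"):
    # factors: {name: (values, higher_is_better)}; weights: {name: weight}
    total = sum(weights.values())
    combined = sum(factor_rank(vals, hib, na_handling) * weights[name] / total
                   for name, (vals, hib) in factors.items())
    # Re-rank the weighted combination so the final output is again 0-100.
    return combined.rank(pct=True) * 100

factors = {"pe":     ([12.0, np.nan, 8.0, 30.0], False),  # lower P/E is better
           "growth": ([0.05, 0.20, np.nan, 0.10], True)}
weights = {"pe": 60, "growth": 40}
print(composite_rank(factors, weights, "negative"))
print(composite_rank(factors, weights, "neutral"))

Switching na_handling is the only difference between the two runs; in this toy example the stock with the missing P/E jumps from last place to second when NA’s are treated as neutral. The new override lets you flip this setting without editing the ranking system itself.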

In this new release we made switching the ranking method a simple task. A Ranking System still needs to have a default way that ranking is done, but it can be overridden to quickly gauge the differences where they matter: in sims, rank performances, screen backtests, etc. Depending on your system, some models might perform better if NA’s are penalized, while others might do better if NA’s get a neutral rank.

Using this new feature I quickly discovered that the Chaikin model, which was first designed using “NA’s have neutral impact”, went from an annualized return of 20% to 25% simply by changing the way NA’s are handled. You now see two ports in the P123 models:

A - Chaikin Indicators using NA’s negative impact
Annual: 25%
Sharpe: 0.71

B - Chaikin Indicators using NA’s neutral impact
Annual: 20.8%
Sharpe: 0.56

They both use the same ranking system, ‘Chaikin Indicators’, which has the default set to NA’s negative impact. The B port overrides this setting to neutral impact in the Ranking & Universe section.

The difference is quite impressive. Hopefully I’ll have more theories soon on why this is so.

Marco

PS: We will revisit the “Normal Distribution” ranking method soon. There are several things that need to be updated, like data transformations, trimming, etc.

I’ve re-run another P123 model, the Stanford contest model Value Sentimentum, which shows startling differences using different ranking methods:

Value Sentimentum - NA’s neg impact:
Annual return: 13.6%
Sharpe: 0.32

Value Sentimentum - NA’s neutral impact:
Annual return: 22.1%
Sharpe: 0.64

Given these results, this is an area that warrants further study. Let us know what you find using the new, easy way to override a ranking system’s ranking method. At the moment I only use empirical tests to choose one method over the other, which could just be another case of curve-fitting.

So I pose this question: why does NA’s negative impact work better for the Chaikin model, while the Stanford model improves with NA’s neutral impact? We’re talking about an annual difference of 5-8%!

Marco

Marco,

Until last week I too had been getting excellent results by employing NA’s-neutral ranking systems, very similar to the Value Sentimentum observation you make above (I’ve added Drawdown & Sortino):

Value Sentimentum - NA’s neutral impact:
Annual return: 22.1%
Sharpe: 0.64
Sortino: 0.84
Alpha: 19.71
Drawdown: -49.64%

But sometime in the middle of last week (Aug 21-25), several of my models deteriorated significantly, so I thought I would re-run your Value Sentimentum model (to see if the changes were system-wide, or just a product of my model design).

Here’s the current snapshot for Value Sentimentum (NA-Neutral); I changed nothing in the model:

Value Sentimentum - NA’s neutral impact:
Annual return: 20.07%
Sharpe: 0.57
Sortino: 0.76
Alpha: 17.97
Drawdown: -59.41%

It’s not a massive difference, but there is a drop across all statistics, with the change in drawdown being significant.

I am guessing that all of the NA-Neutral models deteriorated due to some data changes that were made, but I can’t know for sure without your input.

Is it likely that the changes that you described in the following post led to the lower performance?

Thread: Problem with “fallback” during pre-announcements

Any insight you could provide would help quite a bit.

Thanks!

Yes, that’s most likely the cause. I found another small problem with CompleteStmt, and therefore with what determines final data. The fix will go live tonight, and I’ll post details soon. I’m not sure how to explain your differences; my main concern right now is to make sure we do everything right. Let me know if tonight’s fix affects you. Thanks

For the full period available at P123, my main sim did marginally better with negative impact than neutral impact, though I wouldn’t read too much into the small difference.

Interestingly, it did better with neutral during the first part of the period and then negative took the lead in the latter part.

I’m sticking with negative impact, not so much because it slightly outperformed, but because if the valuation-related nodes in my model all come out NA, I don’t want the model to rely exclusively on the one remaining node.
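To put rough numbers on it (hypothetical weights and ranks, and assuming a neutral NA gets a mid-range rank of 50):

for mode, na_rank in [("negative", 0.0), ("neutral", 50.0)]:
    sentiment_rank = 90.0
    # 75% valuation node (all NA for this stock), 25% sentiment node
    composite = 0.75 * na_rank + 0.25 * sentiment_rank
    print(mode, composite)  # negative -> 22.5, neutral -> 60.0

With neutral handling, a stock with no valuation data at all still lands above the median purely on the strength of the one remaining node; with negative handling it sinks toward the bottom regardless of sentiment.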

Is there any way to get neutral impact on a node-by-node basis? I could maybe use that.

Al

The neutral ranking for N/A’s is essential for running short rankers; otherwise there is no easy way to tell which bottom bins should be used to make the stock selections. Paradoxically, though, I have been unable to verify this, since my best short sims are performing worse after the Comstat cutover, even with the N/A handling changes, due to Comstat data and rule changes.
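A small sketch of the problem, assuming negative impact assigns NA’s the worst rank (made-up factor values):

import numpy as np
import pandas as pd

vals = pd.Series([np.nan, np.nan, np.nan, 5.0, 10.0, 20.0])  # higher is better
pct = vals.rank(ascending=True, pct=True) * 100
print(pct.fillna(0.0).sort_values())
# Negative impact: the three NA stocks crowd the bottom bin at rank 0, while
# the genuinely weakest stock (5.0) sits above them at ~33, so the bottom
# bins mix data-poor stocks with fundamentally weak ones. With neutral
# impact (fillna(50.0)) the bottom bins hold only real laggards.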