R2Gs ranking

I was playing with the R2Gs models list and once again noticed strange behavior. I'm not saying this is an error, at least not yet.

Keeping all the filters constant and changing only the “Model status” filter, I get a different, sometimes VERY different, R2Gs sorting order by my rank. For example, under “All models,” model 1 is #5 in the list and model 2 is #10. Changing the filter to “Open models” only, model 1 moves to #12 and model 2 to #8. It's not just that positions shift relative to the top of the list; the ORDER itself changes, sometimes dramatically.

Is this the way ranking is supposed to work? Personally, I expected no change in the list ORDER.

What rank criteria are you using?

I've seen this effect with different rank criteria combinations. For example, just now I started from fully reset filters and modified only the Out-of-sample profile (see attached).

Model status: All models
10 Sector Specialist: Healthcare
11 TWY 5 Stks Mktcap>$100M Liquidity>$1M VIX Mkt Timed
16 Alpha Max - Small Cap 100Mil < MktCap < 2Bil Good Liquidity 63% Win Rate
22 Keating’s 20 stk med turnover (mktcap>$50m)
43 Aggressive Value

Model status: Full models
3 Keating’s 20 stk med turnover (mktcap>$50m)
4 Sector Specialist: Healthcare
5 Alpha Max - Small Cap 100Mil < MktCap < 2Bil Good Liquidity 63% Win Rate
6 Aggressive Value
7 TWY 5 Stks Mktcap>$100M Liquidity>$1M VIX Mkt Timed

This is what I see now. I've seen the same thing with other models, with other criteria combinations, and for different Model status settings.


I think that I see what’s going on.

On P123, ranks are always against the universe, and you’re changing the universe. I realize that it may appear to be a trivial change in this case, but it’s not.

I made a spreadsheet and attached it to this post to illustrate this.

Column A started out as three-character sequential tickers (ABC, DEF, … XZA … XYZ). The spreadsheet was re-sorted, so now it's just gibberish. :slight_smile:

The tops of Columns B through F hold weights, as if you had weighted your ranking criteria. Rows 3 through 28 of Columns B through F are just random numbers from 1 to 99.

Column H is the sum of weightings times “raw data” and Column I then percentile ranks Column H. The spreadsheet is sorted by Column I. Imagine that Column I is the “All Stocks” rank. (I’m assuming that the random number 1 to 99 is good enough as a rank, by the way. We actually deal with ties a little more elegantly, but I ignored it in this case.)

Column K is a coin flip. If it’s 1, it will include the company in the rankings that follow. If it’s a zero, it won’t.

Column V sums the survivors' weighted ranks, and then Column W ranks those raw sums.

You'll notice that Column W is in a different order than Column I. You can recalculate all you want, and you'll see the rankings change each time. Remember, the only moving part in this spreadsheet is Column K, and all it's doing is giving each row a 50% chance of being tossed.

This is why the ranks change: It’s not just the end result that’s being resorted, but all of the individual rank criteria.
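The spreadsheet's mechanism can be sketched in a few lines of Python. This is a simplified stand-in for the real ranking (not P123's actual code): it uses plain ordinal ranks instead of percentiles, ignores ties, and the "models" A, B, C, Z1, Z2 and their factor values are made up. It shows the same effect: two items can swap order purely because other items leave the universe, since each factor is re-ranked within whatever universe remains.

```python
def composite_rank(universe, factors, weights):
    """Rank each factor within the universe (1 = worst), then weight-sum.

    A simplified stand-in for percentile ranking; ties are ignored.
    """
    scores = {name: 0.0 for name in universe}
    for factor, weight in zip(factors, weights):
        # Re-rank this factor within the CURRENT universe only.
        ordered = sorted(universe, key=lambda name: factor[name])
        for rank, name in enumerate(ordered, start=1):
            scores[name] += weight * rank
    return scores

# Hypothetical models with two raw factor values each (made-up data).
f1 = {"A": 40, "B": 10, "C": 50, "Z1": 20, "Z2": 30}
f2 = {"A": 25, "B": 45, "C": 35, "Z1": 5, "Z2": 15}

full = composite_rank(["A", "B", "C", "Z1", "Z2"], [f1, f2], [1, 1])
sub  = composite_rank(["A", "B", "C"],             [f1, f2], [1, 1])

print(full["A"] > full["B"])  # True: A beats B in the full universe
print(sub["A"] > sub["B"])    # False: B beats A once Z1/Z2 are filtered out
```

The flip happens because Z1 and Z2 sit between A and B on one factor but not the other, so removing them shrinks the two factors' rank gaps by different amounts.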


Rank Example.xls (29.5 KB)

I suspected some calculation magic like this; that's why I'm not calling it an error. But is it logically OK? I isolated two tickers in Column K by putting 1 for both and started recalculating, and the tickers' order changes just as on P123. But is it OK that two models compare differently in ORDER (not just magnitude) because of the presence of OTHER models in the list?

Relative comparisons are part of the ranking process, and when you change the “relative to” part, you’ll also get different results.

Well, more specifically, changes to relative ranks of the kind we're observing here are a property of a multi-variate system. If you are only using a single factor, this shouldn't/won't happen. (A single set of numbers can only order one way.)

I'm not even sure what an alternative method would look like. The only alternative I can think of off the top of my head is to turn the criteria into literal on/off filters, but eliminating potential systems from R2Gs isn't really in anyone's interest.

I do understand the math of the process, but I doubt it corresponds to what users want to get. For sure, this is probably subjective and down to user preferences.

Personally, I'm puzzled by this way of ranking. Is it OK to rank against a constantly changing universe? Right now I'm thinking I would prefer to rank against the constant universe of all models, and accordingly to see the rank result for a filtered universe keep the scores from the full universe rather than being rescaled from 100 down to 0. In addition, this way of ranking seems very unstable in its results. But this is just my understanding of ranking.

Anyway, thanks for the explanation; it was useful.

Sorry, I just wanted to make sure that you understood: In that spreadsheet, I was throwing out companies randomly to create random sub-universes to show how the same numbers could result in different rankings depending on how many and which companies were present in said sub-universes.

On the site, the determination of sub-set is not random, and is not constantly changing. If you’re ranking against open R2Gs, the only way for that universe to change is for an R2G that was on it yesterday to fill its last seat, in which case the situation has changed, but the sub-set itself has not.

That's OK for the Excel sheet, but on the site the process looks the same. Changing the Model status filter constantly changes the sub-setting; it's a kind of randomizing. For example, I rank against all the models with the “All models” filter and get one sorting, to explore how my subscribed/watched models compare to other, non-subscribed models. Then I switch to Subscribed/Watched models only, to see how they rank within my own universe, and I see a changed ordering of models. So it looks much like your spreadsheet, and not the way I would expect.