yuvaltaylor
Re: Are we over fitting?

How does one average ranking systems together?


Pretty basic. Take the systems that work best and average the weights of each node. Some nodes might be missing in some systems, so those would get 0% weight in those systems and maybe 2% or 8% weight in others, and you just use the average.
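To make that concrete, here is a rough Python sketch of the averaging step (the node names, weights, and dict-of-weights representation are made up for illustration; this isn't Portfolio123 code):

# Sketch: average node weights across several optimized ranking systems.
# Each system is a dict of node name -> weight (percent); a node missing
# from a system simply counts as 0% there.

def average_ranking_systems(systems):
    all_nodes = set()
    for weights in systems:
        all_nodes.update(weights)
    return {
        node: sum(w.get(node, 0.0) for w in systems) / len(systems)
        for node in sorted(all_nodes)
    }

# Three hypothetical optimized systems with partially overlapping nodes:
systems = [
    {"EPS Growth": 8.0, "Price to Sales": 4.0, "Accruals": 2.0},
    {"EPS Growth": 6.0, "Price to Sales": 0.0, "Momentum": 5.0},
    {"EPS Growth": 10.0, "Momentum": 3.0},
]
print(average_ranking_systems(systems))
# EPS Growth averages to 8%, Momentum to about 2.7%, and so on.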

Yuval Taylor
Product Manager, Portfolio123
invest(igations)
Any opinions or recommendations in this message are not opinions or recommendations of Portfolio123 Securities LLC.

Jan 11, 2022 11:16:05 AM       
abwillingham
Re: Are we over fitting?

Thanks, Yuval. That is very interesting. Is that something you came up with on your own, or did you learn it from someone else? I recall reading in one of your past articles about some backtesting methods you picked up from one of the popular investment books. I can't remember which one. Maybe O'Shaughnessy or O'Neil.

Jan 11, 2022 1:09:42 PM       
rtelford
Re: Are we over fitting?

How does one average ranking systems together?


Pretty basic. Take the systems that work best and average the weights of each node. Some nodes might be missing in some systems, so those would get 0% weight in those systems and maybe 2% or 8% weight in others, and you just use the average.


Yuval, I know you've written about this in the past. Just curious, though: say you take the full universe and optimize factors/ranks for the full universe. Would the sim results be roughly the same as breaking the universe into 4-5 sub-universes with respective optimized ranking systems, and taking their average? Then again, maybe it's the consistency of factor weights across the multiple universes that shows more promise? I'd be curious to hear what your experience has been.

Thanks.

Ryan Telford -- also find me at:
Seeking Alpha
Twitter

Jan 11, 2022 1:40:32 PM       
yuvaltaylor
Re: Are we over fitting?

Thanks, Yuval. That is very interesting. Is that something you came up with on your own, or did you learn it from someone else? I recall reading in one of your past articles about some backtesting methods you picked up from one of the popular investment books. I can't remember which one. Maybe O'Shaughnessy or O'Neil.


I don't remember. I think I came up with it myself. Apologies to whomever I stole it from if not.

O'Shaughnessy does this: "We run 100 randomly selected subperiods . . . For each of the 100 iterations of the bootstrap test, we first randomly select 50 percent of the possible monthly dates in our backtest and discard the other 50 percent. We then randomly select 50 percent of the stocks available on each of those dates and discard the rest. This gives us just 25 percent of our original universe on which to run our decile analysis [bucket returns tests]. We do this 100 times for each factor and analyze the decile return spreads. . . . If we discovered that there were large inconsistencies in the bootstrapped data, we would have less confidence in the results and investigate whether there was any evidence of unintentional data mining inherent in the test."

I did take some inspiration from this . . .
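For what it's worth, a bare-bones Python sketch of that bootstrap procedure might look like the following. The data layout (a dict mapping dates to per-stock pairs of factor value and forward return) and the decile-spread function are assumptions for illustration, not O'Shaughnessy's actual code:

import random
import statistics

def decile_spread(observations):
    # Top-decile minus bottom-decile average forward return for one date.
    ranked = sorted(observations, key=lambda pair: pair[0])  # sort by factor value
    n = max(len(ranked) // 10, 1)
    bottom = statistics.mean(ret for _, ret in ranked[:n])
    top = statistics.mean(ret for _, ret in ranked[-n:])
    return top - bottom

def bootstrap_spreads(data, iterations=100, seed=0):
    # data: {date: [(factor_value, forward_return), ...]}
    rng = random.Random(seed)
    spreads = []
    for _ in range(iterations):
        dates = rng.sample(list(data), len(data) // 2)      # keep 50% of dates
        per_date = []
        for d in dates:
            kept = rng.sample(data[d], len(data[d]) // 2)   # keep 50% of stocks
            per_date.append(decile_spread(kept))
        spreads.append(statistics.mean(per_date))
    # Large inconsistencies across these spreads would reduce confidence
    # in the factor and suggest unintentional data mining.
    return spreads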

Yuval Taylor
Product Manager, Portfolio123
invest(igations)
Any opinions or recommendations in this message are not opinions or recommendations of Portfolio123 Securities LLC.

Jan 11, 2022 11:35:51 PM       
yuvaltaylor
Re: Are we over fitting?

How does one average ranking systems together?


Pretty basic. Take the systems that work best and average the weights of each node. Some nodes might be missing in some systems, so those would get 0% weight in those systems and maybe 2% or 8% weight in others, and you just use the average.


Yuval, I know you've written about this in the past. Just curious, though: say you take the full universe and optimize factors/ranks for the full universe. Would the sim results be roughly the same as breaking the universe into 4-5 sub-universes with respective optimized ranking systems, and taking their average? Then again, maybe it's the consistency of factor weights across the multiple universes that shows more promise? I'd be curious to hear what your experience has been.

Thanks.


I think they'd be pretty similar. It helps me sleep at night to know that my strategy worked well not only on the main universe but on five random subuniverses. Maybe I'm subjecting myself to too much extra trouble. I don't know. I've been doing it this way for a while now.

So let's say you're testing on your whole universe but using five times the holdings you normally would. Do you optimize the weights of your ranking system by increments of less than 2% or 2.5%? That smacks to me of curve-fitting. If not, you'll never get more than 40 or 50 factors. (That is, unless you use composite nodes . . .)

If, on the other hand, you optimize on five different universes, you get a variety of different weights that you can average and you can get a lot more than 40 or 50 factors. If you're like me and you think the more factors the better, then that's a good thing. It also gives me a valuable perspective on how mutable my ranking system can be and how differently it can work for different groups of stocks.

Another thing I do is I look for statistical ties. You take all your results, find the standard deviation, multiply it by 1.96, and divide by the square root of the number of tests you did. Then you look at your top few results. If the difference between them is less than that number I just told you about, then they're statistically tied, and you can average not just the best, but the second best, third best, and so on.

(I don't remember how I came up with the 1.96 times part. I must have read it somewhere.)
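In rough Python terms, the tie check looks like this (the result numbers below are made up; 1.96 is just the two-sided 95% z-value):

import math
import statistics

def statistically_tied_with_best(results, z=1.96):
    # Half-width of an approximate 95% confidence interval on the mean result.
    half_width = z * statistics.stdev(results) / math.sqrt(len(results))
    best = max(results)
    # Anything within that half-width of the best result is treated as a tie.
    return [r for r in results if best - r <= half_width]

# Hypothetical annualized returns from a batch of optimization runs:
results = [21.3, 20.9, 20.8, 19.5, 17.2, 15.0]
print(statistically_tied_with_best(results))  # the top few, which can then be averaged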

What would certainly be different--and maybe better--than dividing your universe randomly would be to divide it by subindustry or size or something else and then take the average of the optimized systems. You'd get more variety that way. The key is to have relatively equally sized universes and not have stocks migrate from one to another.
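A toy sketch of that kind of split follows; the subindustry field and the greedy balancing are assumptions, just to show roughly equal groups in which stocks don't migrate between sub-universes:

from collections import defaultdict

def split_universe(stocks, key="subindustry", n_groups=5):
    # Group whole buckets (e.g. subindustries) so a stock never migrates
    # between sub-universes, then balance group sizes greedily.
    buckets = defaultdict(list)
    for stock in stocks:
        buckets[stock[key]].append(stock["ticker"])
    groups = [[] for _ in range(n_groups)]
    for _, tickers in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        min(groups, key=len).extend(tickers)
    return groups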

The goal here is laid out in this article: https://blog.portfolio123.com/the-magic-of-co...gies-can-improve-results/

Yuval Taylor
Product Manager, Portfolio123
invest(igations)
Any opinions or recommendations in this message are not opinions or recommendations of Portfolio123 Securities LLC.

Jan 11, 2022 11:58:09 PM       
judgetrade
Re: Are we over fitting?

Long-term OOS performance and understanding the causation of the edge is one answer.

My best long-term OOS models are small-cap models.

They have done well since 2011.

They do well because there is a reason for it: big funds cannot play this stuff, and value momentum (Olikea ranking!) is one of the most stable factors out there, especially in the small-cap area.

Hard to trade, yes: regular 20% drawdowns and occasional 50% drawdowns.

A variation of that Olikea ranking system (adding industry momentum, accruals, some quality factors + EPS estimates) has been doing well out of sample since 2019.

All said, this is just the beginning. The best models will not help you if you cannot trade them; the human factor is the much harder thing to master.

I am working on a project which will take those systems and find an allocation based on signal strategies as input. Those signal strategies define the cash level and the system to trade (e.g. capital allocation to cash or to different strategies in the set).

I am asking myself if this is not too much optimization; we will see in the OOS performance!

Best Regards
Andreas

Jan 12, 2022 3:59:51 AM       
Jrinne
Re: Are we over fitting?

(I don't remember how I came up with the 1.96 times part. I must have read it somewhere.)

Yuval,

Just in case you happen to be interested, 1.96 has its own article on Wikipedia:

"In probability and statistics, 1.96 is the approximate value of the 97.5 percentile point of the standard normal distribution."

Link: https://en.wikipedia.org/wiki/1.96

Jim

Great theory, "and yet it moves."
-Quote attributed to Galileo Galilei (1564-1642) gets my personal award for the best real-world use of an indirect proof or reductio ad absurdum.

Jan 12, 2022 4:10:02 AM       
Jrinne
Re: Are we over fitting?

All,

For new members who have not been around long, there are posts above from someone who lost all of the money he had set aside for investing when there was a drawdown in his account. He has shared this in the forum before, so I am not revealing any secrets. I am grateful that he has freely shared much on this forum. He started investing again with money from his income as he continued to work, and by all accounts he is doing well now (as best I understand with the information I have). Good. He is sharing the perspective he has gained about drawdowns above. Double good.

Thank you, everyone, for sharing your experiences. Much can be learned from this forum, including an understanding that people can learn different lessons from the same experiences--or at least take different perspectives on them. It also helps to keep in mind that--like everything on the internet--some details are understandably left out for brevity. It is not the internet's fault (rather it is mine) when I can't remember what has been written before (or was left out entirely) and so can't bring that perspective to what is being written now; this applies to news stories even more than it does to the P123 forum.

Thanks again to everyone for sharing; it is a true learning experience that at times goes beyond investing.

For those who think Daniel Kahneman (Nobel Prize-winning author of Thinking, Fast and Slow) might occasionally have some words of wisdom: he calls the inability to bring outside information into a discussion (or thought process) "WYSIATI," an acronym for What You See Is All There Is. I am not going to apologize for declining to make WYSIATI a personal policy if anyone objects to some perspective on this subject.

Best,

Jim

Great theory, "and yet it moves."
-Quote attributed to Galileo Galilei (1564-1642) gets my personal award for the best real-world use of an indirect proof or reductio ad absurdum.

Jan 12, 2022 4:26:34 AM       
rtelford
Re: Are we over fitting?

How does one average ranking systems together?


Pretty basic. Take the systems that work best and average the weights of each node. Some nodes might be missing in some systems, so those would get 0% weight in those systems and maybe 2% or 8% weight in others, and you just use the average.


Yuval, I know you've written about this in the past. Just curious, though: say you take the full universe and optimize factors/ranks for the full universe. Would the sim results be roughly the same as breaking the universe into 4-5 sub-universes with respective optimized ranking systems, and taking their average? Then again, maybe it's the consistency of factor weights across the multiple universes that shows more promise? I'd be curious to hear what your experience has been.

Thanks.


I think they'd be pretty similar. It helps me sleep at night to know that my strategy worked well not only on the main universe but on five random subuniverses. Maybe I'm subjecting myself to too much extra trouble. I don't know. I've been doing it this way for a while now.

So let's say you're testing on your whole universe but using five times the holdings you normally would. Do you optimize the weights of your ranking system by increments of less than 2% or 2.5%? That smacks to me of curve-fitting. If not, you'll never get more than 40 or 50 factors. (That is, unless you use composite nodes . . .)

If, on the other hand, you optimize on five different universes, you get a variety of different weights that you can average and you can get a lot more than 40 or 50 factors. If you're like me and you think the more factors the better, then that's a good thing. It also gives me a valuable perspective on how mutable my ranking system can be and how differently it can work for different groups of stocks.

Another thing I do is I look for statistical ties. You take all your results, find the standard deviation, multiply it by 1.96, and divide by the square root of the number of tests you did. Then you look at your top few results. If the difference between them is less than that number I just told you about, then they're statistically tied, and you can average not just the best, but the second best, third best, and so on.

(I don't remember how I came up with the 1.96 times part. I must have read it somewhere.)

What would certainly be different--and maybe better--than dividing your universe randomly would be to divide it by subindustry or size or something else and then take the average of the optimized systems. You'd get more variety that way. The key is to have relatively equally sized universes and not have stocks migrate from one to another.

The goal here is laid out in this article: https://blog.portfolio123.com/the-magic-of-co...gies-can-improve-results/


That's great, Yuval, thanks. Your SA piece clarifies things for me, but I have a couple of questions (I also commented on your SA piece):

1. If you took the top 5 or 6 ranked stocks from each strategy (for a total of 20-24), how would the results differ from those of the combined 25-stock strategy?
2. Are the ranking systems you used "public" by chance? It would be great to see the details. If not, I get it ;-)

Thanks,
Ryan

Ryan Telford -- also find me at:
Seeking Alpha
Twitter

Jan 12, 2022 11:57:50 AM       
yuvaltaylor
Re: Are we over fitting?

I can't find those ranking systems, I'm afraid. Sorry about that.

Yuval Taylor
Product Manager, Portfolio123
invest(igations)
Any opinions or recommendations in this message are not opinions or recommendations of Portfolio123 Securities LLC.

Jan 12, 2022 12:48:46 PM       