Why and how systematic strategies decay

All,

James (ustonapc) sent me a paper that may give some insight as to how much we can expect factors to decay out-of-sample. And perhaps the likely reason(s) for decay (which could vary from factor to factor of course).

Why and how systematic strategies decay

According to the authors, their results confirm the “….out-of-sample performance drop of on average 50% that was reported by other authors.”

The authors conclude that the most important factor in this drop is the date of publication, suggesting that it is getting harder, as time moves forward, to find good factors that have not already been published. I am guessing this is correct; it makes sense intuitively anyway.

Decide for yourself whether this is because authors need to use more complex factors (with more potential for overfitting) in order to publish new factors that are significant, whether people are getting quicker and better at arbitraging away the published inefficiencies, or whether the authors simply fail to answer this question adequately.

Jim

Yes, in one of Larry Williams’ books written back in the 1980s, he suggested that you should expect half the profits and double the drawdown that your simulation indicated, so I am not surprised by the 50% OOS observation. Larry Williams invented the Williams %R indicator and won the Robbins Trading competition in 1987 by turning $10K of real money into $1 million in one year. On the 10th anniversary (1997) he entered his daughter, Michelle Williams (whom you may recognize as a Hollywood actress), into the same contest, and she won, turning $10K into $100K.

Given this observation made back in the 1980s (half the profit, double the drawdown), I don’t think this has anything to do with “getting harder”, but more to do with the phenomenon of data mining: we tend to select the strongest factors, which are overachieving in-sample and then mean-revert out-of-sample.
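That selection effect is easy to demonstrate with a toy simulation (this is my own sketch, not from the paper, and all the parameter values are made up for illustration): generate many candidate strategies that all share the same modest true edge, pick the one with the best in-sample performance, and then look at how that winner performs on fresh data. The in-sample number is inflated by selection; the out-of-sample number regresses back toward the true edge.

```python
import numpy as np

rng = np.random.default_rng(0)

n_strategies = 500        # hypothetical candidate factors, all with the same true edge
n_days = 252              # one year of daily returns per sample
true_mean_daily = 0.02    # small genuine daily edge (arbitrary illustration value)

# In-sample and out-of-sample returns are drawn from the SAME distribution:
# every strategy has an identical true mean, so in-sample differences are pure noise.
in_sample = rng.normal(true_mean_daily, 1.0, (n_strategies, n_days))
out_sample = rng.normal(true_mean_daily, 1.0, (n_strategies, n_days))

# Select the strategy with the best in-sample mean (the one that gets "published").
best = in_sample.mean(axis=1).argmax()

is_perf = in_sample[best].mean()    # inflated by selection bias
oos_perf = out_sample[best].mean()  # regresses toward the true edge

print(f"in-sample mean daily return:     {is_perf:.4f}")
print(f"out-of-sample mean daily return: {oos_perf:.4f}")
print(f"true mean daily return:          {true_mean_daily:.4f}")
```

The winner looks far better in-sample than its true edge warrants, and its out-of-sample performance drops back toward the shared true mean, which is exactly the mean-reversion of overachieving factors described above.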