Perturb Your Simulation With The Random Function

The problem with stock trading systems is that they are usually based on an ideal: a single simulation that has been either consciously or subconsciously optimized. System perturbation to the rescue. It won’t solve all optimization issues, but it can generate more realistic profit/loss statistics than you might be able to obtain by other means.

The Random function can be used to inject noise or randomness into a ranking system node or Buy/Sell rule. The randomness will cause differing results every time the simulation is run. After multiple runs, the results can be averaged, providing figures that are less optimized, less ideal, and more realistic than the “one-off” simulation.
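
For readers who want to see the mechanics outside the platform, below is a minimal Python sketch of the idea, using made-up data and hypothetical helper names (it is not Portfolio123 code): each candidate buy is randomly dropped, the perturbed run is repeated many times, and the annualized return and maximum drawdown are averaged across runs.

```python
# Minimal sketch of the perturbation idea, using fake data -- not Portfolio123 code.
import numpy as np

rng = np.random.default_rng()

def perturbed_run(weekly_candidate_returns, keep_prob=0.5, n_hold=10):
    """One perturbed pass: each week drop candidates with probability (1 - keep_prob),
    hold the first n_hold survivors equal-weighted, and compound the result."""
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for week in weekly_candidate_returns:            # week = returns of that week's candidates
        kept = week[rng.random(len(week)) < keep_prob][:n_hold]
        r = kept.mean() if len(kept) else 0.0        # sit in cash if everything was dropped
        equity *= 1.0 + r
        peak = max(peak, equity)
        max_dd = max(max_dd, 1.0 - equity / peak)
    years = len(weekly_candidate_returns) / 52
    return equity ** (1 / years) - 1, max_dd         # annualized return, max drawdown

# Fake data: 5 years of weekly returns for 30 candidates, assumed listed in rank order.
data = [rng.normal(0.003, 0.04, 30) for _ in range(260)]
runs = np.array([perturbed_run(data) for _ in range(50)])
ann, dd = runs[:, 0], runs[:, 1]
print(f"avg annualized return {ann.mean():.1%} +/- {ann.std():.1%}, avg max drawdown {dd.mean():.1%}")
```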

How To Perturb Your Simulation With The Random Function

Steve

Hi Steve,

Point well taken.

Another tool that is, in my opinion, heavily underused is the rolling backtest. Below are two examples with 1-year investment periods and a one-week offset: the latter contains your random buy rule.



Interesting - I’m not sure how you analyze that in a consistent way though. I’ll give it some thought.
Steve

“Declining” 50% of the buys might be a bit too much, in my opinion. I like that your idea with Random could also be used to cut off, say, 10% (Random <= 0.1) or 20% (Random <= 0.2). In the past I had used EvenID = true/false, but Random is much, much better. Thanks, Steve!

You mean Random < 0.9 or Random < 0.8.

I use 50% because one stock could be contributing a great deal to the overall result. If it were thrown out 10% of the time, it would still be included 90% of the time. The anomaly would still heavily weight the annualized return but could only be picked up by examining the standard deviation of annualized returns. I guess the problem is that I will be using the perturbed average annualized return in future posts. I don’t want it contaminated by single stock anomalies :slight_smile: Maybe I’m a bit paranoid, I don’t know.
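
As a quick aside on the threshold arithmetic, a stand-alone Python check (illustrative only, not platform code) confirms that a buy rule of the form Random < p passes roughly a fraction p of candidates, so Random < 0.9 declines about 10% of buys and Random < 0.5 declines about half:

```python
# Stand-alone check of the threshold logic -- illustrative only.
import random

trials = 100_000
for p in (0.9, 0.8, 0.5):
    kept = sum(random.random() < p for _ in range(trials))
    print(f"Random < {p}: passed {kept / trials:.1%} of candidate buys")
```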

Steve

Thanks so much for this, Steve. (All your sharing is appreciated, but as a habitual over-optimizer, thanks for this one in particular :wink: )

Comparing the two histograms gives you a good visual sense of the randomness effect. Bear in mind that both rolling tests here have over 800 one-year simulation periods, so they are perhaps statistically more representative than an optimisation study with 20 permutations. What do you think?
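
For anyone wondering where the 800-plus periods come from, here is a rough stand-alone sketch (hypothetical helper, not a platform feature): every weekly start date opens a new 52-week holding period, so a multi-year weekly equity curve yields hundreds of overlapping one-year returns that can be binned into a histogram.

```python
# Sketch of tabulating rolling one-year returns from a weekly equity curve.
import numpy as np

def rolling_one_year_returns(weekly_equity):
    eq = np.asarray(weekly_equity, dtype=float)
    return eq[52:] / eq[:-52] - 1.0          # one return per weekly start date

# Fake 900-week (~17-year) equity curve -> 848 overlapping 1-year returns.
rng = np.random.default_rng(0)
equity = np.cumprod(1 + rng.normal(0.002, 0.03, 900))
rets = rolling_one_year_returns(equity)
print(len(rets), f"periods, median 1-yr return {np.median(rets):.1%}")
```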

Are you sure that random is reseeded for each of the 800 passes? Just asking.

Walter

It would appear that the 800 passes are all based on the same simulation. There are not 800 different simulations in the rolling backtest.

Not sure about that, Walter.

Perhaps it is better to stick to an evenID test in this case.

Just so we don’t get off track, the purpose of my post is to generate annualized return and maximum drawdown in a consistent fashion that you can compare against other models and model configurations. Visual criteria don’t work for me. If you simply want a robustness test, then yes… knock yourself out with evenID, rolling backtests, etc. There are all sorts of wonderful tests you can perform.

Steve

The average results of various random > 0.5 tests are always going to approach the result of a test with twice the number of held stocks. So it’s not right to compare the 11.3% return to the 15.3% return: the former is basically for twenty stocks and the latter is for only ten. If your ranking system has any merit, you’re almost always going to get higher returns for ten stocks than for twenty. And you’ll therefore always get lower average results with random tests. The best way to compare random tests to real results would be to vary the number of stocks in your sim by the number of stocks that random excludes.
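
One way to explore Yuval's point off-platform is a toy Monte Carlo with made-up return numbers (a sketch, not anyone's actual method): compare the equal-weight return of the top 10 and top 20 ranked stocks with the average over many passes in which each candidate survives a 50% coin flip and the ten best survivors are held.

```python
# Toy Monte Carlo for the "random 50% buy rule vs. more held stocks" question.
import numpy as np

rng = np.random.default_rng(1)
ranked_returns = np.sort(rng.normal(0.01, 0.05, 40))[::-1]   # best-ranked stock first

top10 = ranked_returns[:10].mean()                            # straight 10-stock pick
top20 = ranked_returns[:20].mean()                            # straight 20-stock pick

perturbed = []
for _ in range(20_000):
    survivors = ranked_returns[rng.random(40) < 0.5]          # keep each candidate half the time
    perturbed.append(survivors[:10].mean())                   # hold the 10 best survivors

print(f"top 10: {top10:.2%}   top 20: {top20:.2%}   avg perturbed: {np.mean(perturbed):.2%}")
```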

Yuval, surely you mean “random < 0.5”.
Interestingly, if you exclude all the selected stocks with “random < 1.0”, the randomness disappears and you always get the same result.
That is not obvious, since the documentation says Random returns a random number uniformly distributed between 0 and 1. Any explanation for this?

Yuval - you certainly make a good point. My take on this is that a ten-stock portfolio will inherently be more volatile than a twenty-stock portfolio. Volatility ultimately biases towards lower average profits and higher drawdowns, for the simple reason that a 50% drop in price is not undone by a 50% rise in price. Thus a ten-stock portfolio, when averaged over multiple iterations, will converge on a lower annualized return and higher maximum drawdown than an equivalent twenty-stock portfolio. Remember that the twenty-stock portfolio is rebalanced every week, whereas the end-statistics of the ten-stock portfolios are averaged once.
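
The arithmetic point here, that a 50% drop is not undone by a 50% rise, is the familiar volatility-drag effect; a tiny stand-alone example with made-up return series (not tied to any particular sim):

```python
# Two made-up return series with the same 0% average per-period return.
calm   = [0.05, -0.05] * 50      # alternating +/-5%
choppy = [0.50, -0.50] * 50      # alternating +/-50%

def compound(returns):
    equity = 1.0
    for r in returns:
        equity *= 1.0 + r
    return equity

print(f"calm series ends at   {compound(calm):.4f}x of start")   # ~0.88x
print(f"choppy series ends at {compound(choppy):.2e}x of start") # essentially wiped out
```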

Unfortunately, I can’t run tests to demonstrate this right now as I am doing other things. Perhaps later I can run some tests to see if I am right.

The other thing is that it isn’t strictly twenty stocks; for that to be the case, the algorithm would have to accept all stocks ranked below the tenth. In actuality, the lower-ranked stocks have a 50% chance of being rejected as well.

Steve

Random <= 1.0 will always be true because, as you say, Random returns a random number uniformly distributed between 0 and 1. Same as putting “True” into the buy rule.