Risk Reduction with AI: a first look


I invest in individual stocks and use ETFs for risk reduction and diversification. I wouldn't be me if I did not look at a machine learning approach to ETF selection for risk reduction.

As a start--to avoid overfitting and cherry-picking--I used just SPY and TLT. I wanted to see if ML might improve on the old-school 60/40 split.

I started by using Portfolio Visualizer to find risk parity for SPY and TLT over the last 46 days. The 46-day lookback was arrived at by optimization, and I (or rather Portfolio Visualizer) walked this forward.
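For anyone who wants to tinker, here is a minimal sketch of a lookback-based risk-parity weighting using inverse trailing volatility as a stand-in (Portfolio Visualizer's exact method may differ). The data below is random placeholder data, not SPY/TLT history.

```python
# Sketch of two-asset "risk parity" via inverse trailing volatility, lagged one
# day to avoid look-ahead. The 46-day lookback matches the post; the returns
# are synthetic placeholders.
import numpy as np
import pandas as pd

def inverse_vol_weights(returns: pd.DataFrame, lookback: int = 46) -> pd.DataFrame:
    """Weight each asset by the inverse of its trailing volatility, renormalized."""
    vol = returns.rolling(lookback).std()
    inv = 1.0 / vol
    weights = inv.div(inv.sum(axis=1), axis=0)
    return weights.shift(1)  # use yesterday's weights when trading today

# toy example: random daily returns standing in for SPY (higher vol) and TLT
rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0, [0.012, 0.007], size=(252, 2)),
                    columns=["SPY", "TLT"])
w = inverse_vol_weights(rets).dropna()
print(w.tail(1))  # rows sum to 1; the lower-vol asset gets the larger weight
```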

I then used various machine learning methods to find the probability that TLT would outperform SPY. (This is somewhat simplified for the post, but it is pretty accurate.)

Using a Random Forest classifier gave good returns (better than SPY) and a good risk profile. But the Brier score (and Brier skill score) showed that it was not really a good predictor of the probability that TLT would outperform SPY, and the range of its probability predictions was wide.

Logistic regression did better on the Brier skill score (better than the reference calculation), and its probabilities were in a narrow range. Maybe too narrow to be very useful, but realistic. As the literature would suggest, a Gaussian Naive Bayes classifier was not so good for probabilities.
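To make the scoring step concrete, here is a minimal sketch of fitting a classifier to predict P(TLT outperforms SPY) and computing the Brier score plus a skill score against a base-rate reference. The features and labels are synthetic placeholders, not my actual inputs.

```python
# Brier score and Brier skill score for probability predictions.
# Skill > 0 means the model beats always predicting the training base rate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                         # placeholder features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # 1 = TLT beat SPY

X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]
model = LogisticRegression().fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

bs = brier_score_loss(y_te, p)                        # lower is better
bs_ref = brier_score_loss(y_te, np.full_like(p, y_tr.mean()))
bss = 1.0 - bs / bs_ref                               # Brier skill score
print(f"Brier={bs:.3f}  reference={bs_ref:.3f}  skill={bss:.3f}")
```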

Wikipedia on Brier Score

Finally, I just multiplied the SPY and TLT proportions recommended by risk parity by each ETF's probability of outperformance from the logistic regression, and normalized the result.
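That blending step is simple enough to show in a few lines. The numbers here are illustrative, not the ones from my run.

```python
# Blend risk-parity weights with modeled outperformance probabilities:
# multiply elementwise, then renormalize so the weights sum to 1.
def blend(rp_weights: dict, probs: dict) -> dict:
    raw = {t: rp_weights[t] * probs[t] for t in rp_weights}
    total = sum(raw.values())
    return {t: v / total for t, v in raw.items()}

rp = {"SPY": 0.35, "TLT": 0.65}   # hypothetical risk-parity split
p = {"SPY": 0.55, "TLT": 0.45}    # P(each ETF outperforms the other)
out = blend(rp, p)
print(out)  # tilts toward SPY relative to risk parity alone
```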

I uploaded this "dynamic allocation" back into Portfolio Visualizer. I got something that slightly underperformed SPY with less drawdown and a better Sharpe ratio (image).

I have reason to believe that by replacing SPY with a mixture of ETFs one can do better. SPY can be replaced, in part or completely, by individual stocks but also GLD or REIT ETFs to further reduce the volatility.

This can then be mixed with whatever riskier, higher-beta strategy one believes in as part of a strategy to control risk. Or someone might mix this with a more conservative fund (like AGG). Again, to approximate whatever risk profile one desires.

This is a first attempt that, probably, I will not end up using. I spent maybe 2 hours on this and I will make a guarantee that it is flawed at this point. Pros like Chaim, who recently posted on risk reduction, probably have better strategies at the end of the day. Other people with finance degrees (who are not necessarily in the business of constructing portfolios for customers) have also posted on this and probably have some more developed ideas.

Edit: FWIW here are the drawdowns for just risk parity alone (first) and then AI discussed above. Not much difference. The AI had better annualized returns and a better Sharpe ratio (compared to risk parity alone) but it was not dramatic. This is consistent with the ranges in probability predicted by the logistic regression being narrow (above). I don't know if this can be improved upon. As I said a first look that I am not selling or even recommending to anyone.

Personally, I'm still not funding it. Some of my portfolio intended for risk management is managed by professionals and you might consider doing the same if you are investing in some high-beta strategies (as I do).


Attachment: ML SPY TLT.png

Attachment: Drawdowns.png

Attachment: AI .png

Great theory, "and yet it moves."
-Quote attributed to Galileo Galilei (1564-1642) gets my personal award for the best real-world use of an indirect proof or reductio ad absurdum.

Jan 7, 2022 6:25:18 AM       
Edit 25 times, last edit by Jrinne at Jan 7, 2022 10:08:07 AM
Re: Risk Reduction with AI: a first look

Thanks for sharing, Jim.

The best research I have found on risk parity is "". Some of their funds have 85 different futures contracts.
I'm not sure how you could use AI to minimize risk, but I do constantly look for better ways to reduce it. Like anything, the assets you pick and the timing determine a lot.

I am currently running my risk parity with these high-level books, each of which contains multiple sub-books and multiple assets. Each has a different lookback and ranking process. Then at the high level I trade the ones with the best Sharpe ratio, and I rebalance those monthly.

I also use a permanent allocation to the ETF TAIL. This has about a 4% negative carry each year, but when volatility rises it minimizes drawdowns. At least it has in the past.

You can see that my books are heavily weighted to hard assets, since I believe inflation is coming. Backtesting would not tell you this; I have made a guess based on my macro view. If I'm wrong it won't cost me the farm: the model will slowly move out of these inflationary assets.

One idea for the AI would be to pick the assets that best perform in each book and then against all the other books. Since the permutations and combinations are endless an AI program should be able to find patterns that I have no chance of ever finding.


#Crypto and leveraged assets top 2
#16 DOW 30
#Realestate and Credit
#16 US Market Strategy
#16 Metals
#16 Energy
#16 Commodities High
#16 Nasdaq100 leaders
#16 Hedge Aggressive

Jan 7, 2022 3:40:34 PM       
Re: Risk Reduction with AI: a first look


Thank you. As you already know, risk parity can be used for a lot of different things.

While valuable in its own right, it is a rational start for many strategies for multiple reasons. These benefits are not limited to AI, of course.

Thank you for your ideas on how to use risk parity.




Jan 7, 2022 6:11:14 PM       
Re: Risk Reduction with AI: a first look


Here is an OVERFITTED version of the above. Again, the selection of just 2 very standard ETFs was an attempt to reduce overfitting and cherry-picking. Still, while admitting the problems of overfitting, I do use something like the model below for part of my portfolio. But some of my portfolio is managed by a professional and using a professional would be my recommendation for most members looking for risk reduction.

For people who want to increase their risk there is leverage (I am not against that). In fact, the professional reducing my overall risk also uses some leverage for some assets. Everyone can decide how to do that on their own. It would be hypocritical of me to say anything other than that one should be mindful of risk, and perhaps of which assets to leverage and when, should you decide to use it.

The original post was mainly an attempt to dissect why this last model MAY work to some extent (other than being overfitted).

First, relative strength (if it works at all) does not work for the reasons most people think, IMHO. Or at least not just for those reasons.

Relative strength can also be looked at as a random sampling method (over time). When one considers this, then in many regards it becomes like a randomized weighted majority algorithm. If relative strength works for other reasons, that would be a nice bonus. But under some assumptions about uncertainty, a randomized weighted majority algorithm is about the most powerful algorithm in existence.
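For anyone unfamiliar with it, here is a toy sketch of the randomized weighted majority idea: keep a weight per "expert" (e.g., hold SPY vs. hold TLT), pick one at random in proportion to the weights, and multiplicatively shrink the weight of any expert that was wrong. The loss rates below are made up for illustration.

```python
# Toy randomized weighted majority: weights shrink multiplicatively on losses,
# so probability mass concentrates on the expert with the lowest loss rate.
import random

def rwm_choose(weights, rng):
    """Pick an expert with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def rwm_update(weights, losses, eta=0.5):
    """Down-weight each expert by (1 - eta) for every loss it incurs (0/1)."""
    return [w * (1 - eta) ** loss for w, loss in zip(weights, losses)]

rng = random.Random(0)
weights = [1.0, 1.0]  # two experts, e.g. "hold SPY" vs "hold TLT"
for _ in range(200):
    # expert 0 is wrong ~70% of the time, expert 1 only ~20%
    losses = [int(rng.random() < 0.7), int(rng.random() < 0.2)]
    weights = rwm_update(weights, losses)
probs = [w / sum(weights) for w in weights]
print(probs)  # nearly all the sampling probability shifts to the better expert
```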

And of course there are considerations of how stationary the data is. It looks to me like stock (and ETF) data is not very stationary. So using relative strength with a short look-back period is like using a restless bandit algorithm, the variant of the multi-armed bandit problem that assumes the data is not stationary: Multi-armed Bandit

But the second thing is that I think volatility can be used as a factor to make predictions about the future direction of a stock (or ETF). Minimum variance and risk parity may be exploiting this fact, albeit in an indirect manner.

The volatile, high-flying stock or ETF is the one most likely to crash going forward. Risk parity reduces the weight of those tickers that are unlikely to outperform. Minimum variance may place no weight at all on such a ticker.

My only point is that what you are doing is a great method and needs no improvement. My attempts to sort out why it works may just be an interest of mine. If I ever tease out why it works, any efforts at marginal improvements based on those insights (e.g., a random forest classifier) may, or may not, be useful.

For now, I think relative strength may work, but not for reasons that are discussed much. And reducing volatility is valuable without anything else going on. But in addition, volatility is probably a factor that is somewhat predictive of the future direction of a ticker. Using risk parity probably helps increase the weights of the assets most likely to outperform in the short term.

Or not. But that is what I am looking at for now. This actually borrows heavily from classical financial theory and consulting someone with a degree in finance (who has fully developed some of these ideas) would be the best bet for most people.

Not a new or alien concept for most (any) professionals. I get that. And surely there are better ways to do it. I also get that. And while they might not like all of the math above, I honestly think they would not be wrong if they said: "You do not need it."



Attachment: Heavily Overfitted Strategy.png


Jan 9, 2022 6:56:52 AM       
Edit 34 times, last edit by Jrinne at Jan 9, 2022 9:42:31 AM