Best way to optimize weight of nodes?

Is there a statistics best practice for coming up with weights for nodes? Some of the nodes seem inter-related (i.e., some nodes “work together” with other nodes), which makes it very hard to determine optimal weights.

Does anyone have any ideas? I’m not sure I’ve explained my problem very well, let me know if you don’t understand…

You can use the ranking system optimizer to create test cases.

Back in college, I engaged in some econometric analysis to see statistically which of those factors that were being examined were the biggest contributors to observed returns. We have nothing remotely like this on P123.

I don’t think there’s a “statistics best practice.” I’ve been trying to come up with one for years.

Here’s what I’d recommend. Assign every node a weight divisible by 4. Make changes in the weights to see if returns improve, but always keep every node’s weight divisible by 4. Once you’ve optimized the weights through slow trial-and-error, do it again but make each node’s weight divisible by 2. Stop there.
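The coarse-then-fine search described above can be sketched as follows, assuming a hypothetical `backtest_return()` scoring function (stubbed here) in place of a real ranking-system backtest:

```python
# Sketch of the "divisible by 4, then divisible by 2" weight search.
# backtest_return() is a made-up stand-in for an actual backtest score.
from itertools import product

def backtest_return(weights):
    # Hypothetical scoring stub: pretends (50, 25, 25) is the best mix.
    target = (50, 25, 25)
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def search(step, total=100, n_nodes=3):
    """Enumerate weight vectors summing to `total` in multiples of `step`
    and return the best-scoring one."""
    grid = range(0, total + 1, step)
    candidates = [c for c in product(grid, repeat=n_nodes) if sum(c) == total]
    return max(candidates, key=backtest_return)

coarse = search(step=4)  # pass 1: every weight divisible by 4
fine = search(step=2)    # pass 2: every weight divisible by 2
```

In practice the passes would be manual trial-and-error in the ranking system rather than exhaustive enumeration; the point of the two step sizes is to keep the search space coarse enough that you can't micro-fit the weights.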

I optimize my systems using this method on subsets of my universe and subsets of my time period, always with a lot more holdings than I’m actually going to invest in, and then take the average weights from all of those systems.
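A minimal sketch of that averaging step, with made-up optimized weights from three hypothetical subset runs:

```python
# Hypothetical optimized weights from runs on different universe/date subsets.
subset_weights = [
    (52, 24, 24),  # e.g. large caps, first half of the period
    (44, 32, 24),  # e.g. small caps, first half of the period
    (48, 28, 24),  # e.g. large caps, second half of the period
]

# Average each node's weight across the runs, then renormalize to 100.
n_runs = len(subset_weights)
avg = [sum(w) / n_runs for w in zip(*subset_weights)]
final = [round(100 * w / sum(avg)) for w in avg]
```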

This is probably way too much work for most people, and I often doubt it’s actually worth the effort.

OK thanks for the suggestions

I would be really careful here. ClariFi had a ‘genetic mutation’ feature where you could optimize each and every factor and group of factors to get the best-fitting back-test performance possible. And the head trainer, whom I worked with extensively for 9 months, chastised me every time I used it. It was easy to make a Sharpe 3 product that failed out of sample. It was over-fit junk.

His recommendation was to keep groupings equal-weight and the factors within each node equal-weight. If there was a strong case for why one should weigh more than another - fine. But don’t tweak it by small amounts.

For instance, you may want a val-mo rank with lower volatility, so you might assign the weights 40% value, 40% momentum, and 20% volatility. You will also often find that short-term technical indicators (5- or 10-day stochastics) lead to higher turnover, so you may decide to include TA but give it a very low weighting, like 10%, to keep turnover down.
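As a minimal sketch of that 40/40/20 example, a composite score can be computed as a weighted sum of per-node percentile ranks; the tickers and rank values below are invented for illustration.

```python
# Combine per-node percentile ranks (0-100) into a composite score using
# the 40% value / 40% momentum / 20% volatility weighting from the example.
weights = {"value": 0.40, "momentum": 0.40, "volatility": 0.20}

# Made-up per-stock node ranks, purely for illustration.
stocks = {
    "AAA": {"value": 90, "momentum": 60, "volatility": 80},
    "BBB": {"value": 50, "momentum": 95, "volatility": 40},
}

def composite(ranks):
    return sum(weights[node] * ranks[node] for node in weights)

scores = {ticker: composite(ranks) for ticker, ranks in stocks.items()}
```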

Every time the trainer gave me a slap for using the optimization function, I felt annoyed, as if he were trying to prevent me from making a better model. But those curve-fit models were not better. Not only did they perform worse than the non-fit models, it was also demoralizing to create a Sharpe 3+ backtest followed by sub-Sharpe 1 live performance.

Great thread and a good time to once again bring up the concept of “Dynamic Node Weighting”, something I’ve wanted for a decade.

Could be as simple as an eval based on the weight of another node, or as complicated as calling the node rank from a separate ranker and assigning a dynamic weight based on a formula.

I’ve been able to do this in the screener, which takes some heavy lifting; I’d rather streamline it in a ranking system…
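A hypothetical sketch of what dynamic node weighting could look like (not something ranking systems support today): one node’s weight is computed by a formula from an external signal, and another node absorbs the remainder. All names and ranges here are made up.

```python
# Sketch of dynamic node weighting: the momentum node's weight scales with
# a hypothetical market-trend signal, and the value node takes the rest.
def dynamic_weights(trend_signal):
    """trend_signal in [0, 1]; higher = stronger market uptrend."""
    momentum_w = 0.20 + 0.40 * trend_signal  # ranges 20%-60% with the trend
    value_w = 0.80 - 0.40 * trend_signal     # remainder goes to value
    return {"value": value_w, "momentum": momentum_w}

def dynamic_rank(node_ranks, trend_signal):
    """Weighted composite of node ranks under the current dynamic weights."""
    w = dynamic_weights(trend_signal)
    return sum(w[node] * node_ranks[node] for node in w)
```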

The optimizer tool is decent for this if you have access. I use a weight matrix to test 20 scenarios at once. It’s clunky to set up the matrices by hand, so I copy-paste from Excel.

Given the 20-scenario limit, my recommendation is to test 2 or 5 nodes at once per timeframe, in order to maximize the number of scenarios (20 is a multiple of both 2 and 5). So if you test two timeframes in a single sim, you could run a 2x5 weight matrix at most.
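A small sketch of generating the rows for a 2x5 weight matrix (three nodes: node A gets one of 2 settings, node B one of 5, node C the remainder so each row sums to 100), formatted as tab-separated text for pasting via Excel. The weight values and layout are assumptions, not the tool’s exact input format.

```python
# Build a 2x5 grid of weight scenarios and dump it as TSV for copy-paste.
node_a = [30, 50]               # 2 settings for node A
node_b = [10, 20, 30, 40, 50]   # 5 settings for node B

# Node C takes whatever is left so every scenario sums to 100.
scenarios = [(a, b, 100 - a - b) for a in node_a for b in node_b]

tsv = "\n".join("\t".join(str(w) for w in row) for row in scenarios)
print(tsv)  # paste into a spreadsheet, then into the optimizer
```

With two timeframes in the sim, these 10 rows run twice, using the full 20-scenario budget.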

It’d be nice to do some true parametric optimizations. Unfortunately, we only get returns from this tool, not variances and covariances, so mean-variance optimizations (MVOs) aren’t an option at this point.

I had planned at one point to try to get this data from the transaction logs of the ranking simulations, but it turns out these are difficult to parse. Probably not going to ever happen for me at this point.

Yea would be nice.

Or use those metrics for cross-validation.

Would be nice. Nice and professional.

-Jim