Limits raised for simulation and live strategy positions

We have raised the limits for strategy positions for simulations and live strategies to 5,000 positions if you have an ultimate-level subscription or are a tier 2 or tier 3 research provider or asset manager.

This allows you to create your own benchmark and track it live; create a live tracking system for a percentile range of a ranking system; or simulate a strategy that might, at some point, hold more than a few hundred stocks.

Whoa! Game changer for me.

The new API is another way to get data—probably better ultimately.

But this is an effective way to download rank data and returns.

Thank you Yuval and P123!

Jim

I am sorry - I do not understand any of this :slight_smile: but one line Yuval said intrigued me.

I usually create custom universes before I do any ranking. Ideally, I would like the custom universe's performance to be the benchmark for my model. Does this give me a way to do that? If so, how do I go about doing it?

Thank you P123 for all the great work!

Not really. Here’s what I meant.

I use a custom universe benchmark by backtesting a screen that simply holds everything in my custom universe with three-month rebalancing. I then download the performance into Excel.

Now I can do that in a live strategy so that I don’t have to rerun my screen in order to get the latest benchmark returns. I can even track that strategy in Manage and it will give me the live performance every day so that I don’t have to wait for the evening update to calculate it. I can now follow my own custom benchmark live.
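For anyone who wants the same equal-weight universe benchmark offline, here is a minimal sketch in pandas. It assumes a hypothetical export of per-stock period returns (the column names are made up, not P123's actual export format):

```python
import pandas as pd

# Hypothetical export: one row per (date, ticker) with that period's return.
df = pd.DataFrame({
    "date":   ["2023-01-06"] * 3 + ["2023-01-13"] * 3,
    "ticker": ["AAA", "BBB", "CCC"] * 2,
    "ret":    [0.01, -0.02, 0.03, 0.00, 0.02, -0.01],
})

# Equal weight means the benchmark return each period is simply the
# cross-sectional mean of the constituents' returns.
bench = df.groupby("date")["ret"].mean()

# Compound into an equity curve starting at 1.0.
curve = (1 + bench).cumprod()
print(curve)
```

The same idea works whatever the rebalance frequency; only the rows you feed in change.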

For some time now we’ve been discussing the prospect of allowing users to create their own custom benchmarks to replace the ones we offer for their strategies and screens, but that is quite a difficult and time-consuming project. In the worst-case scenario, it would be like running two very different simulations at the same time. Instead, we’re working on expanding the number of benchmarks that we offer.

I am not sure I understand this completely either. But getting more data from fewer sims (or even just one sim) is unquestionably a good thing, no matter the details of the implementation.

Like Yuval, I like to use a custom universe. Specifically, I like to use the universe itself as the benchmark.

I have used sims to do this before, but it can take quite a few. I have not tried it since the limits were raised, but I think this will make it easier, with more data per sim (which translates to fewer sims).

Whether for what Yuval is doing, what I am doing, or whatever people are doing with the API downloads, this should be helpful. Getting the most appropriate benchmark is just the beginning of how this can be useful, IMHO.

Thank you again.

[quote]
For some time now we’ve been discussing the prospect of allowing users to create their own custom benchmarks to replace the ones we offer for their strategies and screens, but that is quite a difficult and time-consuming project. In the worst-case scenario, it would be like running two very different simulations at the same time. Instead, we’re working on expanding the number of benchmarks that we offer.
[/quote]Why not allow an aggregate series as a benchmark? For example, here is equal weight S&P 1500.

Bump. Is this doable?

Benchmarking is vital. How else do you know when to switch strategies or allocations?

Everyone has different styles for benchmarking. You’ve created a series that rebalances to equal weight daily. Personally, it would never have occurred to me to create a series that rebalances daily.

If we allow users to use their own aggregate series as benchmarks, what happens when the series has not been run for a while or the date on which it started is later than the date of the screen or simulation that the user runs? And there are a lot of aggregate series that one could create that simply won’t work as benchmarks. For example, how would one measure alpha and beta against a benchmark based on the ten-year treasury rate? Linear regression doesn’t work unless the benchmark has some correspondence with the equity performance. Similar problems could arise.
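To illustrate the regression point with a toy example (all numbers synthetic): alpha and beta are just the intercept and slope of a least-squares fit of strategy returns on benchmark returns, which only means something when the benchmark actually varies with equity performance.

```python
import numpy as np

rng = np.random.default_rng(0)
bench = rng.normal(0.002, 0.02, 260)                     # weekly benchmark returns
strat = 0.001 + 1.2 * bench + rng.normal(0, 0.01, 260)   # a strategy that tracks it

# Alpha and beta come from the one-variable fit:
#   strat = alpha + beta * bench + noise
beta, alpha = np.polyfit(bench, strat, 1)
print(f"alpha={alpha:.4f} beta={beta:.2f}")  # beta lands near 1.2

# Against a near-constant series (a treasury rate, say), the regressor
# has almost no variance, so the slope is essentially noise.
```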

We need to think this out carefully and plan accordingly. Allowing users to use aggregate series as benchmarks may indeed be the best way to go, but I don’t know yet.

Yuval,

I am afraid that with this limited workaround, benchmarking will not get the attention that it deserves. Please put yourself in my shoes. Being on the legacy plan, I don't have access to your new feature. These past two years have highlighted the fact that cap-weighted indexes are a poor benchmark for us. Portfolio123 is about empowering us with knowledge. But how can I make good decisions without a valid benchmark?

Yuval, we know that you can be quite creative. Please find a way to give us proper benchmarks. I don’t care how. It doesn’t have to be a series. Or it can be some souped up version of the series tool that automatically updates itself. I believe that you can figure out a way to do it.

The bottom line: Better benchmarking → better decisions → better success for us → more subscribers for you.

Chaim,

IMHO, people should mostly forget about any machine learning unless they can get an equally weighted benchmark that is similar to their universe. Obviously, you, using your techniques, can use this too.

But focusing on machine-learning for a moment, I do not see how the recent shifts in the market from small-caps to large-caps (mega-caps?) wouldn’t completely skew the results of any machine-learning/AI/statistical methods.

Anyway, I can manipulate the data to get the equally-weighted stocks from my universe as the benchmark at home. Just takes a little concatenating of the data I already have from the sims.

Is P123 committed to not doing that or at least not making it easy? Seems like it at times. Other times Marco seems like he wants this to work. But without this I think the machine-learning/AI effort will go nowhere.

People will not get meaningful results using machine learning/AI and will be puzzled as to why, if they are not using a good label.

For now, I do not think anyone is claiming that getting the array with column headers date, ticker, factor1, factor2, …, factorN, and excess returns relative to the equally weighted universe over the rebalance period is any fun. But some serious people are finding a way to do this at home.
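As a sketch of what building that array at home can look like (hypothetical data and column names), the label is just each stock's return minus the equal-weight mean of its universe over the same rebalance period:

```python
import pandas as pd

# Hypothetical per-stock data: factor ranks plus next-period returns.
df = pd.DataFrame({
    "date":    ["2023-01-06"] * 3,
    "ticker":  ["AAA", "BBB", "CCC"],
    "factor1": [90, 50, 10],
    "factor2": [20, 80, 60],
    "ret":     [0.03, 0.01, -0.01],
})

# The label: each stock's return minus the equal-weight universe
# return for the same date.
df["excess"] = df["ret"] - df.groupby("date")["ret"].transform("mean")
print(df[["date", "ticker", "factor1", "factor2", "excess"]])
```

By construction the labels sum to zero within each date, which is exactly what makes them a market-neutral target for a learner.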

Chaim, how much time do you spend doing this—if it is possible at all for you?

We keep hearing an API that will do this is just days away. Probably true. So no worries.

Anyway, even though we use techniques that seem different (at least on the surface) I could not agree with you more about this. And again, unless members can at least find a way to do this at home using the screener, Python, spreadsheets (or whatever) then skip the machine learning/AI would be my recommendation.

Just FYI for anyone interested in doing or promoting machine-learning/AI. I can live with whatever P123 decides to do. I have moved to ETFs and want to help Steve (and P123 if they remain interested) should they decide that they want to continue to pursue this. But I guess it could become so easy to get just that one single array that I might start stock-picking again. Could happen.

As they say in the musical Annie about the future of machine learning: “….Tomorrow, Tomorrow’s just an array away”

Best,

Jim

[quote]
I do not see how the recent shifts in the market from small-caps to large-caps (mega-caps?) wouldn’t completely skew the results of any machine-learning/AI/statistical methods.
[/quote]That’s just it! With equal weight benchmarks composed of the same universe as your model, you can detect a shift in the market and pull the plug on a model before losses pile up. I have been doing a lot of that over the past two years, but I would have had much better results if this platform had built-in support for it.

Yes, I can pull the data into Excel. But it’s too much work and therefore not practical for most of us to do that on a regular basis.

Thanks for the comments.

In the short term, we will be adding the following benchmarks to our Research list:

S&P Small Cap (IJR)
S&P Mid Cap (IJH)
NASDAQ 100 3X Long (TQQQ)
NASDAQ 100 3X Short (SQQQ)
VIX (VXX)
Russell 2000 Value (IWN)
Russell 2000 Growth (IWO)

Custom benchmarks is a longer-term project. Here are five possibilities for how they could work. What are your thoughts on these?

  1. At the same time as the engine runs the screen/ranking system/simulation, it would generate a series based on the universe in use (equal weight) with quarterly rebalancing and no slippage.

  2. Allow users to use the equity curve of another screen backtest as a benchmark. Users could then create any benchmark they want by running a screen backtest. This immediately presents problems, though, for live strategies, as screens are not automatically updated.

  3. Same as above but using aggregate series. This is much less problematic since series are automatically updated when used in a live strategy.

  4. Create a new tool that enables users to generate custom benchmarks. This would be the most work for us and take the longest.

  5. Enable all ETF tickers as benchmarks. This would probably be the easiest to do.

I’m favoring #1, but please let me know your thoughts. If #2 or #3 were done, then there would be error messages anytime you tried to run a screen or simulation and your benchmark dates weren’t covered (i.e. if your screen started in 2005 and your aggregate series started in 2007).

Personally, for years I’ve been pulling screen results into Excel to use as benchmarks. It’s a bit of a hassle, I admit.

Yuval,

This seems almost perfect.

Honest question: how much does one lose with the quarterly rebalance? It seems there might be some stocks that move in or out of the universe that you do not pick up immediately, but shouldn't it track a weekly rebalance closely? For stocks that remain in the universe, does a weekly versus quarterly rebalance make much difference?
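To put rough numbers on that intuition, here is a toy simulation (purely synthetic, independent returns) comparing a weekly equal-weight rebalance to one that lets weights drift within each quarter:

```python
import numpy as np

rng = np.random.default_rng(1)
rets = rng.normal(0.002, 0.04, size=(52, 50))  # 52 weeks x 50 stocks

# Weekly equal-weight rebalance: each week's return is the
# cross-sectional mean, compounded over the year.
weekly = np.prod(1 + rets.mean(axis=1))

# Quarterly rebalance: within each 13-week block, hold the stocks
# (weights drift), then reset to equal weight at quarter end.
quarterly = 1.0
for q in range(4):
    block = rets[q * 13:(q + 1) * 13]
    quarterly *= np.prod(1 + block, axis=0).mean()  # mean of per-stock growth

print(weekly, quarterly)  # the two typically stay close
```

Real universes add correlation and membership turnover, so this only illustrates the drift effect, not entry/exit timing like the Tesla case.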

This would track something like the S&P 500 or Russell 3000 (not based on liquidity or volume) even closer than my usual universe does, perhaps missing the entry of Tesla by a limited number of weeks.

I would be interested in what Chaim and others say, but I think this might be my preference for machine-learning/AI at first blush.

Thanks.

Jim

[quote]
Custom benchmarks is a longer-term project. Here are five possibilities for how they could work. What are your thoughts on these?

  1. At the same time as the engine runs the screen/ranking system/simulation, it would generate a series based on the universe in use (equal weight) with quarterly rebalancing and no slippage.

  2. Allow users to use the equity curve of another screen backtest as a benchmark. Users could then create any benchmark they want by running a screen backtest. This immediately presents problems, though, for live strategies, as screens are not automatically updated.

  3. Same as above but using aggregate series. This is much less problematic since series are automatically updated when used in a live strategy.

  4. Create a new tool that enables users to generate custom benchmarks. This would be the most work for us and take the longest.

  5. Enable all ETF tickers as benchmarks. This would probably be the easiest to do.

I’m favoring #1, but please let me know your thoughts. If #2 or #3 were done, then there would be error messages anytime you tried to run a screen or simulation and your benchmark dates weren’t covered (i.e. if your screen started in 2005 and your aggregate series started in 2007).

Personally, for years I’ve been pulling screen results into Excel to use as benchmarks. It’s a bit of a hassle, I admit.
[/quote]Yuval, my preference is #1 as well. But if #2, #3, or #4 is easier and will get done faster, then by all means do those.

The main thing is to get it done.