It Happened Again: Returns Changing Randomly

It seems that once again, there was a bad data upload or similar problem.

Last night I started writing a screen. When I began work on the model this morning, the returns were identical, but now, without any changes, the annualized return is 6% lower. Just now I started on a second new screen. I had it open with a backtest showing on the screen; after seeing the first screen's return drop, I re-ran the backtest on the second with no changes, and its return roughly halved.

Please fix.

Thanks,

Dan

Dan, nothing changes on Sunday. There are no reloads, data updates, etc.

It could be that having one window open interfered with the other (multiple browser windows share a common “session” on our servers).

I am not sure what could have caused this. I don’t believe it was an open window issue as I can repeat the problem with only one screener window open. Here’s how I tested it:

The results from the last backtest run are shown on the ‘My Screens’ page, and the dates of that backtest are also pre-loaded into the screen page. So hitting Run Backtest should reproduce the annualized return listed on the ‘My Screens’ page.

To test this, I took my “For Qualitative Analysis - Div” screen. The annualized return listed was 18.3%. I ran the backtest again and got 23% per year. It’s nice to get a higher return, but it seems questionable that the same backtest would produce two very different results. While most screens’ returns do not change, this can be repeated for several screens over several date ranges.

I may be mistaken about how the saved backtest dates relate to the latest return on the My Screens page; I apologize if I am wrong about how this should work.

Not sure where you are seeing 18.3%, nor how to reproduce it. Can you attach a screenshot? Thanks

Unfortunately, I didn’t take a screenshot of that one. I did, however, take a screenshot of some screens I have created, showing the change in returns. For the record, the models I actually use don’t have returns this bad. These screens are mostly unfinished ideas run over short time periods.

Notice the change in two models: “NCAV w positive EPS” and the “Under WC” screen. The dates varied: one was last run 6/15 (which might have been the date of the bad upload), but the other was last run in late May. A third, under Unclassified, called “yet another dividend model”, was run 6/20 but still changed.

Hi Marco,

I hope you had a good holiday weekend! I’d like to bring this back to your attention and hopefully get some insight on what is going on here. Please let me know if the mistake was on my end or if there is anything I can do to help clarify the issue for you.

Thanks,

Dan

I took a look at the backtest. The screen often holds very few stocks, many times just one, other times zero, so a single stock can easily account for the difference. It’s impossible to tell now without the record of transactions.
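
To put rough numbers on how much one pick can matter (a quick sketch in Python with made-up monthly returns, not figures from your actual backtest): in a two-year monthly test that holds a single stock, changing only one month’s pick swings the annualized number dramatically.

```python
# Hypothetical illustration only: the period returns below are invented.
def annualized(period_returns, periods_per_year=12):
    """Compound a list of period returns and convert to an annualized rate."""
    total = 1.0
    for r in period_returns:
        total *= 1.0 + r
    years = len(period_returns) / periods_per_year
    return total ** (1.0 / years) - 1.0

# Two runs identical in every month except one, where a corrected data
# point causes a different (single) stock to be bought that month.
run_a = [0.02] * 23 + [0.15]    # original pick gains 15% that month
run_b = [0.02] * 23 + [-0.20]   # replacement pick loses 20% that month

print(f"run A: {annualized(run_a):.1%}")   # about 34.7% annualized
print(f"run B: {annualized(run_b):.1%}")   # about 12.3% annualized
```

With twenty or more holdings the same single-stock change would barely move the number.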

As to why a single stock may change, there are several possible reasons, from Compustat fixing old data (it happens) to us discovering a problem. Lately we found out that there’s a precision limitation in Compustat for junk stocks that have had multiple reverse splits. It’s a problem because these junk stocks may not have been junk 10 years ago. Take a look at Juniper - JUNP. It was a very real company long ago, as I recall. It has had 1:500, 1:200 reverse splits, etc. Compustat recently deleted some of the older splits (to make “room” for the recent 1:500) because, when multiplied together, they caused an underflow exception. We added them back manually.
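
Here is roughly the failure mode, as a sketch (Python; the six-decimal fixed-precision field width is an assumption for illustration, not the actual Compustat specification): each reverse split shrinks the cumulative adjustment factor, and once the stored value rounds below the field’s precision it becomes zero.

```python
from decimal import Decimal, ROUND_HALF_UP

# Assumed storage format: a cumulative split-adjustment factor kept as a
# fixed-precision decimal with six places (illustrative width only).
FIELD = Decimal("0.000001")

def apply_reverse_split(cum_factor, old_for_new):
    """Fold a 1:N reverse split into the cumulative factor, rounding the
    result to the field's fixed precision after each split."""
    return (cum_factor / old_for_new).quantize(FIELD, rounding=ROUND_HALF_UP)

factor = Decimal("1")
for n in (500, 200, 100):               # a string of large reverse splits
    factor = apply_reverse_split(factor, n)
    print(f"1:{n} -> stored factor {factor}")

# 1:500 -> stored factor 0.002000
# 1:200 -> stored factor 0.000010
# 1:100 -> stored factor 0.000000   <- underflows to zero
```

Once the stored factor hits zero, every split-adjusted price in the older history is garbage, which is why the deleted splits had to be put back by hand.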

The reverse split problem should not affect most simulations. But in the ones where only one stock is held, it can cause huge differences.

Sorry Marco, I forgot about this. Most of the models I write are for mid- to large-cap stocks, so the extreme data scenario probably doesn’t apply often. (The NCAV screen uses small enough stocks that it could well have been affected by this.) Based on the changes in results, it seems that one or two different stocks were likely purchased by every affected model over at least one or two testing periods. If you’ve run into data questions like this with larger companies (i.e., $500M market cap or more), this could very well be the cause.