Changeover to current year's and next year's S&P earnings estimates is a big problem for timing models

Marco

There is a big jump in #spepscy which must be a function of the changeover in fiscal years (see below). But it throws all timing models way off because analysts are lowering their estimates, not raising them. This needs to be addressed ASAP, as rebalances will be affected.

1/23/16 1906.9 4.086946964 116.9872971 126.9026337 113.3904953 120.6726685 90.81519318 2.048 6.134946346
1/30/16 1940.24 4.214482307 119.2370987 131.8431244 112.9021912 122.1695786 90.61605072 1.931 6.145482063


Fed Model.tiff (61.2 KB)

#spepscy has been revised twice already in the last 2 years. Better not to have it in any market timing or hedging rules.

For those subscribers who rely on #spepscy as is, please don’t revise the time series.

If #spepscy can be improved, please create a new time series. If possible, please bring back the old versions of #spepscy as well!

Hugh

This is clearly wrong… the transition to the new fiscal year has created bad data… analysts are cutting their estimates, not raising them… we need smart data management, not legacy bullshit

Earnings estimate data is not continuous like the cash price of oil or the number of Americans with jobs. Earnings estimate data relates to specific quarters or years and must be linked in some fashion to create continuous series. This issue is similar to the challenge faced when creating continuous time series from commodity futures contracts.

There is no absolute current quarter or year. There are discrete time series that relate to quarters (for example Q1 2015, Q2 2015, Q3 2015, etc.) and years (for example 1999, 2000, 2001, etc.), and they are considered current (or not) because of assumptions we make related to the date on which we are evaluating the data. So one reasonable person might presently consider the current quarter for a company that has not released its 4th quarter, 2015 results to be the 4th quarter of 2015, and another reasonable person might consider the current quarter for the same company to be the 1st quarter of 2016. The issue is made more complicated in late January because many companies have released their 4th quarter, 2015 financials but many have not. And of course there are companies whose fiscal year does not end December 31st.

Therefore it’s not that a transition to a new fiscal year has created bad data, it’s the mixing of 2015 and 2016 earnings estimates that - at this moment - is creating results you hadn’t expected.
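The mechanics can be sketched in a few lines. This is a minimal illustration, not P123's actual code: the dollar figures are hypothetical, loosely echoing the thread's numbers (2015 consensus finishing near $117, 2016 consensus near $127), and the function names are made up.

```python
# Hypothetical year-end consensus estimates, roughly matching the thread.
EST = {2015: 117.0, 2016: 127.0}

def current_year_estimate(rolled_to_2016: bool) -> float:
    """Raw series: whichever year happens to carry the 'current' label.
    The value jumps the week the label rolls, even if analysts are cutting."""
    return EST[2016] if rolled_to_2016 else EST[2015]

def blended_estimate(quarters_of_2015_reported: int) -> float:
    """Blended series: scale the 2016 estimate in as 2015 quarters are
    reported, so there is no single-week discontinuity at the roll."""
    w = quarters_of_2015_reported / 4.0
    return (1 - w) * EST[2015] + w * EST[2016]

# The raw series jumps +10 at the roll; the blend moves in four 2.50 steps:
# 117.0 -> 119.5 -> 122.0 -> 124.5 -> 127.0
jump = current_year_estimate(True) - current_year_estimate(False)
```

The point of the blend is exactly the one made above: the jump is an artifact of mixing 2015 and 2016 estimates at the label change, not of bad underlying data.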

How would you like p123 to solve the problem?

Hugh

is it really?

From what I see in the data, there is almost always a spike from January to February; if not, the market is really in trouble.
I don't know exactly, but I think analysts simply publish their estimates for the new year in January, and that spikes the data.
Therefore I use averages on #spepscy plus averages on the price, and that works pretty well.

This year is really foggy though: next year's earnings spike pretty strongly, quarterly earnings (Blend Q) sink heavily, this year's estimates are not great but not bad, and the blended data (Blend CNY) spikes only a little.

Regards

Andreas

Very thoughtful analysis. The reality is that the front panel on P123 shows analyst estimates with green arrows because of the changeover and that is counter to what is really happening on Wall Street. The 2016 S&P estimates have dropped sharply over the past 6 weeks from $127 to $120 and are likely to keep going lower. Am I the only person who sees this disconnect between the green arrows and reality?

The help section defines the arrows as:
"indicating whether the trends of four S&P500 metrics are above or below their 40 week averages. "

Both are currently above their 40-week SMAs. One can easily argue the trend is down, but the arrows just reflect what is happening to date. I don't have an independent way to verify the P123 SMAs for both; I assume they are right. The P123 database now shows:
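The disconnect being described follows directly from the rule in the help text. Here is a minimal sketch of that rule (function names are hypothetical, not P123's code): a value only has to sit above its trailing 40-week average to print green, so a series can fall for weeks and still show a green arrow.

```python
def sma(series, window=40):
    """Simple moving average of the trailing `window` observations."""
    tail = series[-window:]
    return sum(tail) / len(tail)

def arrow(series, window=40):
    """'green' when the latest value is above its trailing SMA,
    'red' otherwise -- the rule the help section describes."""
    return "green" if series[-1] > sma(series, window) else "red"

# Hypothetical weekly series: flat for 40 weeks, then a jump followed
# by four straight weeks of cuts. The arrow is still green, because the
# latest value only has to beat the 40-week average, not last week.
falling = [100.0] * 40 + [127.0, 125.0, 122.0, 120.0]
print(arrow(falling))  # prints "green"
```

This is why the arrows can look "counter to what is really happening on Wall Street": they measure level versus a long average, not the direction of recent revisions.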


Capture.JPG

http://www.yardeni.com/pub/peacockfeval.pdf

I don’t see a problem with the P123 data.

The current S&P 500 bottom-up reported earnings estimate is $121.88:
12/31/2016 $31.24
09/30/2016 $31.08
06/30/2016 $30.75
03/31/2016 $28.82

Last month it was $117.98
12/31/2016 $31.42
09/30/2016 $30.28
06/30/2016 $29.02
03/31/2016 $27.26

I don’t think there’s a problem. This is the normal jump that occurs when the majority of companies' NextY becomes CurrY (Q4 is reported around Jan-Feb).

We should just show the trend of the blended series to avoid confusion ( blended series is still below the moving average ).

NOTE: the literature on the “fed model” Risk Premium (RP) uses CurrY. The blended one would be more “correct” IMHO. Maybe we’ll create an RP using the blended estimates as well.

Dr. Yardeni’s default chart (http://www.yardeni.com/pub/peacockfeval.pdf) on page 3 is the “52 week forward,” which is smooth and looks a lot like our blended series. It’s defined as a “time weighted avg of curr & next year,” which sounds a lot like what we do. Ours looks jumpier (every three months) than theirs because we use quarters to scale NextY into CurrY; they are likely scaling in NextY on a weekly or daily basis. Shouldn’t be too hard to do.

Cool!

marcc is raising a different issue. He’s concerned that the number is jumping at a time when analysts are reducing estimates.

This raises the difference between an econometric indicator versus a sentiment indicator. The estimate series functions, in timing models, as an econometric indicator. Up is good. Down is bad. We don’t care how the sausage gets made. All we care about is the final number and like many econometric measurements, it does tend to rise year after year more often than not. Whether the market leads that, moves with it, or lags it is an issue for study and testing.

That analysts are cutting estimates now is a matter of sentiment. It’s an important one, but it’s not part of the S&P consensus series – except to the extent it influences the final sausage.

Should there be a sentiment measure along these lines? Back when I was at Reuters, I floated the idea of some sort of Estimate Revision Index (this was at a time when all sorts of indexes and derivatives based on them were coming out – home price indexes, weather indexes, good governance indexes, etc.). But Reuters being Reuters, it went nowhere.

Those who are interested in adding such a sentiment indicator for timing models might want to try to build it in the Custom Series tool.
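One simple form such a Custom Series could take is the week-over-week percent change of the same fiscal year's consensus, so the annual label roll can't masquerade as a revision. This is only a sketch of one possible revision measure, not the index floated at Reuters; the numbers echo the thread's approximate $127-to-$120 slide in 2016 estimates.

```python
def revision_index(this_week, last_week):
    """Week-over-week percent change of the SAME fiscal-year consensus.
    Negative values mean analysts are, on net, cutting estimates,
    regardless of any jump in the 'current year' label."""
    return (this_week - last_week) / last_week * 100.0

# Hypothetical: 2016 consensus slid from about $127 to about $120.
print(round(revision_index(120.0, 127.0), 2))  # prints -5.51
```

An index like this would flash negative through a changeover like the current one, even while the raw #spepscy series jumps upward.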

Hi Marc,

That is a wonderful idea and definitely worthy of investigation.

marcc might also be concerned about the following:

#spepscy showed a value of 116.99 last week, was 119.24 on Saturday, and is 117.49 this morning.

How is that possible?!

Hugh

It would be awesome to have a time-weighted average of consensus operating earnings estimates using weekly data (and weekly weighting, not quarterly), like Yardeni Research.

:slight_smile: