Expressing Stability

Hi Everyone,

I guess this has already been addressed at some point in the forum, but how would you express stability? It could be for earnings, ROE, etc. For example, I want a low price to 5-year average earnings, but I want to sort out those whose earnings are all over the place, which distorts the average.

The only ways I can think of are doing a linear regression and then adding a maximum standard error criterion, or taking an average and looking for those with the lowest standard deviation. But there should be a more straightforward way to do this.

Thanks,

Standard deviations seem to work well. Both q/pyq and sequential-quarter variances seem to have merit, imho. Comparing to industry changes might also be helpful. LoopStdDev()

Echoing the same.

First thought is about problem framing: do you want to express the cross-sectional stability of some data (i.e., variability of the group or of one within a group), or are you concerned with a time-series (i.e., variability of one versus itself)? Doing both (i.e., variability of a grouped time series) is more difficult, but not impossible.

Regardless, standard deviation might be sufficient for normalized data (i.e., pretty much any ratio like returns, P/E, ROE, etc.), whereas coefficient of variation (standard deviation over the mean) is probably more appropriate for top level items (i.e., pretty much anything else).
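To make the distinction concrete, here is a minimal Python sketch (illustrative only, not P123 syntax; the function name and sample numbers are my own). It shows why the coefficient of variation is the better choice for top-level items: two EPS series with identical absolute wiggle but different levels get the same standard deviation, while CV correctly flags the low-level series as less stable.

```python
import statistics

def coefficient_of_variation(values):
    """Standard deviation scaled by the mean: a dimensionless dispersion
    measure, comparable across series at different levels."""
    return statistics.stdev(values) / statistics.fmean(values)

# Two companies with the same absolute EPS wiggle but very different levels:
eps_small = [1.0, 1.2, 0.9, 1.1]      # swings are large relative to the level
eps_large = [10.0, 10.2, 9.9, 10.1]   # identical swings, tiny relative to the level

# stdev alone calls them equally "unstable"; CV separates them.
```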

For example, one could easily implement cross-sectional coefficient of variation as 1/ZScore("formula"). One could also attempt to recover the non-extant function FStdDev by ("Data" - Aggregate("formula"))/ZScore("formula"). For example:

(Ln(Sales(0,ANN)/Sales(1,ANN)) - Aggregate("Ln(Sales(0,ANN)/Sales(1,ANN))",#Industry,#Avg,False,#Exclude,TRUE,FALSE)) / ZScore("Ln(Sales(0,ANN)/Sales(1,ANN))",#Industry,0,0,10)
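The identity behind that trick can be checked in a few lines of plain Python (a sketch, not P123 syntax; the `zscore` helper and sample figures are mine). Since z = (x − μ)/σ, dividing the demeaned value by its z-score recovers the cross-sectional standard deviation σ, which is exactly what the Aggregate/ZScore combination above reconstructs.

```python
import statistics

def zscore(x, values):
    """Cross-sectional z-score of x within its peer group
    (population stdev is one possible convention)."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return (x - mu) / sigma

# Illustrative cross-sectional data: one growth figure per peer company.
growth = [0.05, 0.12, -0.03, 0.08, 0.02]
mu = statistics.fmean(growth)
sigma = statistics.pstdev(growth)

x = growth[1]
# (x - mean) / zscore(x) recovers the group's standard deviation:
recovered_sigma = (x - mu) / zscore(x, growth)
```

(This requires x not to equal the group mean, or the z-score in the denominator is zero.)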

That said, statistics based on mean squared residuals are sensitive to outliers and can give misleading impressions of dispersion if their assumptions are violated. Winsorizing/trimming is an option if you’re concerned about outliers. At best, trimming/Winsorizing removes anomalous points from the sample. However, this won’t compensate for violated assumptions, such as that P/E will be normally distributed (it won’t, and if anything E/P will be better behaved). Moreover, outliers are still informative data.
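A quick sketch of winsorizing in Python (hedged: the percentile convention below is one simple choice among several, and the `winsorize` helper and ROE numbers are mine). One wild quarter inflates the standard deviation of an otherwise steady series; clamping it to the chosen quantile, rather than discarding it, pulls the dispersion estimate back toward the bulk of the data.

```python
import statistics

def winsorize(values, lower=0.05, upper=0.95):
    """Clamp values below the `lower` quantile (and above the `upper`
    quantile) to those quantile values, rather than dropping them."""
    s = sorted(values)
    n = len(s)
    lo = s[int(lower * (n - 1))]
    hi = s[int(upper * (n - 1))]
    return [min(max(v, lo), hi) for v in values]

# One anomalous quarter inflates the stdev of an otherwise steady ROE series:
roe = [0.11, 0.12, 0.10, 0.13, 0.95, 0.12, 0.11, 0.10, 0.12, 0.11]
```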

If we had something like FPercentile() (i.e., the inverse FRank()), I would say robust estimators like interquartile ranges could be used to supplement traditional statistics (see this request: https://www.portfolio123.com/feature_request.jsp?view=my&cat=-1&featureReqID=1412). Alternatively, one could implement other types of robust estimators such as median absolute deviation utilizing LoopMedian().
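For reference, median absolute deviation is only a few lines in plain Python (a sketch of the standard definition; the EPS numbers are made up). The point of the example: one outlier quarter blows up the standard deviation but barely moves the MAD.

```python
import statistics

def median_abs_deviation(values):
    """Median absolute deviation from the median: a robust spread
    estimate that is largely insensitive to a few outliers."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

# A steady EPS series with one outlier quarter:
eps = [1.0, 1.1, 0.9, 1.05, 1.0, 8.0]
# stdev is dominated by the 8.0; MAD stays near the typical quarter-to-quarter spread.
```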

Anyway, I’m just spitballing here. Hopefully it helps.

EDIT:

I swear I didn’t look at Wikipedia to get my response, but one entry on robust statistics expresses the same ideas almost verbatim:

For EPS, look at the factor EPSSTABLEQ

EPS Coefficient of Variation.
It is calculated by taking the standard deviation of the 20 most recent quarterly EPS values and dividing by the mean.

https://www.portfolio123.com/doc/doc_detail.jsp?factor=EPSStableQ&popUpFullDesc=1
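A Python sketch of the calculation as that factor description reads (an assumption on my part that this matches P123's internal implementation; the helper name and sample series are mine):

```python
import statistics

def eps_cv(quarterly_eps):
    """EPS coefficient of variation as described for EPSStableQ:
    stdev of the 20 most recent quarterly EPS values divided by their mean.
    Input is oldest-to-newest; only the last 20 quarters are used."""
    recent = quarterly_eps[-20:]
    return statistics.stdev(recent) / statistics.fmean(recent)

# A steady series versus one that whipsaws around the same mean:
steady = [0.50 + 0.01 * (i % 3) for i in range(20)]
jittery = [0.50 + (0.40 if i % 2 else -0.40) for i in range(20)]
```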

In addition to statistical solutions, you may want to consider fundamental approaches.

Stability is not something that just happens. It occurs or does not occur for reasons that are often tied directly to the fundamental characteristics of the company itself (usually relating to the stability of the business itself – toothpaste is more stable than gold mining) or the operating structure of the company (having a heavy percentage of costs be fixed tends to produce big swings in earnings for small changes in sales).

If you want a low P/5yrEPS ratio that is less likely to be distorted by crazy EPS numbers, you may want to pair that rule with something like Rating("Basic: Quality")>80. You can work directly with ROE, Debt, Margin, etc., but a multifactor ranking system (whether the p123 built-in I suggested or something you design) can give you a lot with just one rule. And you might even go the other way; rather than seeking high-quality stocks, you may choose to just worry about eliminating dumpster fires: Rating("Basic: Quality")>25.

I like the fundamental approach because it supplies a logical justification for predicting future volatility.

I’ve recently found that the judicious use of AltmanZ helps with dumpster fires. YMMV.

Walter

I prefer absolute deviation to standard deviation when looking for stability. For one thing, it’s more intuitive; for another, just one outlier will throw standard deviation way off. So, for example, for sales stability, one formula might be LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0)/LoopAvg("Sales(Ctr,TTM)",20,0), with lower numbers better.
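That formula translates to a short plain-Python sketch (illustrative, not P123 syntax; I'm assuming newest-first series to mirror the Ctr offsets, and the helper name and sample data are mine). The numerator sums the absolute quarter-to-quarter changes; the denominator normalizes by average TTM sales; lower is steadier.

```python
def sales_stability(q_sales, ttm_sales, periods=20):
    """Rough analogue of
    LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0) / LoopAvg("Sales(Ctr,TTM)",20,0).
    q_sales needs periods+1 quarterly values, newest first;
    ttm_sales needs `periods` TTM values, newest first.
    Lower values indicate steadier sales."""
    num = sum(abs(q_sales[i] - q_sales[i + 1]) for i in range(periods))
    den = sum(ttm_sales[:periods]) / periods
    return num / den

# Perfectly flat quarterly sales score 0; an 80/120 whipsaw scores much higher:
steady_q = [100.0] * 21
jittery_q = [80.0, 120.0] * 10 + [80.0]   # any 4 consecutive quarters sum to 400
flat_ttm = [400.0] * 20
```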

I’d like to add that stability measures are important when your model is relying a good deal on recent earnings reports. For example, a company that makes its money off of patents is going to have extremely irregular revenue depending on the outcomes of its lawsuits. So without a stability measure your model might report very nice sales and EPS increases one quarter and devastating decreases the next. Adding a stability factor will lessen the impact that these companies will make to your model.

In addition, there are some measures–the ratio of accounts receivable to sales, for instance, or the cash conversion cycle–that simply should not be varying wildly from year to year. Large changes in those measures MIGHT be warnings that the company is engaging in earnings manipulation (or there might be other causes).

Yuval,

LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0)/LoopAvg("Sales(Ctr,TTM)",20,0)

Did you intend to use 20 Qtrs w/ 20 TTMs?

I’ve used a variant of your formula. The difference is in the denominator; my form would substitute Sales(0,Qtr)-Sales(20,Qtr) for LoopAvg("Sales(Ctr,TTM)",20,0). The more monotonic the data, the higher the ratio (with the max/min being +1/-1).

Walter

EDIT: In my formula, LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0) would be in the denominator - i.e. (Sales(0,Qtr)-Sales(20,Qtr))/LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0)
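A Python sketch of that corrected ratio (again not P123 syntax; newest-first ordering and the function name are my assumptions). It divides the net change over 20 quarters by the total path length travelled, so a strictly monotonic series scores exactly ±1 and a trendless whipsaw scores near 0.

```python
def monotonicity(q_sales, periods=20):
    """Analogue of
    (Sales(0,Qtr)-Sales(20,Qtr)) / LoopSum("Abs(Sales(Ctr,Qtr)-Sales(Ctr+1,Qtr))",20,0).
    q_sales holds periods+1 quarterly values, newest first.
    +1 for strictly rising sales, -1 for strictly falling, near 0 for churn."""
    net = q_sales[0] - q_sales[periods]
    path = sum(abs(q_sales[i] - q_sales[i + 1]) for i in range(periods))
    return net / path

# Newest-first: sales grew by 1 every quarter -> ratio of exactly 1.0.
rising = list(range(120, 99, -1))
# A 100/110 whipsaw with no net change -> ratio of 0.0.
choppy = [100, 110] * 10 + [100]
```

(As with any ratio of differences, a perfectly flat series makes the denominator zero, so a guard would be needed in practice.)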

I could see that. Notwithstanding the credit-risk rhetoric, Altman is really just another quality ranking system, although in that case, it’s set up as a scoring system. You could also use the Piotroski F-score and the Beneish M-score or TATA (total accruals to total assets; the foundation of earnings-quality work is based on the notion that greater reliance on accruals signals lower levels of persistence).

How can sales be negative?

And isn’t the entire point of assessing stability to weed out companies that, for example, have jittery EPS figures? Why would you want to flip the sign and help these?

You must be misreading my formula. Sales aren’t negative, but the difference between one quarter and the next might be. Standard deviation squares the difference to get rid of the negative; I instead use absolute deviation, i.e., the absolute value. And the lower the LoopSum, the better, so I’m not helping companies with jittery figures, I’m lowering their rank.

Ah thanks for clarifying, that makes sense. In your formula did you mean to compare the QTR deviation to TTM deviation or is that a typo?

From a ranking standpoint it likely makes little practical difference whether the denominator is average TTM sales over the past 20 periods or average quarterly sales over the past 20 periods. You can change the calc and compare results, but I doubt there would be much difference in the rankings.