Economic Significance of Special Items and Non-operating Income

Dear P123 community,

What is the economic significance of the two line items Special Items (“SpcItems()”) and Non-operating Income (“ExpNonOp()”)?

According to the Compustat financial model, the following relationship should hold in most cases:

Portfolio123 Codes
Sales()
  - CostG()
  - SGandA()
  - DepAmort()
    = OpInc() [or OpIncAftDepr()]
  - IntExp()
  + ExpNonOp()
  + SpcItems()
    = IncBTax()
  - IncTaxExp()
  - minority interest (missing from P123)
    = NetIncBXor()

In fact, up to OpInc(), you can get about 98% matching across the whole Compustat universe, depending on how RandD() is treated.

So we all agree that OpInc() and NetIncBXor() are important indicators of profitability (more or less, depending on the context). But in order to get to net income, we also have to include what I would consider to be “orphan” line items.

So, my question is what is the economic significance, if any, of these orphan line items?

E.g., what do these things tell us about historical costs, health, cash flows, accountancy, etc.? And how do we capture and exploit this information in a systematic manner?

David -

There’s a good thread on this subject here: https://www.portfolio123.com/mvnforum/viewthread_thread,9676_offset,0.

I also remember reading a good academic study of special items, and it gave guidelines as to how to adjust net income numbers, using different percentages depending on whether the special items were positive or negative. What I do to calculate “real” net income is to subtract 80% of special items for the period. But that’s a rougher approximation than the study recommends.
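In P123 terms, that adjustment might look something like this (a rough sketch, assuming TTM line items; @AdjNetInc is just an illustrative name):

ShowVar(@AdjNetInc, NetIncBXorTTM - 0.8 * SpcItemsTTM)

Since SpcItems() is positive for gains and negative for charges, subtracting 80% of it strips out most of the special-item effect in either direction.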

- Yuval

Yuval,

Thank you.

I will read vigorously.

//dpa

Actually, much of the classic Graham-Dodd text addressed recasting the financials so as to eliminate those items, which must be reported because the job of the accountant and the statements is to disclose, not to analyze.

The rationale for eliminating special items as best we can is that (i) we are interested in the future, not the past; (ii) special items impact the past, often in a big way; but (iii) special items presumably won’t be present in the future. We want, ideally, to be able to recognize and invest in a company capable of growing 20% per year without being misled by a -12% growth rate that was caused by a non-recurring special item.

Now, here’s the fun part. If we build models and test the effectiveness of efforts to factor out specials, don’t assume you will always like the results you see – definitely a counterintuitive situation. It’s a head-scratcher, but I think two things are happening.

One is simply the data culture in which we live: the databases were designed by, and quants often are, people who never heard of Graham and Dodd and know nothing about companies, fundamentals, or stock analysis. So a lot of investment-trading money moves around as if special items were not at all special. And in a supply-demand marketplace, might (excess supply or excess demand) makes right, whether or not it’s logical.

Another is that special items can be read as useful information – not as value but as noise (as I use those terms in the context of the p123 strategy-design course). Standard analysis presumes normal business trends and normal interpretation of ratios to come up with a logical valuation. But the real world is usually very much non-standard. Oddball things happen all the time, and special items draw our attention to them at the company level; if we harness the information well, we could develop strategies that capture potentially rising or falling levels of noise.

Thank you, Marc.

I get what you’re saying about the developers not thoughtfully differentiating between economically relevant and irrelevant items. I realize Compustat data is way better than the competition in this respect, but it lacks the granularity needed to easily reconcile the differences between “economic” and GAAP earnings.

For example, ExpNonOp() captures several aspects of a business which I would consider to be “economic,” like equity in earnings, capitalized interest, royalty income, and “other” expenses. However, some of these items can sometimes become conflated with other “core” line items like IntExp() and Sales(), which can lead to double counting. As beautiful a thing as the Compustat data model is, there are nuances which complicate top- and bottom-line reconciliation.

Also, SpcItems() contains some information which I would consider important, such as write-downs. Asset impairments and write-downs that occur frequently should be – IMO – considered an operating expense. While these are not typically cash expenses in the period in which they are recognized, they represent malinvestment that was capitalized within historical earnings.

The best way I’ve found to deal with SpcItems() is to average them over a sufficiently long period. For example, a massive write-down in one quarter reflects an overstatement of previous earnings. We know these SpcItems() are basically irrelevant over the short run; i.e., one shouldn’t extrapolate too much from one bad quarter or year. However, we also know that they have economic relevance when averaged over the past several quarters or years.
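One way to express that in P123, as a sketch assuming five annual periods and the ZERONA handling used later in this thread (@AvgSpcItems5Y is an illustrative name):

ShowVar(@AvgSpcItems5Y, (SpcItems(0,ANN,ZERONA) + SpcItems(1,ANN,ZERONA) + SpcItems(2,ANN,ZERONA) + SpcItems(3,ANN,ZERONA) + SpcItems(4,ANN,ZERONA)) / 5)

For a chronic write-down taker, @AvgSpcItems5Y will be persistently negative, which is exactly the economic signal a single quarter’s figure obscures.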

David -

Two minor comments:

  1. Stephen Penman advocates reconfiguring the income statement and balance sheet to separate out operating stuff from financial stuff. So you do certain things with lines like ExpNonOp and operating income ends up becoming NOPAT. This stuff is not easy to do with Compustat data, and I have major problems with a number of Penman’s basic assumptions, but if you’re interested you should look at his textbook Financial Statement Analysis and Security Valuation.

  2. P123’s numbers for operating income already take a lot of this into account. They’re not always the same as what you’ll find on the company’s financial statements – they’ve been adjusted in a very sensible way, as far as I can make out. I tend to use OpInc when I want to find out what a company is really doing rather than what they say they’re doing.

- YT

All:

When P123 switched data vendors, some systems lost a lot of alpha. [See post: There goes my alpha] A lot of that came from P123 and the new vendor trying to fit the data into GAAP and its extensions. In other words, P123 and the new vendor started trying to report what was more accurate from an accounting point of view rather than what the companies were actually reporting. This is what caused the loss of alpha. I tried, in vain, to point out that fitting the data to Marc’s point of view made it harder to use in other ways. Unfortunate.

Bill

Yuval,

I had not heard of Stephen Penman. I will check him out.

Also, I agree that OpInc is a pretty good measure which takes a lot into account.

Bill,

That’s good information which goes back before my time. Context is definitely important to me.

//dpa

That is a problem, but unfortunately one that is not readily solvable, since maximum granularity often requires the footnotes and complete granularity would require changes to accounting rules. My biggest gripe is that companies are not required to, and usually don’t, report the after-tax impact of special items. This is a situation in which we do the best we can and, ultimately, rely on the saying that it’s better to be vaguely right than precisely wrong. My ways of coping with the inevitable “mis-specifications” that will impact our models are diversifying the number of positions (I won’t use a port as small as five except in special circumstances) and diversifying factors (i.e., to measure growth, I’d rather use 5-10 factors as opposed to one; I’m not bothered by the correlations). I’m more interested in diversifying away the impact of inevitable mis-specifications.

I happen to agree with this.

But whether one agrees with this or not, it is similar to what someone studying machine learning might say about the problems of trying to fit the in-sample data too closely.

And, really, I think Marc has the same meaning here. “Mis-specification” is a cause for “noise” in the data. And trying to fit that noise can be a disaster—no matter how you look at it.

-Jim

Hi all,

Looking at all this tonight, I am quite puzzled by the data values from the items in the P&L tree from Primus (top of this thread) for SP500 companies.

  1. In particular, how do NetIncBXor() or NetIncBXorNonC() relate to IncAftTax? IncBTax? SpcItems? ExpNonOp?

  2. I am surprised by the sentence in the function reference help for NetIncBXor(): “CompuStat calculates this item as pretax income less total income taxes less minority interest.”
    Shouldn’t it also remove Extraordinary items by definition?
    I would have expected something closer to ~ (IncBTax - SpcItems - ExpNonOp) * (1 - (TxRate%/100))

  3. Do NetIncBXor() or NetIncBXorNonC() exclude SpcItems as well? (in addition to excluding Extraordinary Items)

  4. When I try to manually recalculate an after-tax net income that excludes special items and non-operating expenses, I get nowhere close for too many of the SP500 companies.
    Am I getting something wrong?
    Screener → ShowVar(@Computed_NetIncBXorTTM, (IncBTaxTTM - SpcItemsTTM - ExpNonOpTTM)*(1- (TxRate%TTM/100)))
    to be compared to → ShowVar(@NetIncBXorTTM, NetIncBXorTTM)

  5. Simply displaying the values of TxRate%A for the SP500 companies is a bit surprising: it ranges from -607% to +356%, with a bunch of N/As in between.

  6. With such a wide tax-rate range, my final concern: even using what Marc G suggests in the training packs (which I am going to call the least-worst undistorted underlying operational income: NOPAT = OpIncAftDeprTTM*(1-(TaxRate%TTM/100))), this is certainly quite off the mark, and we might as well apply a blanket 35% tax rate across the lot (see the sketch after this list).
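To make the comparison in point 6 concrete, here is a rough sketch of the two NOPAT variants side by side, reusing the tax-rate item exactly as written in point 6 (the variable names are illustrative, and the 35% blanket rate is Jerome’s, not a Compustat figure):

ShowVar(@NOPAT_CoRate, OpIncAftDeprTTM * (1 - TaxRate%TTM/100))
ShowVar(@NOPAT_Blanket, OpIncAftDeprTTM * (1 - 0.35))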

Any input is welcome. I realize this msg is a bit of a hard read…

Thx

Jerome

Can of worms, Jerome. Can of worms.

Compustat uses an articulating financial statement model for MOST companies. There are some exceptions, I think, because S&P Capital IQ and Compustat cross-pollinate their data collection streams. There is SUPPOSED to be a firewall…

Anyhow, the closest reconciliation you’re gonna get is:

ShowVar(@ModeledOpInc, Sales(0,ANN,ZERONA) - CostG(0,ANN,ZERONA) - SGandA(0,ANN,ZERONA) - RandD(0,ANN,ZERONA) - DepAmort(0,ANN,ZERONA))

That will hold for about 93% of the Compustat universe. You can get 97-98% correspondence if you can figure out whether to treat RandD as an expense or as a capitalized asset.
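One way to probe the RandD question per company, as a rough sketch in the thread’s own conventions (@WithRD, @NoRD, and @RDExpensed are illustrative names, not P123 items):

SetVar(@WithRD, Sales(0,ANN,ZERONA) - CostG(0,ANN,ZERONA) - SGandA(0,ANN,ZERONA) - RandD(0,ANN,ZERONA) - DepAmort(0,ANN,ZERONA))
SetVar(@NoRD, @WithRD + RandD(0,ANN,ZERONA))
ShowVar(@RDExpensed, Eval(Abs(OpInc(0,ANN,ZERONA) - @WithRD) <= Abs(OpInc(0,ANN,ZERONA) - @NoRD), 1, 0))

@RDExpensed flags whether the expensed-RandD version of modeled operating income lands closer to reported OpInc() for that company.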

Then we have the following to reconcile:

ShowVar(@ModeledIncBTax, OpInc(0,ANN,ZERONA) - IntExp(0,ANN,ZERONA) + ExpNonOp(0,ANN,ZERONA) + SpcItems(0,ANN,ZERONA) )

This will match for about 87% of the universe. Some special items are included as extraordinary items, but not all.

Following that, only income taxes and minority interest remain to get to NetIncBXor(). But if you want to reconcile GAAP net income, you’re also going to need the statement of comprehensive income, which is not available to us and is mostly irrelevant to investors anyway.
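In code, that last leg might look roughly like this (a sketch only; minority interest is not exposed as a P123 item, per the tree at the top of the thread, so the modeled figure will run high for companies with minority interests):

ShowVar(@ModeledNetIncBXor, @ModeledIncBTax - IncTaxExp(0,ANN,ZERONA))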

Also, WRT point 6, your intuitions are correct. Better to look at taxes as an operating expense rather than as a rate. You might also look at it as a regional and industry-specific problem:
setVar(@Group, eval(Country("USA"),0,eval(Country("ATG,BHS,BRB,BMU,CYM,DOM,VIR"),1,eval(Country("ARG,BLZ,BOL,BRA,CHL,COL,ECU,MEX,PAN,PER,URY,VEN"),2,eval(Country("CAN"),3,eval(Country("BGR,HRV,CZE,HUN,RUS,SVK,SVN"),5,eval(Country("BHR,EGY,JOR,KAZ,KWT,LBN,OMN,PAK,QAT,SAU,ARE"),6,eval(Country("BWA,GHA,KEN,MWI,MUS,MAR,NAM,ZAF,SWZ,TZA,UGA,ZMB"),7,eval(Country("AUS,IRL,NZL,PNG,GBR"),8,eval(Country("BGD,CHN,HKG,IND,IDN,JPN,MYS,PHL,SGP,KOR,LKA,TWN,THA"),9,4))))))))))

showVar(@TxRt_Group, Max(0, FSum("IncTaxExp(0,TTM)",#GroupVar)/FSum("eval(IncBTaxTTM != 0,IncBTax(0,TTM),0)", #GroupVar)))

Jerome - You have to distinguish between special items and extraordinary items. They’re completely different things.

No – NetIncBXor() does not exclude special items; you have to back those out manually.

The outliers are for companies with extremely low net income, right?

For NOPAT, try OpIncTTM - IncTaxExpTTM - 0.35*IntExpTTM. It’s not perfect, but it might work better than what you’ve got.
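The reasoning behind the 0.35*IntExpTTM term: reported income tax is levied after the interest deduction, so taxing operating income as if the company were unlevered means adding the interest tax shield back onto the tax bill (the 35% is an assumed statutory rate, not a Compustat figure). As a screener line:

ShowVar(@NOPAT_Approx, OpIncTTM - IncTaxExpTTM - 0.35 * IntExpTTM)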

Just food for thought:

One might also think of normalized taxes as “value-added taxes” (VAT) rather than income taxes. Differences between tax accounting and GAAP accounting treatments, especially with regard to the timing of DD&A, make reconciliation of a “normalized” income tax rate pretty much impossible.

For example, how is it that many U.S. companies are able to declare positive net income and still claim a tax rebate (i.e., negative tax)? Likewise, how is it possible for many other U.S. companies to declare a net loss and still pay sizable income tax?

Moreover, how complicated are your own taxes? Now, multiply that a hundredfold to get an idea about corporate taxes.

A VAT cuts through a lot of this noise to answer a simple question: on average, how much cash flow goes toward taxes?

Concept:
showVar(@VATRt, FSum("IncTaxExp(0,ANN)",#GroupVar)/FSum("OpIncBDepr(0,ANN)",#GroupVar))

A VAT, such as this, would apply only to operating income before depreciation. As such, it is not appropriate for evaluating tax shields and deductions due to interest/DD&A/ITCs.
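Once @VATRt is computed, applying it per company is a one-liner (a sketch; @NormTaxes is an illustrative name):

showVar(@NormTaxes, @VATRt * OpIncBDepr(0,ANN))

That yields a normalized tax bill proportional to pre-depreciation operating income, sidestepping the wild swings in reported tax rates noted earlier in the thread.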

Thank you both, Yuval and Primus - I agree with the approach of using some sort of averaged tax rate by industry, region, or the like.

Other than that I think “can of worms” is a good summary.

Jerome