Risk: Thinking creatively

This is an effort to start a new thread based on a topic introduced (by yours truly) into the UI thread. Here’s a copy of my initial post there:


If users want the UI, I have no objection to it being added.

Still, I can’t imagine myself ever using it. While I get what it tries to do, I think it does its job very ineffectively, much like Beta, Sharpe, Sortino and Max DD. I’ve said before and I’ll repeat here that the whole area of risk management is one that has been very badly bungled by the quant community as a whole – and UI is just another instance of, pardon my language, mathematical masturbation.

The problem is that all these measures focus entirely on historical stock prices, which tell us nothing about the factors that may cause a stock to sustain an exorbitant loss in the future.

Stock price movements are caused by combinations of three things: market factors, company-specific factors, and randomness (this is an important contribution from academic finance; too bad they forgot all about it when they moved on to other forms of risk analysis).

The 2008 drawdown, the one that supplies Max DD for those of us who test that far back, was almost entirely due to market factors and randomness. And we all know, in retrospect, what those market factors were: evaporation of liquidity and credit due to problems in the banking sector, primarily in real estate, etc. etc. etc.

Randomness is more a convenient word than a genuine phenomenon. It simply refers to factors we can’t effectively measure and bring into our database. In 2008, a big item that fell into this category was “Who owns the stock and how badly do they need to raise cash?” Oddly, shares of better quality companies were the ones best able to attract reasonable bids and, hence, were the ones most aggressively sold by strapped hedge funds, etc. Those of you who were here at the time should remember how frustrated we were about that, and forum threads with titles like “factor reversal.”

The third factor, of course, is company-specific characteristics; stability of the business, balance sheet, size (particularly insofar as it provides a cushion for survival and internal diversification), etc. We can even count stock valuation. In 2008, this accounted for very little, if any, of the big drawdown. (In 2000-02, on the other hand, valuation was a huge factor.)

2011, the second-biggest drawdown many of us see, was also one in which company factors played little if any role. This was primarily a market-based thing, a news-driven crisis based on near-simultaneous blowups in the Middle East, the Euro and the D.C. budget situation.

UI, Sharpe, Sortino, Beta, Standard Deviation, Value at Risk, Information Ratio, etc. do not address any of those three relevant factors and were useless to cue investors ahead of time to the drawdowns to which they were exposed going forward. You can even see that on the stockcharts demo charts. Like other metrics based entirely on a statistical report card on stock price, UI was great at flashing warnings – after the stocks had already tanked.

That said, we really, really do need to be having some serious conversations about risk management. Rising interest rates (not really a matter of “if” but now more a question of “when”) do not have to send stock prices lower. While it is true that higher rates will one way or another boost the denominators in valuation formulas (thus pushing valuations down), the conditions that would lead to higher rates (stronger levels of business activity) would also push numerators up (thus adding to valuations). So actually, there really is no clear-cut answer as to the impact of rising rates on stock prices. But, but, but there can be a monstrous difference between “should” and “will.” Popular financial rhetoric suggesting rising rates must be bad for stocks has been so widespread for so long, I fear there are way too many in the investment community who rightly or wrongly believe it and who will act on that assumption.

So we need to really think about risk. And we should not lull ourselves into false senses of security we might get from historical share-price report cards accumulated under conditions that may not resemble those that most threaten to cause us financial pain. (Actually, on Monday Marco and I were discussing the sort of metrics I’d use to screen for R2G models, and one of the things I said was I’d eliminate R2Gs that didn’t show AT LEAST 50% max drawdown; I just assume if someone is showing less, they’re using a curve-fitted timing model). I know we all hate drawdown. But what we want is not what’s important. It’s what we can deliver going forward.

Risk control in the future will take some creativity. Forget Wikipedia. Forget StockCharts. Forget Investopedia. Forget all that other stuff. What’s out there doesn’t work. (I’d have thought people would have figured that out after 2008, but I guess not.) Think creatively right here on p123. There are a lot of smart people here and frankly, I’m a lot more interested in what can be invented/created right here than I am in rehashes of the same nonsense that has repeatedly been shown to be just that, nonsense. I encourage users to work individually and/or to trade ideas in the forums.

We can’t do anything about the randomness factor. That’s something with which we’ll just have to live.

As to market factors, I suggest trying to work with the economic data we added to the platform most recently. It won’t be easy because we have so few datapoints that reflect rising rates and/or bear markets based on economic fundamentals. But the well is not completely dry. So let’s all put on our thinking caps and see what, if anything, we can figure out with the resources we have.

Researching company factors likewise runs into downside sampling challenges. The well isn’t completely dry, but our cups aren’t exactly running over. But we do, at least, have a really terrific company toolbox with which to work. I suggest we try to develop stock strategies that mitigate stock-specific risk by modelling fundamental factors most associated with bad stock action.

I presented a sample approach earlier this year in this post: https://www.portfolio123.com/mvnforum/viewthread_thread,8333#43221.

For a more general perspective, you may also want to check this paper: http://www.econ.yale.edu/~af227/pdf/Buffett%2...ller%20and%20Pedersen.pdf.

The latter paper talks mainly about how the use of minimal-cost leverage enhanced Buffett’s alpha but also addresses his stock selection approach.

Those who do want to try to get a handle on risk will need to be a bit scrappy. We won’t find everything we’re looking for on the p123 platform right now, so we may have to adapt. (We can’t really spec and offer something that hasn’t yet been invented.) For example, in a backtest, Sortino or standard deviation might be our main dependent variables, not return or alpha. And we may have to chop up our test samples for better visibility in seeing which before-the-fact fundamentals are associated with more favorable levels of future Sortino, for example. But what the heck: I’m sure the Jobs family garage wasn’t the ideal workshop for young Steve Jobs and his pal Steve Wozniak. So let’s work with what we have right now. And as we come up with things that could conceivably be added to the p123 platform, well, you’ve seen our track record in this regard. :slight_smile:

Post # 2 from David


I like these threads. Informative. Talking about future risk:

  1. I don’t know how to model this in P123 but I have a feeling that rising interest rates will really hurt companies that have taken on a lot of long term debt to fund stock buybacks or acquisitions or just operations (like in Oil and gas). I screen now for (non-financial) companies that have lower long term Debt to EBITDA, Debt to FCF or Debt to Working Capital. I also look at companies that have negative total equity.
  2. I have a feeling that the Health Care industry could be in a bubble. They are considered a defensive sector but they have been doing well as a sector for a long time. If something changes at the government level, that could affect the sector very quickly in a negative sense. I don’t want to talk about politics but when I see a ‘defensive’ sector on a tear like this, it is not good. I think there is a black swan hidden here…

Post # 3 from me:

I like these threads. Informative. Talking about future risk:

Me too. I hope we can see a lot more of them.

  1. I don’t know how to model this in P123 but I have a feeling that rising interest rates will really hurt companies that have taken on a lot of long term debt to fund stock buybacks or acquisitions or just operations (like in Oil and gas). I screen now for (non-financial) companies that have lower long term Debt to EBITDA, Debt to FCF or Debt to Working Capital. I also look at companies that have negative total equity.

I have a couple of thoughts on the balance sheet. Given where rates are and where they have been, could we not argue that LT debt is a good thing, and likewise for recent issuance of equity? If we think the cost of capital is going to rise, as would be the case if interest rates go up, might we not want to favor companies that have already stockpiled, or continue to sit on, surplus capital? At the same time, wouldn’t companies that will need to go into the capital markets (those with lots of ST debt, or those that bought back equity and may find themselves needing to tap capital markets in the future) be disadvantaged?

Whatever we do with debt, we’d need to net out cash balances (rising rates would increase the return on cash).

We might also consider looking for companies that have spent heavily (overspent even?) for cap ex and acquisitions. They arguably would feel less need to do so going forward, as capital costs rise.

We couldn’t test this simply by building and testing a ranking system because rank tests use stock return as the dependent variable. But we might create a ranking system and then run two screens: In screen A, we say something like Rank>80. In screen B, we say Rank<20. In each case, accept all passing stocks (set Max. no. at 0). Then, compare StDev, Sortino, Beta and Sharpe.

We’re not looking to simulate portfolios here. We’re just testing ideas to see what might be associated with future volatility.

We may also want to do this as a series of short-time-span tests: 2000-02, 2007-08, 2011, summer 2013 (important given that the market acted then on fears of rising rates).
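As a hedged sketch of that two-screen comparison (everything here is illustrative: the return series are synthetic stand-ins for the weekly returns you would export from each screen), the risk-stat calculation might look like:

```python
# Sketch: compare risk statistics for two screen buckets (Rank>80 vs Rank<20).
# The return series are synthetic placeholders; in practice you would export
# each screen's periodic returns from p123 and load those instead.
import numpy as np

rng = np.random.default_rng(0)
high_rank = rng.normal(0.003, 0.020, 520)  # weekly returns, ~10 years
low_rank = rng.normal(0.001, 0.035, 520)
market = rng.normal(0.002, 0.022, 520)

def risk_stats(returns, benchmark, target=0.0):
    downside = returns[returns < target]
    sortino = (returns.mean() - target) / downside.std() if downside.size else float("nan")
    # population covariance (ddof=0) to stay consistent with benchmark.var()
    beta = np.cov(returns, benchmark, ddof=0)[0, 1] / benchmark.var()
    return {"stdev": returns.std(),
            "sharpe": returns.mean() / returns.std(),
            "sortino": sortino,
            "beta": beta}

for name, series in [("Rank>80", high_rank), ("Rank<20", low_rank)]:
    print(name, {k: round(float(v), 3) for k, v in risk_stats(series, market).items()})
```

The point is only the relative comparison between the two buckets, not the absolute numbers.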

  1. I have a feeling that the Health Care industry could be in a bubble. They are considered a defensive sector but they have been doing well as a sector for a long time. If something changes at the government level, that could affect the sector very quickly in a negative sense. I don’t want to talk about politics but when I see a ‘defensive’ sector on a tear like this, it is not good. I think there is a black swan hidden here…

That’s a different issue and you may be on to something. I, too, have been wondering about this. But there may be some interesting pockets of the business where demographic pressures (aging populations) may trump rotten insurance reimbursements.

Post # 4 from David:


Good point about debt. If companies have already loaded up on it at low rates (and they don’t have adjustable rates) + can use it effectively, then they are advantaged against others in their same industry/sector that have not (but will need to do so in the short term). As long as they can handle the debt level and they get a decent return. I know there is a cost to selling new equity but I look at debt more because common shareholders don’t, of course, have any legal claim to assets.
Related: I am reading Tortoriello’s Quantitative Strategies for Achieving Alpha and created this formula, higher is better (according to him):
((EqPurchA-EqIssuedA)+(DbtLTReducedA-DbtLTIssuedA))/AstTotA
Tortoriello likes to see both long term debt being reduced as well as number of shares going down, as a percentage of total assets.
But what we are talking about now is the opposite of this, at least in the short term. So lower might be better over the next couple of years…
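For concreteness, here is that factor as plain arithmetic, a hedged sketch with made-up numbers (not figures from Tortoriello's book):

```python
# Tortoriello-style net-financing factor from the post: equity bought back net
# of issuance, plus LT debt reduced net of issuance, scaled by total assets.
# Higher means the company is returning capital rather than raising it.
def net_external_financing(eq_purch, eq_issued, debt_reduced, debt_issued, total_assets):
    return ((eq_purch - eq_issued) + (debt_reduced - debt_issued)) / total_assets

# Illustrative annual figures in $millions (invented for the example):
returner = net_external_financing(500, 50, 300, 100, 10_000)  # buying back, deleveraging
borrower = net_external_financing(0, 200, 0, 800, 10_000)     # issuing equity and debt
print(returner, borrower)  # 0.065 -0.1
```

As David notes, the desirable sign may flip here: ahead of rising rates, a company that already locked in cheap long-term debt (a lower score) might actually be advantaged.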

I also came up with his ROIC:
EVAL(EqTotA>0,(EBITDAA-CapExA)/(ComEqA+DbtLTA+PfdEquityA+NonControlIntA),NA)
For ROIC that uses new debt issued at lower rates, this formula needs to be looked at differently as well.
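Translated out of the p123 EVAL expression, the same calculation looks like this (illustrative numbers only):

```python
# ROIC-style formula from the post: (EBITDA - CapEx) over total invested
# capital, returning None when total equity is non-positive, mirroring
# EVAL(EqTotA>0, ..., NA).
def roic(ebitda, capex, com_eq, lt_debt, pfd_eq, minority_int, total_eq):
    if total_eq <= 0:
        return None
    invested_capital = com_eq + lt_debt + pfd_eq + minority_int
    return (ebitda - capex) / invested_capital

print(roic(1200, 300, 4000, 2000, 0, 0, 4000))  # 0.15
print(roic(1200, 300, 4000, 2000, 0, 0, -100))  # None
```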

All of this should be in a different thread. I don’t want to hijack this thread on the Ulcer Index.

But a timely topic. ‘Don’t just look at risk in terms of yesterday’s events, but tomorrow’s as well’.

And the final post imported from the other thread, from steve:


and UI is just another instance of, pardon my language, mathematical mas…

I was going to make a joke about Pee Wee Herman being a fundamentalist but usually my jokes are misinterpreted. So anyways Marc, just keep your hands above the table where we can all see them, mathematically speaking :slight_smile:

I would like to give my 2.4 cents Canadian on the topic of risk.

It seems to me that (forward) risk is a function of leverage. Risk, like leverage, isn’t about external events. External events cause stock market corrections. But without leverage that’s all they are… corrections. To go from a market correction to an out-and-out bear market one needs leverage. I suspect it was no coincidence that in 1929 investors were able to borrow 10x their capital for investment while the stock market eventually lost 90% of its value. Leverage amplifies an existing condition, whether it be good or bad, but it isn’t the root cause of a market collapse.

I saw a graph recently that showed general market leverage versus the S&P 500. (Unfortunately I can’t find this now). The amount of borrowed money for equity investments was at an extreme high immediately before the market collapse in 2000, 2007, and to some extent 2011. So borrowed money can’t tell you when a market crash will happen, but it may tell you how far you should expect the market to fall, and how much general market risk you are incurring.

And it is possible that the 2008 “factor reversal” that Marc mentioned was due to hedge funds de-leveraging. Or at least that is my theory. So to some extent this phenomenon is explainable and something to look out for in the future.

There are several kinds of leverage, and to assess risk one must account for all of them. One kind was already mentioned: how much leverage hedge funds, ETFs, and commercial/professional investors have.

Another type of leverage is how much margin the individual investor is employing. In this case I’m not referring to market-neutral investments, but to investments that are not hedged.

Another type of leverage is the equity multiplier (financial leverage) incurred by companies in one’s portfolio. As mentioned in a previous post, this will be a problem if interest rates do in fact rise. (I am in the camp that thinks they will never rise in our lifetimes except for one token face-saving increase this year.)

There may be other types of leverage that I am not aware of. If you consider all of these leverages then you will have some idea of your portfolio’s forward excess risk. I say “excess risk” because there is still the risk that a company will stumble, crooked accounting, lack of diversification, etc.

The other point that I would like to make is that the more borrowed money there is applied to investments, the further away from “the truth” we are regarding company fundamentals. This can work for the fundamentalist or against him/her.

Steve is definitely right about the leverage issue. It’s relevant for individual companies and, as he discussed, it’s relevant for assessing the economy/market as a whole.

Company leverage is, of course, easy to assess in p123. We have, obviously, the debt info.

We should also try to tease out operating leverage (the role of fixed costs relative to variable costs, considered separately from interest expense). This can be hard since GAAP doesn’t require this sort of reporting; it’s strictly a function of internal accounting. We may have to create some mega-boolean factors that assign Y/N to companies based on GICS exposure. Not sure . . .

The sort of leverage about which Steve spoke would, under my three-part framework, come under market factors. I’ll look again at our existing economic presentation to see if there’s any way we can get at it from that, and if not, I’ll hunt for anything else out there that could feasibly be added in the near term. This is important, so even if we can’t get at it directly, it would be worthwhile to try to figure out a way to approach it indirectly.

I would like to provide some more food for thought, this time for the issue of diversification. This issue could be a little controversial because some people prefer to address diversification at the book level, and they are perfectly happy with non-diversified models. In any case, I believe it would be possible to come up with a diversification metric that could be applied at both the port and book levels.

Diversification should be an input to the risk model. When a portfolio is stressed by external factors, a diversified portfolio should in theory react better than a non-diversified portfolio. Well actually that is not true, a non-diversified portfolio may perform much better if it is optimally positioned for the external factor. The non-diversified portfolio may also perform much worse. This issue is similar to leverage in that future results may be great or terrible, but the one thing we know is the risk is higher than with a well diversified portfolio.

Anyone who has done a lot of portfolio design knows that if you start with acquisition rules that provide some assurance of a diversified port, you will likely be disappointed with backtest simulations. If you have any doubt then try to design a port with SecWeight < 11 (Sector Weight less than 11%). Yes it is possible to design a port with this constraint, but good luck generating superior backtest results. The same can be said for MaxCorrel(50,1) < 0.3 (maximum correlation of stock with the existing stocks in the portfolio is less than 0.3). The diversified port will have substantially lower backtest performance, but may have less risk going forward.

Because of the obvious difficulties designing diversified portfolios, designers turn a blind eye to this issue, and instead build models with a narrow focus that give mind-blowing backtest results, but in so doing the port dances and weaves around market troubles. This is possible because a narrowly focused port is more agile than a diversified port. Hence the designer is using market timing to dodge disaster in his (or her) backtests. The designer may not even be aware that he is doing so.

So what I would like to throw out as a possible metric is to calculate the correlation of each stock to the overall portfolio through time. Then the correlations would be combined with the lower combined correlation being better. I defer the formula of how the individual correlations are combined to a statistics major, someone more aware of potential issues than I am.

Then we can go one step further by dividing the correlation metric into the model performance:

performance metric = actual performance / correlation metric

Thus the actual performance is discounted by how correlated the individual positions were throughout time.
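A minimal sketch of that metric, with the caveat Steve himself raises: how to combine the pairwise correlations is an open question, and the simple off-diagonal average used here is an assumption, not his specification:

```python
# Diversification-discounted performance: average the pairwise correlations of
# the holdings' return streams, then divide realized return by that average.
# The combination rule (plain mean) is an assumption left open in the post.
import numpy as np

def diversification_discounted_return(position_returns, total_return):
    """position_returns: 2-D array, rows = periods, columns = holdings."""
    corr = np.corrcoef(position_returns, rowvar=False)
    n = corr.shape[0]
    mean_corr = (corr.sum() - n) / (n * (n - 1))  # mean of off-diagonal entries
    return total_return / mean_corr, mean_corr

# A deliberately "focused" port: five holdings driven by one common factor.
rng = np.random.default_rng(1)
common = rng.normal(0.0, 0.02, 260)[:, None]
focused = common + rng.normal(0.0, 0.005, (260, 5))
discounted, mean_corr = diversification_discounted_return(focused, total_return=0.40)
print(round(mean_corr, 2), round(discounted, 2))
```

Note the division is fragile when the average correlation is near zero or negative; a real implementation would need a better-behaved discount function.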

Steve

Interesting idea, Steve. It reminds me of the concept of “maximum diversification” (see link below), which is similar to minimum variance approaches in that it focuses on risk. Of course, both maximum diversification and minimum variance depend on historical inputs (correlation and volatility), which can be problematic.

http://www.tobam.fr/wp-content/uploads/2014/12/TOBAM-JoPM-Maximum-Div-2008.pdf

I agree that more forward-looking measures of risk might be helpful, and the best examples would be “quality” measures like company leverage and profitability. Beyond that, top-down macroeconomic metrics might also help, though I am skeptical that small investors like the P123 community would have the same level of success with them as with the current focus on illiquid and small-cap securities, which are uninvestable by large institutions.

Marc raises a good point that traditional risk measures (e.g., Sharpe, volatility, tracking error) can be flawed, though I still think they can be good tools, as long as one understands the potential limitations of these metrics, particularly how they can be skewed by curve fitting and market timing.

Another area of risk management is vulnerability (or sensitivity) to movement of different asset classes. Ports may be quite sensitive to:

  - Cash: currency exchange rates (i.e., importing versus exporting companies)
  - Bonds: interest rates (leverage and other company fundamentals)
  - Commodities: price of oil, etc.

Correlation metrics (asset versus port equity) could be established for each of these areas. It is important to ignore in-sample figures because the backtest optimization renders the figures meaningless (in my opinion).

I think there is enough OOS data on many R2G models to make this a meaningful metric.

Steve

Correlation metrics versus various risk factors is quite advanced, and would be similar to the approach that Barra or Axioma uses for risk management. This capability would be great, but is a huge project and would likely require a lot of resources from P123.

Over the last 4 years I have been concentrating on developing high-performance, non-overlapping 5-stock Ports that have low correlation to each other. By combining these Ports into a Book, the Book has significantly lower risk than the individual Ports, with much higher performance than any diversified single Port I have been able to develop.

Steve’s idea of a diversification metric would be very helpful for comparing, say, four low-correlated 5-stock Ports in a Book to one diversified 20-stock Port. I currently trade 11 Ports with no good way to evaluate their total return and risk except for combining them into 2 Books and looking at how close they are to a straight line in the log chart, or downloading all 11 Ports to Excel and calculating risk & return there. The straight-line approach in a Book is not very useful for comparing one combination of 5 Ports to another one without actual values. It is not obvious which is better just looking at charts. Downloading Ports to Excel is a pain when trying to compare many port combinations.

The UI & UPI would also be helpful. I was able to use them effectively back in the '90s while trading between Fidelity Select mutual funds. Although they were able to show me which fund was best in the past, they were a little too slow to signal good timing points for switching between funds.

“Correlation metrics versus various risk factors is quite advanced, …”

Alan - I’m not sure what the big boys do, but I was thinking of something simple like the correlation of the port’s equity curve versus the price of oil, the US Dollar, or US interest rates. This is easily done and (I think) would have value. As I said before, it would have to be out of sample to be meaningful, of course.

Steve

"Of course, both maximum diversification and minimum variance depend on historical inputs (correlation and volatility), which can be problematic… "

Alan - that is an interesting article you dug up. But unfortunately, I’m not a mathematics PhD, so it would probably take me years of excess free time to digest what is being said.

So in my simple mind I prefer to think this way: the correlation between stock holdings on a historical basis provides insight into the functioning of the portfolio, even if it is historical. It is not a measure of performance such as the Sharpe Ratio, and it is not a method of constructing a portfolio (as the article implies, I believe). It is simply a determination of how “focused” (or agile) the portfolio is. If a portfolio is focused, then the performance of the model should be discounted accordingly, as it is easier to data mine or over-optimize. So I don’t think that use of historical data is an issue in this case.

There is an added benefit to this concept in that high frequency market timing will show up if the metric is designed to account for it. i.e. if the switching in and out of securities is part of the correlation algorithm then the high frequency timing will translate into highly correlated security positions.

Steve

Hey all, how about we try this: a publicly available, editable Google Doc that keeps track of ideas being put forth, so they don’t get lost under the discussions.

I set one up here:
https://docs.google.com/document/d/1XPNS2huhShPF9vvaCDZ2-Qxpbr9Oao9Y2zfgfWc_q0s/edit

Marco, Paul – Stop laughing! (They know how much I really hate all things Google Drive, but it does seem to work well for this.)

I started it with some of my own ideas and ideas others put forth so far, although Steve may want to edit the references to his suggestions re: leverage and correlation.

This doc is just a brainstorming list; no data specs, no formulas. I figure when somebody wants to start getting into detail on any particular factor, a new Google Doc could be started just for the factor; i.e., a Google Doc called “Risk Indicators: Portfolio Diversification,” “Risk Indicators: Company Leverage,” etc., etc. etc.

Does this approach seem like it might be productive?

Marc - I can’t edit the doc without making a copy. Is there a way around this? I’m not familiar with Google Docs.
Steve

I think I’m going to need a training course in google docs :slight_smile:

(1) I can view the original document but can’t edit
(2) I made a copy and edited the copy. But there is no save button.
(3) I’m trying to share the edited document but apparently the P123 group wasn’t “copied”. There doesn’t seem to be a way of assigning a pre-existing collaboration group…

Help ;(

All:

There's risk and then there's risk. A large asteroid impact (or any event of similar magnitude): even if I survive, my biggest concern is not what's going on in my Ports. Maybe there's no market, period. For everything else, history provides a guide.

So, the place to start is with historical simulation. Take a nice long time frame. Plot Ln(#Equity) vs. time. Go ahead and plot a least-squares line through that....

Bill

P.S. FWIW: R is helpful and free. GLTA.
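Bill mentions R; here is the same exercise sketched in Python, on a synthetic equity curve:

```python
# Plot-free version of Bill's suggestion: regress ln(equity) on time with a
# least-squares line. The equity curve is synthetic (drift plus noise).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(520)  # weeks
log_eq = np.log(100.0) + 0.002 * t + np.cumsum(rng.normal(0.0, 0.01, 520))

slope, intercept = np.polyfit(t, log_eq, 1)
residuals = log_eq - (slope * t + intercept)
print(f"weekly log growth ~ {slope:.4f}, residual stdev ~ {residuals.std():.3f}")
# How far the curve sags below the fitted line is one rough drawdown gauge.
```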

I am surprised there is no mention of Taleb or Black Swan in the thread so far. Sorry, could not resist the temptation

Oops, I screwed up and checked off anybody being able to view. I switched it to anyone can edit.

This is not a conventional approach to risk, and I’m looking for something new because we’ve seen the conventional approaches repeatedly fail.

I’m rejecting statistical analysis of historical share returns because historical returns have no causal relationship with potential future losses. Have you heard the phrase “lies, damned lies, and statistics?” This is a case in point. Statistical analysis is fine if and only if the models are properly specified. If the models are mis-specified, statistical analysis accomplishes nothing more than creating confusion (at best) or disaster (at worst).

Here’s a hypothetical example.

Proposition – Assume it is properly and effectively demonstrated that the probability of pedestrians being struck and killed by automobiles is very low.

Proposed Behavior – Based on that probability, one may walk across any kind of thoroughfare whenever one wishes, confident in a high probability that nothing bad will happen.

Result – Pedestrian who suddenly runs into high speed traffic is struck and killed by vehicle traveling 50 mph.

So is that a black swan event? Those who advocate for use of historical returns as measures of risk would have to say “yes.”

In this thread, I argue it’s bad statistics: a mis-specified model. It’s not correct to tabulate the probability that a pedestrian will be struck and killed by an automobile. One must assess the probability that a pedestrian who dashes onto an interstate will be struck and killed by an auto. The probability in the former model would be very low. The probability in the latter would be very high, close to 100%. In both cases, the statistical analysis may be equally capable. The difference is in the model; one is correct, one is wrong.

This isn’t such an out-of-line example. Do you really think the financial community lacked quant capabilities heading into 2008? Not at all! So-called rocket scientists were all over the Street. But not being sensitive to the underlying fundamentals of the sectors they addressed, they failed, for example, to distinguish between historical mortgage default rates and VaR as a whole, versus default rates and VaR among mortgages where the monthly obligation was X% greater as a proportion of disposable income than the historic norms.

Historical analysis is fine – IF AND ONLY IF the model is properly specified.

Assuming a stable market, I can make a good case for testing the historical experience of 30%-plus drawdowns among companies that are very small, very leveraged (operating and financial), in very cyclical businesses, suffering weak levels of liquidity, etc. I cannot make any case for testing the historical experience of 30%-plus drawdowns of stocks that had such drawdowns at various times in the past.