Thanks P123 for a good first 6 months. Looking forward to the next 6!

It helps immensely to start using a new (to me) tool successfully, and Portfolio123 has delivered.

Over the past 30+ years I’ve been reasonably successful in beating the market indexes by analyzing individual stocks for fundamental quality and value and holding them long term (1 year or more). However, I had no way to judge when to sell and rebalance, and my method of stock selection meant spending a week or more (20+ hours) evaluating which stocks to sell and which to buy while working overtime in my day job. Consequently, I rebalanced very infrequently, held many stocks well beyond the time I should have sold them (in retrospect), and rolled with the ups and downs of all market cycles.

Recently retired, I did not look forward to using my old methods more frequently but needed to manage these funds more actively. I’m Scotch-Irish. The Scotch in me is cheap, unwilling to pay someone else to do something I can do for less. The Irish in me is da–ed proud of that! The tools I used before for basic screening were free, but over the years each of them either disappeared or began charging subscription fees, and none of them gave me all the data and tools that Portfolio123 does. I needed a way to reduce the mental effort and semi-automate the process so that I would be forced to rebalance. I also wanted a routine method that I could easily train someone else to follow, in case I was no longer around to manage assets my wife would depend on.

Enter Portfolio123. I hesitated at first due to the cost of a basic membership, but after the trial period I felt there was a reasonable chance of it paying for itself many times over. I still consider myself basically a buy-and-hold quality/value investor and don’t want to rebalance more than once a month or consider microcaps. For my purposes, a modified version of the Piotroski ranking, average daily total trading of $5 million or more, and the All Fundamentals universe excluding OTC form the basis for finding the quality/value stocks I targeted for my first test portfolio of 10 stocks. After six months of actual investing, the portfolio is up 21.07% after trading costs (about $10 per trade), compared to the SPY ETF’s 8.31%. The Sharpe ratio is 2.1 and the Sortino is 3.89. You probably won’t see my portfolio(s) in the weekly highlights email for highest returns, but it is accomplishing what I want and need.

Aside: I was initially confused when the portfolio’s first rebalance happened, because it called for fewer stock trades than the same model run as a screen, so I sold and bought based on the screen. Had I followed the portfolio’s advice (hold until a sell rule is triggered, versus the screen’s always-hold-the-top-ranked), my return would have been 15.1%, Sharpe 2.38, Sortino 3.63, with less turnover. In the long run, holding until my sell rules are triggered seems to work at least as well as holding only the top-ranked stocks.
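
For anyone curious how those two statistics are computed, here is a minimal sketch (plain Python, made-up monthly returns; P123’s exact risk-free and annualization conventions may differ):

```python
import math

def sharpe(returns, rf=0.0, periods_per_year=12):
    # Mean excess return over its standard deviation, annualized
    ex = [r - rf for r in returns]
    mean = sum(ex) / len(ex)
    var = sum((x - mean) ** 2 for x in ex) / (len(ex) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

def sortino(returns, rf=0.0, periods_per_year=12):
    # Same numerator, but the denominator counts only downside deviation
    ex = [r - rf for r in returns]
    mean = sum(ex) / len(ex)
    downside = math.sqrt(sum(min(x, 0.0) ** 2 for x in ex) / len(ex))
    return mean / downside * math.sqrt(periods_per_year)

# Six hypothetical monthly returns (made up, not my portfolio's)
rets = [0.05, 0.02, -0.01, 0.06, 0.03, 0.04]
print(round(sharpe(rets), 2), round(sortino(rets), 2))
```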

I look forward to the next six months and possibly a chance to test a market timing rule that I hope will perform reasonably well.

The ability to semi-automate the process, and perhaps automate it in the future, is very helpful for me. Philosophically, I don’t believe in attempting market timing with technical analysis approaches, but I do believe it is possible to some extent using fundamental macro-economic data. The Fed Model is a start, but not the final word. Using macro-economic data should allow for better in/out-of-market decisions, and to that end I would love to see more of that type of data, or a way to include it within Portfolio123 analysis from our personally maintained data series.

Again, thanks Portfolio123 for a very nice platform that has allowed, and hopefully will continue to allow, me to beat the market at minimal cost!

Bob Galloway

Bob,

Congrats on a great start! Thanks for sharing. Sounds like you invested a lot of hours a week for decades - there should be a lot of learning and experience there to build on. I look forward to hearing more in 6 months, and I’m curious how your thoughts and systems evolve as you build more of them.

:slight_smile:

Thank you, Tomyani. I intend to give an update in 6 months, whether results remain good or not.

My methods are changing significantly. Previously, after screening stocks on a few quantitative comparisons, I would read company SEC filings for information not reflected in the numbers, check news articles and grade the company and industry news as negative or positive, mull over the potential for business-cycle changes and lawsuits, second-guess growth estimates, etcetera. Very time consuming and not always helpful.

I now want to simplify with just a quantitative approach. It will not always be correct, but it only needs to be significantly better than throwing a dart at a ticker list. I feel confident that Portfolio123 provides enough capability to do just that.

Hi Regallow and welcome to p123.

I want to chime in with a cautionary note: p123 is a great tool, but don’t get carried away. It is very easy to make backtests that show monumental returns. I think your approach is wise, taking a pre-existing and well-proven ranking system. I would, of course, suggest having more holdings, typically at least 20.

To use this tool properly, take it slow. Remind yourself that the great investors have achieved 20-30% annualised returns over the long run. There may be advantages to being small - Buffett has said he thinks he could do 50% per annum on a $1 million portfolio. To my mind, this indicates what realistic targets look like.

I think it is an unfortunate problem that new subscribers (and this happened to me too) are seduced by the fabulous returns shown by various backtests and ignore the less spectacular, but still very achievable, returns of more “tried and tested” models. In other words, if you see fool’s gold, you may set off down that path and never stop to look at the real gold.

All the best,

Oliver

Hi Oliver,

I agree absolutely. This portfolio is a test and I have allocated less than 10% of funds to it. I intend to expand the holding count to at least 20 stocks if I increase the allocation, either with the same model or a complementary one.

My target is 20-30% annualized, and I would be satisfied with perhaps 10-15% alpha. A backtest of this portfolio since 1999 shows 27.16% annualized with my market timing (21.66% without) compared to SPY’s 4.03%, and I would be thrilled if it performed nearly that well going forward. Since I’m now retired, capital conservation has grown in importance, so I place more weight on standard deviation, beta, and the potential for market timing rules that limit drawdown.

If being small is an advantage, I have nothing to worry about in that regard. Large funds definitely have problems finding ways to put money to work, and that leads to some market inefficiency in individual stock prices at all market-cap levels. Buffett tends to look for underlying value in the quality of management, an edge in the industry’s market, and long-term growth potential. These are difficult to analyze with a monthly rebalanced quantitative approach. I now tend to look for companies with good fundamentals that will hopefully attract an incrementally larger following in a shorter time frame.

I expect to do my own analyses and don’t use momentum or any other technicals. I congratulate those who use them successfully but I feel more confident trying to tweak the fundamental methods that have been most successful for me in the past.

I really appreciate all of the opinions and information provided by you and others in the forum. It’s a wealth of ideas to test and will keep me busy for a long time.

Bob

regallow
Just checking in, since it has been more than 6 months, to see what learning points you might have.
Cheers

jpetr,

The port was started 4/12/2013, so it has another 3 months to go before the second six months of results are complete. Through 280 days, the after-commissions total return (port/SPY) is 25.06%/17.45%. Annualized return: 33.87%/23.35%. Sharpe: 1.68/1.53. Sortino: 2.74/2.16. Obviously, the overall market has been very good in this period, so what happens to the model in a real downturn remains to be seen. Roughly comparing results to higher-market-cap R2Gs with a similar launch date and to public P123 ports for the same time period, I’m satisfied that it is performing reasonably well for my first effort. It’s “OK” for now. But I would like to reduce standard deviation (18.71/13.58) and beta (1.07), which may be difficult in this port without significantly impacting return.
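
For anyone checking the arithmetic, annualizing a partial-period return is just compounding it to a full year. A quick sketch (Python; small differences from the figure above come down to day-count conventions):

```python
# Annualizing a 280-day return by compounding to a full year
total_return = 0.2506                      # 25.06% over 280 days
annualized = (1 + total_return) ** (365.0 / 280) - 1
print(f"{annualized:.2%}")                 # ~33.9%, close to the figure above
```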

That said, here are my thoughts about the overall experience so far.

For a while, I couldn’t keep my hands off the model! It’s a learning experience for me, and the first trial. So every time a significant loser stock crept into the mix, I looked for ways it could have been avoided. I’ve tweaked the model 3 times with minor adjustments that will hopefully remain positive in impact. For now, I’m tweaked out. I suspect most model designers will go through a similar post-launch reaction.

It has been very helpful to have the automated analysis of buy and sell signals delivered by email. This is a key benefit of Portfolio123 for me. Regardless of what else is going on in your life (and the last few months have been very disruptive for me), it’s a great help to have “an assistant” perform the analyses and hand you the results on a fixed schedule. My portfolio is set to semi-automatic, so I follow the recommended sell and buy list and post the actual trades after they are done.

I am not interested in trying to squeeze every bit of incremental profit from each trade, so in screens and simulations I assume the next day’s average of the high and low price. My belief is that over time and many transactions, any positive or negative slippage I experience relative to that average price will be minimal. Since screens don’t allow for commissions, I include a percent slippage that roughly represents the commission costs.
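
To make that concrete, here is the assumption in miniature (Python, hypothetical numbers):

```python
# Assumed fill: the next day's average of high and low
next_high, next_low = 25.40, 24.60         # hypothetical prices
fill = (next_high + next_low) / 2          # 25.00

# Screens have no commission field, so fold commissions into slippage:
# e.g. a $10 commission on a $4,000 position is roughly 0.25%
slippage_pct = 0.25
effective_buy = fill * (1 + slippage_pct / 100)
print(fill, round(effective_buy, 4))       # 25.0 25.0625
```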

I still have a lot of learning and experimenting to do with this platform. My current focus is on ways to improve underperforming periods and on analyzing the newly available economic indicators for regime-switching and market timing possibilities.

Thank you for sharing your learning points. I joined last spring, and thanks to staff like Paul and experienced users on this site, you can jump-start your learning.

One question I do have for you and the advanced members relates to tweaking a model to the point that it is over-engineered and may not come close to its backtest in forward-looking results. What are some ways to see if you have data-mined and tweaked to the point that the model will not be representative of actual performance going forward?

Here is one thing that I just noted.
One of the things I did was to upgrade my membership from Screener to Designer. I realized that the number of trades I had was statistically insignificant: I was backtesting with 40-60 trades, compared to 200 trades now, going back to 1999.

In P123 it is easy to think you understand something, only to find out later that you are mistaken. Guess you can call that conscious competence compared to the conscious incompetence that classified me well before.

By the way, I realize that I will never reach the competence level of some of our advanced members (nor do I want to), so I include Ready2Go portfolios from members like Denny in my book of portfolios.

This is a very important but extremely complex question. There are other threads that attempt to solve this problem, and I can’t propose a reliable test method. There are general guidelines that help, but the only true (partial) test is actual out-of-sample performance in a new time period going forward. I say partial test because every model is a probability function, and there is always a probability that even a 100% valid model will not perform to satisfaction in a new environment, either for a short or a long period of time.

In essence, we would like to know the probability of a function continuing to perform in the future, but the function itself is a probability that assumes the environment (market, economies, politics, commodities, wars, natural disasters, technology, you name it) in the future is represented by the past, that our function properly accounts for it all, and that no significant factors are ignored. That is impossible in my view. Some place their faith in statistics about the function (how many variables versus degrees of freedom, how many trials were used, how many trades, and so forth). My faith is in the belief that we don’t even know how much we don’t know, and that we tend to find reasons to believe (rationalize) whatever supports our preconceived notion of what is right. Sometimes we use statistics to support our rationalizations.

I’m not pursuing an extreme-return model; those would likely require a different approach to prevent over-optimization. Guidelines I use in my own efforts:

  1. Use representative fundamental data as much as possible and technical data as little as possible. I believe it’s easy to rationalize patterns in price, volume, momentum, volatility, etcetera, and I have more faith in fundamental relative comparisons. Reversion to the mean has some value, but what happens when a new mean emerges or another factor dominates?
  2. Model with at least 10 holdings (20 or more is better) on a 4-week rebalancing schedule. If it doesn’t perform well with 10/4 but does with, say, 5 stocks weekly, then it may be overly selective and more likely to break down. This is a choice I make because I prefer models that are not highly selective.
  3. Check the data to see if NAs significantly impact rankings and filter those stocks out where possible (a bit fewer unknowns then remain in the pot).
  4. Attempt to find a model that does not significantly underperform an alternative investment or index in any year. After finding one, market timing can be applied. Sector rotation might help. This one is currently tough for the quality/value-based models I prefer. My best current model has one underperforming year that I’m slowly working to resolve.
  5. Always include either slippage or commissions representative of what is expected. Never leave them out of a model.
  6. Test each rule’s impact by varying its importance or value; does the change in impact make sense? This should be done at each level (universe, ranking, buy/sell). (See the sketch after this list.)
  7. Expect some failures. If it’s perfect, it’s not. Trying to remove all losers from consideration can easily lead to over-optimization.
  8. I have more faith in a model that passes more stocks than are chosen. For instance, if I want to hold 20 stocks but only 10 pass all of the rules, my concern about over-optimization rises. This is a “spread the risk” concern that might be handled differently depending on your approach: two models that each hold 10 unique stocks, for instance, or keeping a portion in cash when few candidates pass.
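
To make guideline 6 concrete, here is the kind of sensitivity sweep I mean, sketched in Python. P123 itself isn’t scripted this way; run_backtest is a hypothetical stand-in for rerunning a sim with one rule’s threshold changed:

```python
def run_backtest(threshold):
    # Stand-in for rerunning a sim with one rule set to `threshold`
    # and recording annualized return; toy numbers for illustration.
    toy_results = {3: 0.12, 4: 0.13, 5: 0.21, 6: 0.14, 7: 0.12}
    return toy_results[threshold]

for t in [3, 4, 5, 6, 7]:
    print(t, f"{run_backtest(t):.1%}")

# A lone spike (here at threshold 5) that collapses for neighboring
# values is a classic over-optimization warning; a robust rule's
# results should degrade gradually as the threshold moves.
```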

If you haven’t yet, read some of the threads on this issue. There are other viewpoints and lots of knowledge that could be helpful.

The extra data is very important. I’m at the Lite level and can screen back to ’99 but am limited to the latest 2 years in sims. That is OK for me for now. A limit I would love to get rid of is having only one personal ranking system. I might have to upgrade…

Agreed, however there is some value in following divergences in RSI, as an example. I gave up using technical indicators for the reasons you mention and now use a weekly chart with a 30-week moving average and volume as my litmus test. One other chart type I recommend a look at is point-and-figure charts. They started in 1933 as a price-recording approach, which got turned into a chart with targets (up/down).

I have backtested my models and subscribe to 2 Ready2Go models. It appears weekly rebalance does work. Most of my models are positioned for 5 stocks, with the exception of one model, and my subscribed Ready2Go models have 5 stocks. This may be due more to my search for value, and let’s face it, there is not much value out there. So why force it.

#3-#5 I have not thought about - thank you!

jpetr,

Sorry, I didn’t mean to imply anything negative about weekly rebalancing. I backtest better performance with weekly rebalancing, but I don’t normally develop a model at that frequency. My belief is that modeling first on a weekly rebalance, especially with a small number of stocks, is more likely to produce models with a higher chance of failure. Perhaps I’m too cautious?

There are a couple of other basic techniques that could be useful but that I don’t normally take advantage of, perhaps to my disadvantage:

One is the use of EvenID, which effectively splits the universe into roughly equal halves. From its Help description: “Use a rule like EvenID=TRUE while developing your system, then switch it to EvenID=FALSE to test the system out-of-sample.” The two halves do share the same time periods, so it doesn’t qualify as a completely out-of-sample technique.

Another is the use of Random, as in “Random<0.8”, which throws out a randomly selected 20% of the stocks that would otherwise pass the buy rules over the backtest period. This forces the model to use a slightly different set of stocks each run and can highlight models that are “fragile”.
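
Outside P123, the same fragility test can be mimicked by repeatedly dropping a random 20% of the candidate list and looking at the spread of outcomes. A minimal sketch with synthetic data (Python; the names and returns are made up):

```python
import random

# Synthetic one-year returns for 50 hypothetical candidates
rng0 = random.Random(42)
candidates = [f"STOCK{i:02d}" for i in range(50)]
ret = {t: rng0.gauss(0.10, 0.20) for t in candidates}

def run_once(seed):
    # Mimic Random<0.8: each candidate survives with 80% probability
    rng = random.Random(seed)
    kept = [t for t in candidates if rng.random() < 0.8]
    picks = kept[:10]            # stand-in for "top 10 survivors by rank"
    return sum(ret[t] for t in picks) / len(picks)

outcomes = [run_once(s) for s in range(100)]
mean = sum(outcomes) / len(outcomes)
spread = (sum((x - mean) ** 2 for x in outcomes) / (len(outcomes) - 1)) ** 0.5
print(round(mean, 4), round(spread, 4))
# A wide spread across seeds suggests results hinge on a few specific
# names rather than a broad, repeatable effect.
```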

Also, I didn’t take the time to point you toward specific threads that delve deeply into predicting future performance. If you haven’t already seen them, here are some I’m aware of:
From 2007: How to test for Robustness (and avoid curve fitting)
And 2 started in December 2013:
Proposed Study Framework - a mechanical model for … System performance
System failure: Can we predict in advance if a … system will fail?

I’m not as proficient in P123 usage as many are, and we all have different backgrounds and perspectives. In any case I hope this helps!

Bob

Said I would do it, so here are the results after one year. Although the portfolio was created 4/12/13 (a Friday), it didn’t invest until 4/15/13, so the true 1-year return is through 4/14/14 (today). I’m satisfied, since the model is focused on larger stocks (ADT > $5 million) and it had a few changes during the year.



[Attachment: ElToro1YrRisk.jpg - one-year portfolio risk statistics]

Bob:

Thanks for the update.

Brian

regallow,

Nice results.

One thing caught my attention in one of your earlier posts: a remark to the effect that you wish standard deviation were lower. Be careful about getting too carried away with this sort of thing. Stocks that rise briskly increase your standard deviation; surely you aren’t regretting any of those.

The language of risk is very well articulated in the financial academic community. But in a practical sense, I think the quality of the knowledge they’ve generated on this topic is very poor. Beta, standard deviation, Sharpe, Sortino, value at risk, etc. … all sound great on paper but are essentially worthless when you realize that all you’re really getting is a report card on what just so happened to have happened during a specific period in the past, with pretty much zero predictive capability.

This is a bigger topic than can be discussed fully in a post within this thread, but I believe that within the next fifty years or so, pretty much all the standard financial risk metrics in use today will wind up on the scrap heap and be replaced with an entirely new risk framework. I also believe that this new framework will most likely be expressed not in the language of “finance,” “statistics,” or “mathematics,” but in the language of accounting and fundamental analysis. (Many would be amazed at the quality and creativity of research coming out lately from accounting academics, who’ve been eagerly jumping into the void left by finance academicians who have completely lost their minds in mathematical nonsense; the sort of garbage that, among other things, told bankers they were doing a good job controlling risk back in the mid 2000s.) If you’re using a value-quality Piotroski-inspired approach, you’re probably doing far more sensible risk control than the current vocabulary is able to express. In fact, Piotroski is a perfect example: he’s not a finance professor. He’s an accounting professor who sits on the review boards of three accounting research journals and titled the paper that produced the model that inspired you “Value Investing: The Use of Historical Financial Statement Information to Separate Winners from Losers.”

Marc,

I found this comment interesting: “finance academicians who have completely lost their minds in mathematical nonsense.” I do not think that math is the problem, but there are far too few who know how to reasonably blend math and finance. Many in finance throw around mathematical terms they really do not understand to produce over-optimized garbage research, which leads to poor out-of-sample investment performance.

Scott

I wasn’t referring to over-optimization. I think a lot of it stems from their starting “IID” (independent and identically distributed) assumption. In the stock market we’re dealing with companies, businesses; not particles that can be subjected to Markov series, Brownian motion, etc.

Let’s consider something simple, like beta. Work with All Fundamentals. Look at p123’s three- and five-year betas, and also create one of your own with the beta function. Our beta math/logic is completely correct. Yet you’ll see many betas that are indisputably ridiculous - but the numbers have to be what they are, because that’s how the stocks moved during the respective measurement periods relative to the market.
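
For reference, beta is nothing more exotic than cov(stock, market) / var(market) over whatever window you choose, which is exactly why the window changes the answer. A minimal sketch with made-up monthly returns (Python):

```python
def beta(stock_rets, market_rets):
    # cov(stock, market) / var(market)
    n = len(market_rets)
    ms = sum(stock_rets) / n
    mm = sum(market_rets) / n
    cov = sum((s - ms) * (m - mm)
              for s, m in zip(stock_rets, market_rets)) / (n - 1)
    var = sum((m - mm) ** 2 for m in market_rets) / (n - 1)
    return cov / var

# Hypothetical monthly returns: same stock, two measurement windows
market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04,
          0.00, 0.02, -0.03, 0.01, 0.02, -0.01]
stock  = [0.05, -0.04, 0.01, 0.03, -0.01, 0.08,
          -0.02, 0.04, -0.05, 0.02, 0.01, 0.00]

print(round(beta(stock, market), 2))            # "long" window (all months)
print(round(beta(stock[-6:], market[-6:]), 2))  # "short" window (recent half)
# Same company, different windows, different betas.
```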

From the accounting side, however, there is some fascinating research that ditches the capital asset pricing model and beta and uses fundamental factors to forecast future costs of equity. Show a finance PhD a 10-K or a fundamental ratio and they’ll go “yuk, phooey,” assuming they even know what they’re looking at. Show the same thing to an accounting PhD and they’ll go “cool, I can do all sorts of great things with that.”

If you really want to see something interesting, download and read Piotroski’s paper. Notice how much effort he devotes to practically apologizing for his “heuristics” (factors selected on the basis of human judgment). I’m not sure who was giving him grief while he was working on the paper, but today we’ve heard of Piotroski and make money based on what we learn from him; as for his presumed critics, they live on in obscurity and probably haven’t helped anybody make a dime.

Thanks all.

Marc - [quote]
One thing caught my attention in one of your earlier posts: a remark to the effect that you wish standard deviation were lower.
[/quote]
It’s not my main driver, but a goal: standard deviation lower than the benchmark’s. The portfolio above did not do that in the first year, but one year is only a beginning. I’m testing a related 30-stock model that backtests a better return, lower drawdown, higher Sharpe and Sortino, and lower standard deviation. But its std. dev. is still higher than the benchmark’s, and I would like to reduce it.

If there were no predictive ability in the historical risk statistics of a model, then I should put my money under the mattress or choose the model with the highest return. But then I would be professing belief in the predictive ability of one historical statistic (return) while ignoring relative comparisons of the other stats. Why one and not the others? So I choose to believe there is some relative predictive ability in the risk measurements, imperfect as they might be.

I had to grin when I read your words, because I see very strong parallels between these concerns and what I would say about the last 50+ years of theoretical particle physics. A similar battle played out between pure math based on probabilities and approaches based on field-structure models. For now, pure math has won, but some (including me) believe that approach has missed the truth. I am just as frustrated about that as you seem to be about portfolio theory. In short, it might take more than 50 years for your vision to come true! For what it’s worth, I agree with your general position.

I would be interested in the accounting side fundamental factor research you alluded to. Got any suggestions?

Bob

Actually, it’s very easy to see that these stats have zero predictive ability. Just create some reports in p123 that show beta, etc., and then repeat the exercise for a whole bunch of different as-of dates.
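
The mechanical version of that exercise is a rank correlation between the stat measured in one period and the same stat in the next: if the stat carried information, the correlation would be high. A sketch with synthetic data standing in for exported reports (Python):

```python
import random

def spearman(xs, ys):
    # Rank each series, then compute the Pearson correlation of the ranks
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Synthetic betas for 200 stocks over two adjacent periods, generated
# independently on purpose, as a stand-in for exported p123 reports
random.seed(0)
beta_period1 = [random.gauss(1.0, 0.4) for _ in range(200)]
beta_period2 = [random.gauss(1.0, 0.4) for _ in range(200)]
print(round(spearman(beta_period1, beta_period2), 3))  # near zero
```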

And it makes sense; there’s no reason to expect predictive ability. Imagine visiting a doctor because you’re feeling severe knee pain. The doctor does not ask about any history (Did this come about while playing sports?), does not take an x-ray, and does not even do a manual examination. Would you follow his recommendation? Or would you see a different, more competent doctor, one who investigates the cause of the pain and prescribes treatment (cast, surgery, Aleve, etc.) based on that cause?

Beta is the incompetent orthopedist. It attempts to prescribe a treatment (overweight, underweight, avoid, etc.) based solely on symptoms recorded by a naive observer, with no effort to determine causes (company size, cyclicality of the business, percentage of operating costs that are fixed, balance-sheet leverage, etc.). Risk is not about volatility of returns. Volatility is a report of symptoms (my knee hurts; my stock bounces around too much). Risk is about the characteristics of the company (my knee hurts because I tore my ACL; my stock bounces around too much because the company is an airline with a monstrous degree of cyclicality, a humongous fixed-cost operating profile, and insane leverage). The characteristics of the company tend to persist, so an analysis of those characteristics can give you a sense of the level of risk you’re assuming. But beta, Sharpe, etc. … they’re just reports of symptoms on a specific day. If my airline stock falters at times when the market rallies, it might wind up with a very low, or possibly even negative, beta, suggesting conservative investors should overweight the heck out of it. Does that make sense? Yet mathematically, that would be a completely correct response.

We absolutely can analyze and forecast risk, but only if we act like competent orthopedists. We need to take the x-rays and look at the underlying factors that give rise to what we observe in the moment.

Piotroski is a perfect example of a guy who did this. Buffett does this all the time; he typically pokes fun at quants and their toys and moderates his risk by investing in what he regards as “inevitables,” companies with certain kinds of fundamental characteristics.

I’m out of town now, but when I get back, I’ll give you cites for some academic work you might find intriguing.

ROFL. You mean the geeks suck at physics too? You made my day! :slight_smile:

Bob,
Thanks for your testimonial regarding the value of a Portfolio123 membership and its benefits. I am convinced that this could be what I have been looking for as a method to grow our retirement funds to meet our financial goals over the next 10 years. My thought was to use the Piotroski model as you have done, but I am intrigued by the approach others have suggested of using another portfolio model in tandem to bring the number of positions up to 20.

I’m currently subscribed as a guest and will most likely subscribe as a Screener to gain multiple books and receive the email trade updates. Any thoughts on what you considered your second-best choice for an R2G model?

Also, I am somewhat surprised that the general consensus among members is to forget attempts to use technical analysis to time the market. Have you eliminated any timing considerations in the six months of results you posted?

Thanks,

Doris

Doris,

I think you are wise to look at P123. I’m sure Bob will add his ideas below. I don’t know what the consensus of the members is, but there are a lot of models with at least some market timing available. You might look at “Chaikin with Market Timing” for a model with a lot of technical analysis. The author is mgertein, who posted just above your post. And it is free.