Is this something bigger?

The markets are sharply lower in the last few days.
Trade wars and the Iran conflict are taking their toll.
Trump is crazy enough to do irrational things.

What are the timers saying, and are you taking any measures (going short, raising cash)?
I noticed new highs are not confirming the recent top.
As for myself I have raised some cash, but am still long.

I’ve developed a series of complicated time series based market timing formulae utilizing machine learning/neural networks which have been thoroughly backtested against datasets dating back to 1867 that currently signal “Sell In May, Go Away” :stuck_out_tongue:

We are likely going to see more volatility ahead of us. Other than that my market timer combo doesn’t ring any alarm bells.

Very cool that you can program neural nets. And make it work!!!

-Jim

lol

Steve,

Could you explain what you do that is far superior to neural networks?

As I recall your system is very complex (as neural nets can be too) which can be a problem. Well, complexity is a problem unless you have data going back to 1867 that is. The problems of overfitting do disappear if there is enough data—assuming the data is stationary or at least ergodic and adequately mixing.

You must already know this if you are in a position to be so dismissive of something you use so often. If you use GOOGLE search or Siri or just about anything not powered by coal you use neural nets many times each day. I am assuming your knowledge is so advanced that you have a clear mathematical understanding of the limits of neural nets that you can simplify to our level.

The search does not seem to be working now so I will depend on you to explain your method again.

Can you share with us how you avoid overfitting with your complex system? Do you use time-series information?

Seriously, what method of out-of-sample testing or cross validation do you use? Do you think time-series data can be as useful as cross sectional data? I assume your method is a good method and maybe you can help us without the sarcasm (lol was sarcastic I assume but correct me if I am wrong on this).

-Jim

Jrinne, he was just responding to my post, which was just a joke.

But, seriously, I think geov and some others outside of P123 have done some interesting stuff on seasonality. It’s a thing, even though I don’t think anyone really knows why.

Thanks.

Too bad it was just a joke.

But time-series are tough. I can see why you have not tried it in a serious way. Although I think you could if it became an area of interest for you.

-Jim

Seasonality of stock market:
The seasonality of the S&P 500 is easily verified. The S&P 500 with dividends from 1960 onward returned on average 1.92% for the yearly six-month periods May through October, the “bad-period”. For the other six months, the yearly “good-period”, from November through April, the average return was 8.47%.

I have now quantified this with Likelihood Ratios.
The time over which I tested this is from January 1960 to April 2019, which held 59 cyclical good-periods and 59 cyclical bad-periods for stocks, totaling 118 six-month periods.

Applying the concept of Likelihood Ratios, one can calculate the probability, which turned out to be 65%, that the good six-month periods from November to April will provide higher returns than the overall average return of 5.20%. (The Null Hypothesis is the default position that there is no relationship between the two measured phenomena, i.e., no association between the two groups, in which case both periods should produce the same average return of 5.20%.)
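The good-period vs. bad-period comparison above can be sanity-checked with a simple permutation test. A minimal Python sketch, using synthetic return series standing in for the 59 Nov–Apr and 59 May–Oct period returns (so the exact number will differ from Georg's 65% Likelihood Ratio figure; this is an illustration of the null-hypothesis logic, not his method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic six-month returns (decimals) standing in for the 59
# "good" Nov-Apr periods and 59 "bad" May-Oct periods cited above.
good = rng.normal(0.0847, 0.10, 59)
bad = rng.normal(0.0192, 0.10, 59)

def perm_pvalue(a, b, n_iter=10_000, seed=1):
    """One-sided permutation test of the null hypothesis that both
    periods come from one distribution (i.e., no seasonality).

    Returns the fraction of random label shuffles whose mean
    difference is at least as large as the observed difference.
    """
    r = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_iter):
        r.shuffle(pooled)
        if pooled[:len(a)].mean() - pooled[len(a):].mean() >= observed:
            hits += 1
    return hits / n_iter

p = perm_pvalue(good, bad)
print(f"mean good {good.mean():.2%}, mean bad {bad.mean():.2%}, p = {p:.4f}")
```

A small p-value means a seasonal gap this large would rarely appear by chance if the two periods really shared one return distribution.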

See also my DM, which is open for people to observe - but you would be foolish to subscribe at $1,000 per month.
https://www.portfolio123.com/app/r2g/summary?id=1531420


For the US, this too shall pass.

All of my timing indicators look fine.

Be forewarned, the European markets will be in a similar disruption in October when the ‘new’ Brexit date comes due.

Jim, my response was indeed in response to the tongue in cheek joke made…

I usually stay out of the fray, but since you asked. I traded S&P futures in the 90’s using neural networks. They were a thing back then. As I recall I had something like 25 inputs. Due to the lack of computer power in those days a single run would take hours. The signals were decent, but after a couple of years I moved on as the results weren’t much better than reading charts.

In the early years I was a chartist. This worked out great for me. However, I ultimately moved on to quantitative analysis as it is far more objective. Turned out to be a good move.

You are correct in that my system is extremely complex and has pushed the capabilities of P123 to the limit over the years. I joined in 2007 to be able to download raw data on stocks. This was then served to Excel where the heavy lifting happened.

When P123 changed the download limits it pretty much killed my research. So I had to move my Excel work onto the P123 platform. This took years to do, as I would get so far and then find out P123 didn’t have the capability I needed to get to the next step. I developed some workarounds to the download limits, but those turned what was once a 30-minute macro run into a process that took two days or more.

When P123 introduced variables, that changed everything and I was then able to essentially start writing the code that allowed me to convert from Excel to P123.

Overall it took about seven years to get my system fully running on here. I was constantly pushing the limits of P123 and would have to wait for future developments and improvements.

It now works great, though still not exactly as my Excel version did. The thing I’m waiting for that will complete the work is for P123 to introduce dynamic weights in the ranker.

I follow these threads with interest, sometimes cringing as I read how difficult some make their systems. My observation is that almost everyone on here is curve fitting, whether they know it or not. That’s just the nature of screen development, backtesting and optimization.

My system is completely objective. I don’t get involved with it except for some due diligence when reviewing the stocks that get output from the run. It’s taken years and years to port my system onto here. We have a really good thing with the work that Marco and his team have done, and continue to do.

I’ve often contemplated making my work public for the benefit of all. Trouble is, it took me from 1992 to 1997 to fully develop, and I’ve now been successfully trading it for 22 years without changing it at all. So it’s just not time yet to release it into the public domain without compensation.

Having said that I am open to dialogue about what I do if anyone is interested.

Now, back to work…

-Steve

Steve, how have your results been on a year by year basis of your model? What is your benchmark?

I am always interested in hearing about the results of long time users of tools like P123. thanks in advance.

Steve,

Thank you for your thoughtful reply. I thought about deleting my comments.

Looks like I skimmed and missed the tongue-in-cheek. I actually did look up “lol” (for alternate uses) before I posted because I had trouble believing that you were making fun of Cary’s technique. You weren’t. You were laughing WITH him, not AT him.

I did not delete it because I do not think I was too harsh on anyone. I have no complaints about “complex” techniques, as long as whoever is using them has taken whatever steps they think are necessary to avoid overfitting and they are not throwing stones at someone else for doing the same thing.

Plus, I hope to use my own neural net or other complex time-series ideas, sometime.

Thank you for sharing.

-Jim

My objective when I designed the system was to rival the top 5% of money managers in the country. Back then it meant having a 10 year annualized return north of 18%. I was successful at beating that, some quarters I would land in the top 2%.

My long term objective is simply to outperform the major indices by 5%, which I do regularly, and often by far more than that. My work did take a pretty big hit in 2008/2009, along with everyone else.

I know there are some on here that are putting up some pretty big numbers, but that seems to be with a very small number of holdings primarily in micro-caps. My portfolio invests only in listed stocks north of $10 per share. I hold 18 to 25 positions with typically 100% annual turnover. I’m into consistency, manageable volatility and compounding.

On here I use the Russell 3000 w/div as my benchmark. Thanks for asking!

Steve, thank you. Encouraging for me. I have had up and down performance over the years but am always trying to keep the faith. It is good to hear about others’ real-world experience.
I invest in the S&P 500, S&P 400 and some (liquid) Russell 3000 stocks. Mostly large caps. 65 positions over 5 ports, with annual turnover between 35% and 175%. My goal is to beat SPY over time. Not a high goal, but it can still be tough in some years. All in an IRA for retirement, so I am very conservative about it.

All this tells us is that it is an analysis of hindsight statistics.

Sevensisters:
“All this tells us is that it is an analysis of hindsight statistics.”

Unfortunately we don’t have anything else. Future statistics are not available because the future is…the future!

Well, yeah. On both counts.

These are problems that can be addressed (to some extent) with a hold-out set for cross-validation or out-of-sample testing.

Far from perfect for a host of reasons. Maybe the method could/should be discarded in many situations, but it generally should be at least considered if there is enough data to create a training and test set that are large enough. The training and test sets are combined later so there really is nothing to lose.
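For time-series data, the hold-out split also has to respect chronological order, or future information leaks into training. A minimal walk-forward sketch in Python (the function name and fold sizes are my own illustration, not a P123 feature or de Prado's exact procedure):

```python
def walk_forward_splits(n_obs, n_folds=5, min_train=60):
    """Yield (train, test) index lists that respect time order.

    Each fold trains only on observations that come before its test
    window, so no future data leaks into the training set.
    """
    fold_size = (n_obs - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * fold_size
        test_end = min(train_end + fold_size, n_obs)
        yield list(range(train_end)), list(range(train_end, test_end))

# Example: 160 monthly observations, five expanding-window folds.
for train, test in walk_forward_splits(160):
    print(f"train 0..{train[-1]}, test {test[0]}..{test[-1]}")
```

Each fold's training window expands while the test window rolls forward, so every observation is eventually used, which matches the "nothing to lose" point above.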

I would be the first to start a list of reasons why this is not perfect. de Prado’s book (Advances in Financial Machine Learning) would provide a list of problems/best practices from someone that understands this better than I do.

In this case seasonality is so well-known, discussed and published that data-snooping may make a hold-out set useless here: you would be testing data for which you already know the result. So I am personally okay with the way Georg has done this. It may even be more honest than presenting a test set while pretending you do not already know what the data will show. So a hold-out set may only be useful during the development phase and not so useful for presentation: who knows whether someone else has peeked at the test data.

So, hmm. Good points and maybe I don’t have much of a solution (for this case).

-Jim

Jim,
thank you for your elaborate comments.

Werner

EWT looks for a big decline, going from 2900/3000 to 2100/2200