marco
I wish it were more competitive. But when you need fundamentals + estimates + industry classification, it's either Reuters, S&P, or FDS. And if you need dead companies, keeping everything aligned with prices is a nightmare if you use different vendors. Probably a good trade with a flat stock price and earnings coming up. I should have bought their stock long ago for the same reason I bought Comcast: can't get away from them.

Portfolio123 Staff
judgetrade
Would be interested!
Jrinne
For anyone wanting to explore this further: I just signed up for it and got my "$500 credit." It did not require a credit card.

From time to time you will encounter Luddites, who are beyond redemption. --Marcos López de Prado, on machine learning for financial applications
Jrinne
"But judging from their demo they have reasonable defaults set for FactSet users. So, at the very least, we can use DataRobot to test a lot of these models and default values and just focus on the ones that work well for us. " Marco - I just want you to know that my tulip python library takes care of all of the (major) parameters for XGBoost. It is not a problem and saves a tremendous amount of evaluation time. I think that the big problem with the code that I gave out was that it was at too high a level and most people can't appreciate it until they discover how things work at a low level first. So I am going to have to give STRONG SUPPORT TO STEVE ON THIS. At least with the data I have so far. As you may recall I ran some data on JASP which gives me some control of the hyperparameters and that model was predictive of stock returns. The hyperparameters and the results can be found here: Boosting your returns So I ran the same data on DataRobot and got the image below: boosting was worthless with their hyperparameters. This is their implementation of boosting (XGBoost in particular). DataRobot runs a lot of models very quickly. It looks like they may optimize the hyperparameters for XGBoost to some extent. BUT not like Steve does in his Colab program or even what I did with JASP, it seems. I think DataRobot MAY not optimize the hyperparameters like you would want if you were going to put money into a system. May not spend the computer time fully optimizing all of the hyperparameter for the multitude of models that they run so quickly. And some human art can be involved in finding the best hyperparameters. And some of the ML/AI models cannot be expected to work with rank data (e.g., Ridge Regression). Ridge regression could work well with Z-score when P123 makes that available. But keep in mind ridge regression is about 3 lines of code in Python (once you have downloaded the libraries). 3 lines of code for ridge regression and a few additional lines of code in Scikit-Learn for the cross-validation. I am not claiming every newbie could do it without a little guidance from Steve (and other members), P123 or a combination of both I need more data on this. I am not done and I do not have a firm conclusion on any of this. This is just one piece of data for me. But score a point for JASP and some human intervention for now. And a point for what Steve is doing over at Colab. I will look at this with XGBoost. Take a good long look with better data and a hold-out sample that will not have any potential for data-leakage whatsoever. Take my time getting it right. And also continue to look at this over at DataRobot (as long as my $500 credit holds out). No conclusions for now but I thought I would share some of what I have at this point. Jim ![]() From time to time you will encounter Luddites, who are beyond redemption. --de Prado, Marcos López on the topic of machine learning for financial applications |
Jrinne
I have had a chance to play with DataRobot a little. I just wanted to say that I have gotten some results that are better than the ones above, although not necessarily anything I would invest in or recommend investing in based on the results below. In fact, the results below, as well as what I get in Python with the same data, make me inclined not to invest in the strategy, or, if I do, to use P123 classic (perhaps with a bit of factor analysis) for this data and these factors. This is a holdout test set (not validation data).

There is some capacity to change the hyperparameters in the program, which I had not found when I wrote the above. I like the program at this point. I am suspicious about the pricing, as I cannot find it anywhere and they have not contacted me yet, although the fact that I have not burned through my $500 credit is a plus, I guess.

Marco, I do not know if you are still looking at DataRobot, but I did not want to leave an unfair opinion as my last post on this. I am still not an expert on DataRobot by any means.

Jim

[attached image: holdout test set results]

From time to time you will encounter Luddites, who are beyond redemption. --Marcos López de Prado, on machine learning for financial applications
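For readers following the holdout-versus-validation distinction above, here is one hypothetical sketch of a time-ordered train / validation / holdout split. The file name, column names, and cutoff dates are assumptions for illustration, not anything taken from DataRobot or P123; the point is only that the holdout rows stay untouched until every hyperparameter choice is final.

```python
# Minimal sketch of a chronological split (illustrative dates and file only).
import pandas as pd

df = pd.read_csv("factors.csv", parse_dates=["date"]).sort_values("date")

train      = df[df["date"] < "2016-01-01"]                                    # fit models here
validation = df[(df["date"] >= "2016-01-01") & (df["date"] < "2019-01-01")]   # tune hyperparameters here
holdout    = df[df["date"] >= "2019-01-01"]                                   # score once, at the very end
```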