Interesting question. Here’s my $0.02.
I don’t think there’s really any such thing as artificial intelligence. That which is referred to as AI is, in my opinion, just plain better programming wrapped up in a jazzier marketing handle.
Ultimately, from the very first computer ever invented and the very first program to the most powerful hardware and software used today, one thing has never changed: a computer does only one thing, which is to recognize whether a current is on or off. Everything else involves human-generated algorithms that decide how to translate the things humans want to do or know into on-off patterns, and how these dumb, limited machines respond to the ever-changing on-off patterns humans feed them. So actually, computers have not gained one iota of intelligence since day one. They do the exact same singular thing: deal with circuit on-off patterns.
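A minimal sketch of the point above (just an illustration, not any particular system): even a human-readable word only exists to the machine as a pattern of on (1) and off (0) states, and it is human-written translation rules, here ASCII encoding, that give the pattern its meaning.

```python
# Translate a human-readable word into the on-off pattern the machine
# actually manipulates. The "intelligence" is in the human-designed
# encoding scheme (ASCII), not in the machine.
word = "value"
bits = " ".join(f"{byte:08b}" for byte in word.encode("ascii"))
print(bits)  # five 8-bit groups, each just a run of on/off states
```

Everything fancier, from a spreadsheet to a stock screener, is layers of exactly this kind of human-authored translation stacked on top of each other.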
What has changed, dramatically so, incredibly dramatically so, is the ability of humans to do more and more with on-off. We create more and more sophisticated types of instructions we can translate to on-off and we build machines that can process the patterns in better and faster ways. What commentators today refer to as AI is, ultimately, better HI (human intelligence) used to create better software and better hardware.
Self-driving cars, for example, are not at all self-driving. They depend on better human inputs (sensors of all kinds measuring all sorts of external things) responding in pre-programmed ways, with success (getting from point A to point B) or failure (crashing, etc.) determined by the adequacy of the human efforts to figure out what sorts of inputs should be measured and what sorts of responses the dumb machine should make. If humans become good enough at these tasks, seemingly self-driving cars will become ubiquitous. If humans are unable to keep up with all the things they need to consider, then this will become a cool, but ultimately doomed, science-fair project. It all depends on the humans. (By the way, with self-flying planes, “fly by wire” is old hat and, I understand, used regularly even in everyday commercial flight. But given the “cost” of an algorithm that missed something that should have been taken into account, we’re still not willing to let it do everything, and even when it’s in use, we still want humans in the cockpit who can override. I suspect cars will evolve the same way: more and better incarnations of what we now think of as “cruise control,” while maintaining the ultimate ability of the human to take over.)
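To make the sensor-in, pre-programmed-response-out idea concrete, here is a toy sketch (all names and thresholds are hypothetical, invented for illustration): every branch is a decision a human made in advance, and the machine merely evaluates which branch the current inputs trigger.

```python
# Hypothetical toy example: a human-authored mapping from sensor
# readings to a driving response. The machine adds no judgment of its
# own; it only checks which pre-programmed condition is true.
def respond(distance_to_obstacle_m: float, speed_mps: float) -> str:
    if distance_to_obstacle_m < speed_mps * 2.0:   # closer than a 2-second gap
        return "brake"
    if distance_to_obstacle_m < speed_mps * 4.0:   # closer than a 4-second gap
        return "coast"
    return "maintain"

print(respond(10.0, 20.0))   # obstacle 10 m ahead at 20 m/s -> "brake"
print(respond(100.0, 20.0))  # plenty of room -> "maintain"
```

Whether the real car succeeds or fails depends entirely on whether the humans anticipated the right inputs and the right thresholds, which is exactly the point.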
What does it mean for us? Probably more progress along the lines we’ve already been traversing. If Graham and Dodd could be resuscitated and shown the existing p123, they’d undoubtedly be dazzled by the AI already implemented. And there’s room for more. For example, we all know the limitations on testing insofar as it can tell the future. But I already roughed out ideas on how a future-oriented “backtester” could work, and I’m not even a computer guy. Ditto market timing. Its limitations are known. And so, too, are the things we could work on to address them by adding more intelligence. It’s just a matter of the skills needed to get from back-of-the-envelope to actual use. Those skills are considerable and beyond what I can deliver, but not necessarily beyond what others in the future will likely be able to deliver as each generation builds on the efforts of those that went before.
Can things like value or momentum ever be replaced? Maybe. Ultimately, the price of a stock always was and always will be the match between the prices at which willing buyers and willing sellers will transact. The role of financial theory (from which value, momentum, etc. spring) is to help us try to anticipate the behavior of willing buyers and willing sellers. The more effective we get at this sort of thing, the better our investment results will be. And if it turns out that future buyers and sellers make choices that bear no relationship to any currently known financial theory, then so be it (and the field of behavioral finance, including the work of Robert Shiller I drew upon for what I presented here as noise trading, is already working along these lines).
Ultimately, however, there is no AI. It still comes down to better and better and better HI in control of still-dumb machines that still know nothing more than on-off.