The AI Revolution Is Already Here

“A rat in a maze is free to go anywhere, as long as it stays inside the maze.” – Margaret Atwood, The Handmaid’s Tale

Angelo Calvello

I’ve grown weary of claims that artificial intelligence (AI) will revolutionize the investment industry and allow investment managers to provide clients with better investment outcomes.  

While many managers are touting their AI bona fides, some are simply rebranding traditional methods such as linear regression as “AI,” while others use generative AI (like ChatGPT), machine learning, and natural language processing to enhance existing human-based investment processes.

While such augmentation could incrementally improve performance, it reduces AI’s role in the investment process to that of a handmaiden to human intelligence.  

A handmaiden never led a revolution.  

Fundamentally, this relegation of AI to handmaiden status rests on the deeply entrenched view that investing will always be a human activity. For example: “As investment management is a business driven by and for humans, AI will never fully supplant traders, portfolio managers, technologists, and other mission-critical professionals.”

This worldview disqualifies from investment use the truly revolutionary types of AI built on deep neural networks: deep learning (DL) and deep reinforcement learning (DRL).

Rather than relying on human-contrived inputs (e.g., predefined factors), these systems start from a curated dataset and, through millions of iterations, learn to make decisions autonomously. Properly designed and trained, such self-learning systems become more accurate over time. Freed of human constraints, they have achieved superhuman results in healthcare, engineering, robotics, and autonomous driving.
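To make that learning loop concrete, below is a deliberately toy sketch of a deep reinforcement learning agent written in PyTorch. The synthetic return series, the tiny network, and every hyperparameter are hypothetical illustrations chosen for brevity; this is not the method of any actual manager, only the general pattern: the system sees raw recent data, chooses an action, and adjusts its own weights in response to reward, with no predefined factors.

```python
# Illustrative sketch only: a toy deep reinforcement learning loop in PyTorch.
# The "market" here is synthetic noise with a faint hidden pattern; nothing below
# reflects any firm's actual models, data, or training procedure.
import torch
import torch.nn as nn

torch.manual_seed(0)

WINDOW = 20                      # the agent sees the last 20 returns, not predefined factors
policy = nn.Sequential(          # small neural network: raw state -> action preferences
    nn.Linear(WINDOW, 64), nn.ReLU(),
    nn.Linear(64, 2),            # two actions: 0 = stay flat, 1 = go long
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def synthetic_returns(n):
    """Toy return series: noise plus a faint cyclical signal (purely hypothetical)."""
    noise = 0.01 * torch.randn(n)
    trend = 0.002 * torch.sin(torch.arange(n, dtype=torch.float32) / 50)
    return noise + trend

returns = synthetic_returns(100_000)

for step in range(20_000):                      # many iterations of trial and error
    t = torch.randint(WINDOW, len(returns) - 1, (1,)).item()
    state = returns[t - WINDOW:t]               # raw recent returns, no hand-built factors
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                      # the system chooses its own action
    reward = returns[t] if action.item() == 1 else torch.tensor(0.0)

    loss = -dist.log_prob(action) * reward      # REINFORCE: reinforce profitable choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Scaled up by orders of magnitude in data, network size, and iterations, this trial-and-error pattern is what allows such systems to arrive at decisions no human designed into them.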

One AI researcher describes these systems as “Thor’s golden hammer. Where it applies, it’s just so much more effective. In some cases, certain applications can turn up the dial multiple notches to get superhuman performance.” 

The archetype of such superhuman performance is DeepMind’s AlphaGo (and its successor, AlphaGo Zero), a deep learning-based system that honed its play through self-play to master the incredibly complex board game Go.

AI researchers long believed it would take decades for AI to beat an expert human player at this game, but in 2016, AlphaGo defeated world champion Lee Sedol four games to one in a five-game match.

In the match’s second game, AlphaGo made a move (now famously known as Move 37) so unusual that it completely unsettled Sedol, who left the room to compose himself. At the time, observers thought AlphaGo had blundered, but Move 37 proved brilliant and was the pivotal decision that led AlphaGo to win the game. Somehow, AlphaGo knew something about the game that humans did not, and its intuition was both different from and better than human knowledge. One champion Go player remarked that Move 37 was “not a human move.”

Defenders of the investing status quo concede that systems based on deep neural networks can achieve superhuman results in some areas but argue that, because of the inherent complexity of financial markets, DL and DRL cannot be used to make investment decisions. Yet, instead of supporting this disqualification with empirical evidence, these critics offer well-worn tropes, the most common being that all investment decisions must be explainable. DL and DRL, they contend, are black boxes whose decisions cannot be mapped back to specific inputs and must therefore be rejected.

Using explainability as a necessary criterion demonstrates a fundamental lack of understanding of advanced machine learning. Unexplainability is an inherent feature of deep neural networks. “As soon as you have a complicated enough machine, it becomes almost impossible to completely explain what it does,” says Yoshua Bengio, a pioneer of deep learning research.

The academic literature indicates that while it is possible to provide broad descriptions of how such advanced AI systems work, contemporary techniques used to explain individual decisions are “unreliable or, in some instances, only offer superficial levels of explanation” and are “rarely informative with respect to individual decisions.” These techniques yield explanations that are, at best, unreliable and, at worst, wrong and harmful.

This demand for explainability also holds these AI systems to a higher standard than human-based investment processes: investment decisions made by humans, whether systematic or discretionary, cannot be fully explained either. Conveniently overlooked by these defenders is that their paradigm of AI, ChatGPT, is also a black box.

By choosing explanations over accuracy and predictive power, we exclude a priori autonomous, self-learning systems built on DL and DRL, limiting a manager’s toolkit to human-based investing methods and mainstream forms of AI that merely augment human processes, and effectively dooming clients to a cycle of underperformance.

Angelo Calvello, PhD, is the co-founder of Rosetta Analytics, an investment firm that uses deep learning and deep reinforcement learning to build and manage investment strategies.