FIX Implores Open Innovation in Trading, But Hedges on AI

The topics du jour were certainly different at this year's FIX Trading Community Americas Trading Briefing. Yet two themes, optimized trade execution and streamlined post-trade processing, proved that the more things change, the more they stay the same.

Last spring's event was buoyed by the ebullience of the Bitcoin bubble and the rising institutionalization of cryptocurrencies. How long ago that now feels. This year's edition, once again hosted by Goldman Sachs in New York, instead touched on developments in trading technology that have proven to have more staying power. These included the growing influence of the algo wheel concept and the steady shift of principal-based trading into electronic execution. Regulatory clarity was also debated, with the long-awaited arrival of the Consolidated Audit Trail (CAT) project, now mercifully FINRA's operational responsibility, and a newly amended Rule 606, which will require expanded disclosure of brokers' order routing practices, in the offing.

But above all, the narrative threading through the 2019 meeting was summed up by one moderator's comment: that huge areas remain where investment banks and their buy-side clients must converge on strategies and practical solutions on their own, whether that means bolstering pre-trade pricing and transparency, optimizing execution styles and outcomes around client needs, or speeding up post-trade processing.

To that end, two points carried the day: first, that stronger datasets (and data science teams) are required to meet these demands; and second, perhaps more surprisingly, that open-source technology development is currently viewed as more crucial than artificial intelligence (AI) applications in getting there.

Execution: Benign Flow First, Then Staying Dynamic

Suitably enough, the day began with a pertinent datapoint on algo wheel deployment. Despite the hype, recent consultancy research found that relatively few investment managers are using these tools today, though those that do send significant flow through them. Panelists discussed this finding, fleshing it out into a more conceptual discussion and parsing, as one speaker put it, the question of "why do we want to [use an algo wheel] to begin with?" Two angles led that discussion: one assessing the buy side's need to quantify order characteristics, and the other involving their banking partners' ability to deliver data services and algorithmic execution that are increasingly high-touch.

First, a senior buy-side quant trader boiled the matter down to the more systematic measurement of performance and execution required by the increasingly complicated, rules-based processes (around research and commissions, for example) that govern investment managers' trading operations today. This, he said, amounts to a search for better execution-quality benchmarks, and many firms still struggle to create the right foundation for their decision-making.

"Really, you want to have a dataset that is normalized and consistent across brokers and strategies, so isolating the benign flow first," he explained. "When you cross brokers, strategies, and urgencies, you want uniform volumes, sectors, and market conditions to analyze within that flow. The beauty of it comes later, evaluating with different quantitative techniques, but it starts with that benign flow. Otherwise, shoving every kind of order into the wheel just wastes time."
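In practice, that first filtering step might look something like the sketch below. It is a minimal illustration only: the column names and thresholds are invented for this example, not drawn from any panelist's system.

```python
import pandas as pd

def benign_flow(orders: pd.DataFrame) -> pd.DataFrame:
    """Keep only orders that are comparable across brokers and strategies:
    small relative to average daily volume, traded in ordinary volatility,
    and in liquid names. Columns and thresholds are purely illustrative."""
    mask = (
        (orders["order_qty"] / orders["adv_21d"] < 0.02)   # small versus 21-day ADV
        & orders["volatility_pctile"].between(20, 80)      # ordinary market conditions
        & (orders["spread_bps"] < 10)                       # liquid names only
    )
    return orders[mask]

# Within that benign subset, like-for-like comparisons become meaningful:
# benign = benign_flow(all_orders)
# print(benign.groupby("broker")["slippage_bps"].describe())
```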

One area where data is lacking, the panel agreed, is in divining why an order went in one direction, was split up, or was diverted to a trading desk altogether. "Even internally, and for lower-flow clients, we run experiments with a whole array of metrics, comparisons against benchmarks or baskets of stocks, and regression analysis to normalize as much as we can and see if there is statistical significance or not," said one sell-side speaker. "It's about tying the wheel to your broader process," he explained, "and thoroughly challenging assumptions."
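A stylized version of that kind of experiment, assuming nothing about any particular desk's methodology, might regress slippage against order characteristics and then check whether a broker effect remains statistically significant once those characteristics are controlled for:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical execution dataset: slippage in basis points versus an
# arrival-price benchmark, with a few order characteristics per fill.
rng = np.random.default_rng(0)
n = 500
executions = pd.DataFrame({
    "slippage_bps": rng.normal(2.0, 5.0, n),
    "order_pct_adv": rng.uniform(0.001, 0.05, n),
    "volatility": rng.uniform(0.1, 0.4, n),
    "spread_bps": rng.uniform(1, 15, n),
    "broker": rng.choice(["A", "B", "C"], n),
})

# Regress slippage on order characteristics plus a broker dummy: the broker
# coefficients (and their p-values) show whether routing to a given broker
# still matters once size, volatility, and spread are normalized away.
model = smf.ols(
    "slippage_bps ~ order_pct_adv + volatility + spread_bps + C(broker)",
    data=executions,
).fit()
print(model.summary())
```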

"Today, you really have to be a market microstructure expert to figure out whether a routing decision made sense in its broader context, and that can always be narrowed down to mechanical reasons," he continued. "But there's a risk: if you approach this data analysis too mechanically, you might be drawing bad conclusions. For example, a trading venue leaks information, and everyone flocks there, but maybe it doesn't always do that. At one point in time that explanation makes sense; it doesn't always. So how do you analyze those decisions in context, from the router down to micro-placement and scheduling child orders? Tweaking the dials, and the alpha in the layers underneath, you really need to almost calibrate from scratch, and that may just become unfeasible [for an algo wheel]. To interpret these things, you can draw dangerous conclusions at times. Too much static schema, when you're facing a predatory counterparty, and you're dead in the water."

So, knowing when to spin the wheel and when to go elsewhere starts with better data and transparency into order routing. FIX, for its part, has seen a renewed push to add messaging-instruction fields that can represent components of intention. As one commenter said, "we have to come together at the grassroots to push this."
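To make the idea concrete, such a field could ride alongside the standard tags of an order message in FIX's user-defined range (tags 5000 to 9999). The sketch below is hypothetical: tag 6001 and its value are invented for illustration, not part of any published FIX extension, and the session-level header and trailer are omitted.

```python
# Body of a FIX NewOrderSingle with a hypothetical user-defined "intention"
# field appended; standard header fields (8, 9, 34, 49, 56, 52) and the
# trailer (10) are omitted for brevity.
SOH = "\x01"  # the standard FIX field delimiter

fields = [
    ("35", "D"),         # MsgType = NewOrderSingle
    ("11", "ORD-1001"),  # ClOrdID
    ("55", "IBM"),       # Symbol
    ("54", "1"),         # Side = Buy
    ("38", "5000"),      # OrderQty
    ("40", "2"),         # OrdType = Limit
    ("44", "131.25"),    # Price
    ("6001", "WHEEL_PASSIVE"),  # hypothetical custom tag flagging intention
]
body = SOH.join(f"{tag}={value}" for tag, value in fields) + SOH
print(body.replace(SOH, "|"))  # pipe-separated for readability
```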

Post-Trade: More Openness, Less P(AI)n?

While the complexities may be less profound in post-trade processing, the conference later heard why it, too, needs further momentum and bolstered technology spend. To prove the point, the session on the subject likely featured the largest collection of speakers ever on a single FIX panel (eight), but for good reason: post-trade remains clunky and technologically irrational. As one panelist noted, tier-one banks are still employing staff simply to collate faxed confirmation documents for bilateral repurchase (repo) and securities lending agreements. And even liquid, faster-moving markets, like foreign exchange (FX) or equity swaps, are only now coming to FIX-driven automation.

"From client service to revenue creation to regulatory change, we have to reshape these challenges," suggested one bank's managing director. "With a utility approach and new technology, we wouldn't have a problem doing that, but often the tech is from 20 years ago. The problems are as old as that, but we haven't really tackled them completely." Another added that this has led to a proliferation of unintegrated microservices and forced marriages that typically still end up with a lot of conversations over the phone. Something has to give.

Plenty of emerging technologies, including distributed ledger technology (DLT), robotic process automation (RPA), AI, and open source, have dabbled in the post-trade space as a result. But the audience, when polled, gave an interesting response on which among these could prove most useful: open source and DLT came out on top at around 30 percent each, while AI and RPA trailed well behind at around 10 percent each. That hedge was repeated in earlier sessions as well, where some speakers questioned the applicability of AI methodologies to many of the operational challenges facing financial services, with one saying, "the marketing has clearly lapped the reality with that to date."

But those on this final panel said it was more about cross-pollinating these strategies, and innovating new access points for them, than choosing among them. "The larger the developer community, the faster adoption takes place," argued one panelist of open source and DLT. That sentiment was echoed by a separate panelist discussing the potential co-development of blockchain with FIX underlying it.

"In the near term, we look at FIX from a front-end point of view: all trades get done in FIX messaging, and we break that protocol up, and the message itself, as it flows on to the back end," he argued. "So why can't we push the FIX Protocol further into the middle and back office, and have one message that persists all the way through the organization? You just add pertinent information to it, rather than break it up. That sounds a lot like what blockchain is designed to do, but this is already here in front of us, just asking to be extended."
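As a rough sketch of that idea, one trade record could simply be enriched at each stage of its lifecycle rather than translated into a new format at every hand-off. The stages and the user-defined tags (6101, 6102) below are hypothetical illustrations, not part of any published FIX workflow:

```python
# A single execution record travels from the front office through allocation
# to settlement; each stage appends fields rather than re-mapping the trade.
trade = {
    "35": "8",          # MsgType = ExecutionReport
    "11": "ORD-1001",   # ClOrdID
    "55": "IBM",        # Symbol
    "54": "1",          # Side = Buy
    "32": "5000",       # LastQty
    "31": "131.20",     # LastPx
}

def enrich(message: dict, stage_fields: dict) -> dict:
    """Return the same message with stage-specific fields appended."""
    return {**message, **stage_fields}

trade = enrich(trade, {"6101": "FUND-A:3000;FUND-B:2000"})  # hypothetical allocation field
trade = enrich(trade, {"6102": "DTC", "64": "20190614"})    # SettlDate (64) is a standard tag
print(trade)
```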

Slow Burn

Conversely, the caution shown towards AI today may simply reflect an industry still slowly discovering where to apply it effectively; again, constructing space for convergence. "We connect to a lot of AI firms, and today you need to know what you're looking for," explained one speaker. "It's grown into a massive area, and you need a platform where your data can ping those AI providers and see what they can do." The idea: you don't need to bring a team of data scientists in for each bespoke firm or opportunity, and as a result you can identify use cases faster.

As with Bitcoin last year, perhaps the conference's 2020 iteration, 12 months from now, will capture a very different opinion on the subject.