As competition intensifies and market dynamics grow more complex, exchanges are embracing artificial intelligence not just to keep pace, but to redefine their role in the financial ecosystem. From streamlining surveillance and trade monitoring to enhancing liquidity and market prediction, AI is playing a pivotal role in transforming the way exchanges function.
According to David Easthope, Head of Fintech Research on the Market Structure and Technology team at Crisil Coalition Greenwich, AI/ML ranks among the top five value drivers for Financial Market Infrastructures (FMIs, which include exchanges) and banks moving on-premises applications to the cloud.

Exchanges and FMIs value the availability of cloud resources just as much as, or even more than, banks do, knowing that AWS or Google Cloud can respond 24/7 and deliver speed to market for new applications and services, he said.
“We see exchanges using AI/ML for both internal workloads as well as externally focused services (market data). First and foremost, we see exchanges adopting AI for market data services,” he noted.

“When we looked at this a few years ago, we saw that 50% of exchanges and trading systems were offering data services powered by AI/ML. That figure is obviously higher now,” he told Traders Magazine.
According to Easthope, the next step on the exchange/FMI roadmap was AI-powered trade execution and trading analytics: 42% of exchanges and trading systems intended to offer such services in the 2022-2023 time frame. “We have, however, seen less uptake on the trading/execution side so far,” he said.
“Now, we are also seeing exchanges using AI/ML in the cloud today, with 28% of new AI/ML tooling and infrastructure investments focusing on faster analytics and risk reviews and 27% on data quality maintenance,” he continued.
“So, better speed and better data quality for analytics and risk reviews by exchanges,” he said.
Where exchanges are seeing AI results
To understand how these developments are playing out in practice, Traders Magazine spoke with experts at Nasdaq and Cboe Global Markets about how they are adopting and operationalizing AI.
Nasdaq, for example, is integrating AI into its technology infrastructure with a long-term approach that prioritizes proper governance, security, and oversight, according to Edward Probst, Senior Vice President and Head of Regulatory Technology at Nasdaq.
“Over the past decade, we intentionally invested in data quality and cloud capabilities to unlock AI’s potential for future applications,” he said.
Probst said Nasdaq’s surveillance business uses AI-powered systems that have reduced investigation times for suspected market manipulation and insider dealing cases, while improving overall investigation outcomes.
“For regulatory reporting, we have leveraged AI to enable us to track and update regulatory requirements across the globe,” he said.
Probst argued that the primary challenges of today’s regulatory environment stem from its complexity: ensuring AI explainability meets supervisory expectations, maintaining audit trails across globally divergent regulatory frameworks, and addressing regulatory hesitations around maturity, reliability, and security.
“We’re continuing to address these through rigorous governance committees, extensive model validation frameworks, and ongoing engagement with regulatory authorities to ensure our AI implementations support the integrity and resilience of the financial system,” he said.
Probst described Nasdaq’s approach to AI adoption as one that balances innovation with strict oversight. “When we think about regulation, we use the term smart regulation,” he said, referring to the kind of regulatory approach that supports innovation without compromising the integrity of the financial system.

According to Probst, Nasdaq has built a comprehensive governance framework to guide the ethical and responsible use of AI across its products and operations. This includes a centralized governance structure, detailed internal policies, and a multi-disciplinary model that brings together legal, risk, technology, and information security teams. He noted that Nasdaq’s practices align with the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF), which is embedded through preventative and detective controls applied organization-wide.
Probst emphasized that AI outputs must be explainable and well-documented to ensure transparency in decision-making. Data used in AI systems is tightly governed, he said, with usage restricted to intended purposes and always in compliance with relevant regulations. “Security is embedded as a foundational element in all AI system designs,” he added, explaining that Nasdaq applies a risk-based approach to independently review AI projects. These reviews, conducted by diverse, cross-functional teams, are designed to ensure safe, responsible deployment and ongoing alignment with regulatory and internal standards.
Probst explained that fostering a culture of AI fluency is a strategic priority at Nasdaq. “We’ve made AI education and governance central to our organizational culture,” he said, underscoring the belief that modernization isn’t just about upgrading technology—it also requires empowering people. To that end, Nasdaq has launched internal platforms that support high-code, low-code, and no-code development, allowing employees across functions to experiment with and integrate AI into their day-to-day work. This democratization of AI tools, he noted, ensures that innovation isn’t siloed within technical teams but accessible to the broader organization.
Nasdaq’s internal AI efforts are also guided by a clear governance model. Probst described how they categorize projects into two streams: AI On-the-Business, which targets operational efficiencies, and AI In-the-Product, which enhances customer-facing compliance and surveillance solutions. “Each category is managed with the same level of oversight and accountability,” he said, adding that this framework reflects a shift in mindset: from asking whether or how to adopt AI to deciding what solutions to build and when to deploy them.
When it comes to measurable outcomes, Probst pointed to Nasdaq’s AI-driven surveillance tools as a standout success. These systems have notably reduced false positives, allowing compliance teams to concentrate on higher-risk behaviors. “Our AI tools automate the collection and summarization of unstructured data, significantly cutting down the time analysts spend on investigations,” he said.
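To make the false-positive point concrete, here is a minimal sketch of the general technique: an unsupervised anomaly-detection model scores trading activity so analysts review only the highest-risk cases rather than every threshold breach. This is an illustration of the approach, not Nasdaq’s actual system; the features, data, and model choice are hypothetical.

```python
# Illustrative sketch only -- not Nasdaq's surveillance system.
# An anomaly model ranks accounts for review so analysts focus on
# the riskiest cases rather than on every rule-based alert.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-account features: order-to-trade ratio, cancel rate,
# price impact, and volume relative to the account's own baseline.
typical = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
unusual = rng.normal(loc=4.0, scale=1.0, size=(10, 4))
features = np.vstack([typical, unusual])

# Fit an isolation forest; contamination sets the expected alert rate.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# Lower decision_function scores mean more anomalous. Surface only the
# top-ranked accounts instead of flagging every statistical outlier.
scores = model.decision_function(features)
top_alerts = np.argsort(scores)[:10]
print("Accounts queued for analyst review:", top_alerts)
```

Ranking alerts this way is one common pattern behind “fewer false positives”: the model compresses many noisy signals into a single risk score that determines review order.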
Looking ahead, Probst said Nasdaq is “focused on implementing advanced capabilities within our suite of mission-critical platforms,” pointing to areas such as trading, post-trade, surveillance, risk management, fraud detection, and regulatory reporting. He described AI as having “truly transformative potential” and emphasized that its impact is expected to extend across the full spectrum of the financial infrastructure.
Organic adoption and cultural shift
Another prominent player, Cboe Global Markets, didn’t adopt artificial intelligence overnight. The shift was gradual, driven not by executive mandate but by curiosity and experimentation, according to Hunter Treschl, head of the firm’s AI Center of Excellence.
“Initial efforts focused on small proof-of-concept projects — early-stage tools built to test what was possible. But as usage and interest grew internally, so did the need for a more structured and scalable approach,” he said.
By mid-2024, Cboe formally launched its internal AI Center of Excellence — not just as a hub for development, but as a company-wide resource aimed at making AI both accessible and usable. “It’s not just about building the tech. It’s also about teaching people how to use it,” Treschl said.
That dual focus — on infrastructure and education — has shaped how Cboe deploys AI across the organization. Rather than pushing adoption through top-down directives, the company relies heavily on what Treschl calls “organic adoption” at the team level. “The most effective thing we’ve seen is when one person gets excited about AI and brings it to their peers,” he said. “That becomes a catalyst. They show others how to use it, and it spreads from there.”

To support that model, Cboe has built out programs like AI Champions, which identifies and empowers employees from across departments to serve as internal advocates. These champions receive early access to tools and hands-on support from Treschl’s team, allowing them to tailor AI solutions to the specific needs of their teams. Complementing that effort is the AI Ideation Olympics, a cross-company hackathon where employees pitch and prototype use cases. The winning solutions don’t just get applause — they get implemented. “We went out and built last year’s winning idea,” Treschl said. “It’s now powering a suite of AI agents.”
Adoption patterns vary by function, but early traction has come from practical, time-saving use cases — summarizing lengthy regulatory documents, searching across proprietary data, and automating repetitive tasks. “That’s what I think of as Level One AI adoption,” said Treschl. “You’re using it to retrieve or condense information.” But now, he added, some teams are progressing to what he calls Level Two: using AI not just to assist, but to execute full tasks end to end — from sourcing insights to drafting finished reports.
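As a rough illustration of what Treschl calls Level One adoption, the sketch below ranks internal documents against a user’s question before an assistant condenses the best match. It is a generic retrieval example over assumed placeholder documents, not Cboe’s internal assistant.

```python
# Illustrative "Level One" sketch -- retrieve, then condense.
# Not Cboe's internal assistant; documents and query are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Rule filing summary: amendments to options position limits.",
    "Incident report: matching engine latency at the market open.",
    "Quarterly compliance review of market maker obligations.",
]
query = "changes to options position limits"

# Vectorize the document set and the query into the same TF-IDF space.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity; an assistant would then
# summarize the top match for the user.
similarities = cosine_similarity(query_vector, doc_vectors)[0]
best = int(similarities.argmax())
print(f"Top match ({similarities[best]:.2f}): {documents[best]}")
```

Level Two, in Treschl’s framing, chains steps like this together so the system sources the material and drafts the finished report end to end.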
Security and accountability remain core to Cboe’s approach. While many firms use external AI tools, Cboe built its own internal assistant, integrated with company data and governed by strict privacy standards. “We direct people away from public tools,” Treschl said. “We want them using our internal models, where we control the data and the context.” Still, Treschl is quick to point out that AI doesn’t remove responsibility. “If you use the tool, you’re still accountable for the output,” he said. “That’s been a cornerstone for us from the beginning.”
As the technology matures, the goal is clear: broader use, deeper integration, and smarter applications, he said. “We want more people using more AI where it makes sense,” Treschl said. “This isn’t about checking a box. It’s about giving people tools that actually change how they work.”
Insights on governance and data protection
As AI adoption deepens, questions of governance, accountability, and data protection are becoming central to how exchanges approach implementation. Easthope sees AI and machine learning as integral to how exchanges will evolve in the coming years. “We see AI/ML as part of the toolkit to build new, innovative products and services, including data services,” he said, pointing to partnerships like Google Cloud with CME and Nasdaq with AWS.
In these collaborations, cloud providers not only offer infrastructure but also bundle in AI and ML capabilities, enabling exchanges to experiment with and deploy advanced technologies more efficiently, he said.
When it comes to governance, Easthope noted that data protection remains a critical consideration—particularly in the context of public cloud adoption—but perhaps not to the extent some might expect. “A number of exchanges describe the cloud providers primarily as a toolkit, with institutions fully accepting that data protection is their job, not the cloud provider’s,” he explained. While issues of risk and compliance are certainly on the radar, he observed that they have not significantly constrained AI/ML strategies to date—largely because many of these efforts are still in early stages at financial market infrastructures.

