Travis Schwab is CEO of Eventus.
What were the key theme(s) for your business in 2025?

In 2025, our business was shaped by meaningful shifts in how markets operate and how firms supervise them. The rise of fractional trading continued to test traditional surveillance models, bringing new levels of order precision and more internalized order flow. Growing interest in 24/5 equities trading forced firms to think differently about liquidity, thresholds and risk after hours. Prediction markets gained real traction, introducing novel questions around event-based trading, information timing and outcome integrity. And across all of this, firms accelerated their adoption of AI – not merely as an experiment, but as a practical way to handle complexity, improve investigations and strengthen governance. Taken together, these developments signaled a market environment that is testing the limits of legacy surveillance approaches – and a growing need for frameworks that are adaptable and built to withstand change.
What was the highlight of 2025?
For us, the highlight of 2025 was the launch of Frank AI, a deterministic, conversational interface available via our Validus platform. We built Frank in response to a longstanding constraint in trade surveillance: the gap between the volume and complexity of data firms collect and the practical ability of analysts to interrogate it efficiently. As markets have become more fragmented, more cross-asset and more continuous, investigations increasingly depend on stitching together activity across venues, products and timeframes. Frank was introduced to reduce friction in that process by enabling users to ask direct questions about their surveillance data in plain English, without relying on manual analysis or extended back-and-forth with technical teams.
One of Frank’s unique qualities is its deterministic nature. Rather than generating narrative answers or probabilistic interpretations, Frank translates questions into reproducible queries that run against live data inside the Validus platform. The logic is visible, the results can be recreated exactly and data never leaves the client environment. That design reflects the realities of regulated surveillance, where auditability, explainability and control matter as much as speed. In that sense, Frank was less about introducing automation and more about improving access to information while preserving existing governance standards.
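To make the idea of deterministic query translation concrete, here is a minimal illustrative sketch. This is not Eventus's implementation – the names (`QUERY_TEMPLATES`, `translate`, the `alerts_by_symbol` intent) are hypothetical – but it shows the general pattern: a question, once parsed to an intent and parameters, maps to a fixed, parameterized query rather than free-form generated text, so the same input always produces the same auditable output.

```python
# Illustrative sketch only (hypothetical names, not Eventus's code):
# a toy deterministic question-to-query translation layer.

QUERY_TEMPLATES = {
    # intent -> parameterized SQL; the logic is visible, not generated text
    "alerts_by_symbol": (
        "SELECT alert_id, ts, score FROM alerts "
        "WHERE symbol = :symbol AND ts BETWEEN :start AND :end "
        "ORDER BY ts"
    ),
}

def translate(intent: str, params: dict) -> tuple[str, dict]:
    """Map a parsed question to a reproducible (query, params) pair.

    Unknown intents raise KeyError rather than guessing, which is what
    keeps the behavior deterministic and auditable.
    """
    sql = QUERY_TEMPLATES[intent]            # fixed template, no free-form generation
    return sql, dict(sorted(params.items()))  # canonical key order for audit logs

# The same question yields the same query regardless of how it was phrased:
q1 = translate("alerts_by_symbol",
               {"symbol": "XYZ", "start": "2025-01-01", "end": "2025-01-31"})
q2 = translate("alerts_by_symbol",
               {"end": "2025-01-31", "symbol": "XYZ", "start": "2025-01-01"})
assert q1 == q2
```

The design trade-off this illustrates: a template-based translator can answer fewer question shapes than a generative model, but every answer it does give can be recreated exactly and inspected line by line – the property regulated surveillance teams care about.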
Frank has had a significant impact on our clients’ day-to-day investigative work. Analysts can now move more quickly from an alert to context – reconstructing timelines, comparing behavior across asset classes or validating whether activity was truly anomalous – freeing them to focus on the surveillance tasks that only human judgment can resolve.
What are your expectations for 2026?
In 2026, prediction markets will move from an emerging category to a universally recognized priority. While retail is driving much of the momentum today, these markets will increasingly enter the mainstream of institutional finance. As volumes grow and regulated venues proliferate, firms will confront challenges unique to event-based trading – including defining insider activity, validating oracles, integrating diverse data sources and identifying coordinated behavior across platforms. Surveillance models will need to adapt quickly, and operators that invest early in purpose-built frameworks will be best positioned to shape and scale this new asset class.
At the same time, 2026 will likely see broader and more practical adoption of AI across market infrastructure, particularly in support functions like surveillance, risk management and operations. Rather than fully autonomous systems, firms are increasingly experimenting with AI as an enabling layer – from agent-based tools that assist with investigation, calibration and quality assurance, to early applications that help manage the complexity introduced by tokenization and new market structures. As these experiments expand, the focus will remain on explainability, data governance and human oversight, ensuring AI is used to extend institutional capabilities without compromising control or accountability.

