As generative AI (GenAI) begins reshaping the financial services landscape, firms are both excited by its potential and cautious about its risks. The technology promises gains in productivity, personalization, and client service, but its complexity presents challenges that many existing governance and risk management systems are not yet prepared to handle.

“Governance of generative AI is a fast-evolving topic, mirroring the fast pace of technological innovation itself,” said Sebastian Gehrmann, Head of Responsible AI at Bloomberg. “Today, many financial institutions are, understandably, taking a measured approach in the adoption and usage of GenAI solutions because the risks are still being fully explored and existing governance constructs are often not fit for purpose.”
Unlike traditional AI models, which are largely predictive and bounded by structured data, GenAI systems generate outputs that are novel and dynamic—ranging from synthetic text and code to new financial insights. These capabilities introduce a distinct set of governance challenges.
“Designing risk management and governance for generative AI is inherently challenging, primarily because of the technical and subject matter expertise required to be effective,” Gehrmann explained.
Bloomberg’s research advocates for evaluating generative AI systems holistically, focusing on the inputs, outputs, and use case. “While implementation and modeling approaches will vary, it is important to have established engineering practices, guardrails, and governance processes in place to ensure solutions are robust, resilient, reliable, and secure.”
As firms push toward innovation, the pressure to adopt GenAI quickly is high, but so is the need to maintain transparency, accountability, and auditability—especially in a highly regulated industry. “As generative AI and large language models (LLMs) start to be used in solutions focused on addressing pressing business problems, the industry should consider the viewpoints of stakeholders with different backgrounds,” Gehrmann said.
These include subject matter experts, legal and compliance professionals, client representatives, engineers, and data scientists. This kind of collaboration ensures GenAI applications are not only technically effective but also legally and ethically sound.
Gehrmann emphasized that responsible AI should be seen as an enabler, not a barrier, to innovation. “At Bloomberg, we are used to building systems with auditing, transparency, and accountability in mind,” he said. “Using generative AI does not change these practices.” Tools such as automated data lineage, version-controlled model registries, and continuous deployment pipelines with embedded validation are critical to maintaining trust in AI-powered systems. A notable example from Bloomberg’s Responsible AI work is its practice of transparent attribution, which allows clients to trace generated responses back to original source documents. “This is a clear way to create accountability without stifling progress,” he added.
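The idea of transparent attribution can be illustrated with a minimal sketch: each segment of a generated answer is linked back to the source documents that support it. The names below (`SourceDoc`, `AttributedSegment`, `attribute_answer`) are illustrative only, not Bloomberg's actual implementation, and simple keyword overlap stands in for the provenance signals a real retrieval pipeline would provide.

```python
# Sketch of transparent attribution: link each answer segment back to
# the corpus documents that support it. Keyword overlap is a stand-in
# for real provenance tracking (e.g., retrieval spans or embeddings).
from dataclasses import dataclass


@dataclass
class SourceDoc:
    doc_id: str
    text: str


@dataclass
class AttributedSegment:
    text: str
    source_ids: list  # IDs of documents supporting this segment


def attribute_answer(segments, corpus):
    """Map each answer segment to documents sharing any of its terms."""
    attributed = []
    for seg in segments:
        seg_terms = set(seg.lower().split())
        supporting = [
            doc.doc_id
            for doc in corpus
            if seg_terms & set(doc.text.lower().split())
        ]
        attributed.append(AttributedSegment(text=seg, source_ids=supporting))
    return attributed


corpus = [
    SourceDoc("10-K-2024", "revenue grew 12 percent year over year"),
    SourceDoc("earnings-call", "margins compressed due to input costs"),
]
answer = ["Revenue grew 12 percent.", "Margins compressed."]
for seg in attribute_answer(answer, corpus):
    print(seg.text, "->", seg.source_ids)
```

The point of the design is auditability: every claim the system surfaces carries machine-readable pointers to its evidence, so a reviewer can verify outputs without re-running the model.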
In high-stakes environments such as finance, the call for standardization is growing. Gehrmann sees value in common principles but cautions against inflexible regulation. “Industry-wide standards and best practices for guardrails and model evaluations are critical to the responsible adoption of GenAI systems,” he said. “But guardrails, governance processes, and risk management frameworks need to be tailored to each organization.” Bloomberg’s research illustrates how different use cases—such as financial advice or client communications—may require different levels of caution or transparency. The sociotechnical context in which GenAI is used matters greatly.
He further noted that regulators should look to the adaptability of past principles-based regulation. “Fraud is illegal; it does not somehow become more illegal because AI was involved,” Gehrmann pointed out. “Regulators would be well served to look at existing rules and regulations and continue to evaluate whether emerging technologies present novel risks that fall outside the scope of the elasticity of existing law.”

For firms just beginning their GenAI journey, Shefaet Rahman, Head of AI Services in Bloomberg’s AI Engineering group, offered practical guidance. “The first step should be to pick two or three clearly defined, high-impact use cases where you can prove the technology’s value quickly,” he said. Measurable outcomes—like reducing manual review time or improving response coverage—help build internal momentum and credibility.
Rahman stressed the importance of involving cross-functional teams early. “Co-develop with subject matter experts from Day 1,” he advised. “Embed engineers, UX researchers, product managers, and other stakeholders into decisions about data, models, and output evaluation.” He encourages treating GenAI projects like agile software development—working in short sprints, gathering stakeholder feedback, and adjusting course as needed.
Launching with a human-in-the-loop approach is key to ensuring accuracy and trust. Over time, as confidence grows, firms can increase model autonomy. “As the project matures, pair this with strong AI operations practices—like continuous evaluation, monitoring, and error analysis,” Rahman added. Automating the model pipeline—from data validation to retraining and deployment—helps ensure that updates are safe, reliable, and governed. “LLMs-as-judges can be great tools for this,” he noted, referring to the use of language models to assess AI outputs.
Ultimately, responsible adoption of GenAI is a balancing act between technological innovation and governance maturity. “With the right structures in place, generative AI can help financial institutions solve real problems faster, more efficiently, and more transparently,” Gehrmann said. “The key is to start small, scale thoughtfully, and stay grounded in the regulatory and human realities that define financial services,” he concluded.

