TECH TUESDAY: Tailored and Principled: Striking a Balance in AI Regulation

TECH TUESDAY is a weekly content series covering all aspects of capital markets technology. TECH TUESDAY is produced in collaboration with Nasdaq.

Since the emergence of ChatGPT last year, buzz around the power and potential of artificial intelligence (AI) tools has reached a fever pitch. After several months of experimentation, it has become strikingly clear that 2024 will be the year of meaningful implementation and execution of AI across all industries. The winners and losers of the AI race will be decided in the coming year, as innovative companies look to integrate AI in ways that fully harness the technology’s capabilities. As AI adoption becomes more widespread each day, questions are rightfully emerging about how governments should evaluate the risks associated with its applications and how to move forward with regulation. Every time a novel technology appears on the horizon, a chorus of voices declares that “there ought to be a law.” The instinct to regulate new and emerging technologies is often overpowering, and with AI there are very real and credible risks that need to be considered. However, there is a critical balance to strike. Early regulation can be uninformed, prone to favoritism, nationalistic, and inflexible, and it may not provide the support that technological development needs.

A look back at our history provides valuable lessons. Consider the airline industry: heavily regulated early on, resulting in heightened safety measures that instilled critical public confidence in a new technology that would change the world. In this case, early regulation worked well to increase passenger safety. However, that rigid regulatory framework metastasized into pricing, routes, and even scheduling, ultimately giving rise to monopolies, inflated fares, and exclusivity that took decades to dismantle.

In contrast, legislators embraced a “wait and see” approach at the advent of the internet, perhaps because physical safety was less of a concern at the outset and the technology was foreign to many lawmakers. This approach fostered a vibrant industry in which new technologies could evolve and be embraced. On the other side of the coin, however, one could argue that some much-needed guardrails took too long to develop and implement.

These examples illustrate the importance of striking a balance. We need to ensure that our rules and regulations protect everyday citizens, but do not stifle the vast possibilities that AI offers. To do so, we believe there are three key principles that need to be top of mind as AI regulation is developed to protect the systems that power our economy and society.

1. Vibrant technologies usually thrive when legislation does not rush to regulate. Early regulation can inadvertently pick favorites, in extreme cases propping up monopolies that can take years to unwind and hampering the technological advancements that competition drives.

In many cases, those who act first get it wrong. Existing regulation can often be leveraged to provide initial guardrails while keeping an open mind toward potential future use cases.

2. Less prescriptive regulation tends to yield better results. Regulation is often a burden on small- and medium-sized businesses. Highly prescriptive regulation can freeze out smaller, innovative operators and disincentivize larger companies from reinvesting in technological advancement. Less prescriptive regulation also allows more flexibility as technology advances and evolves over time.

It can be tempting to enact broad, overarching frameworks out of concern for safety. But it is important to recognize that technology is neither entirely good nor entirely bad; its impact depends on the use case. The way AI is used to identify potential instances of fraud might need to be subject to different standards than the way AI is used to power autonomous vehicles. By developing narrowly tailored rules that focus on reducing the tangible, negative impacts of specific use cases, we can better protect consumers without hampering innovative applications that can be a transformational force for good.

3. Industry involvement is critical, and cross-border cooperation should be encouraged. Collaboration between public and private entities in policymaking is essential to ensuring regulation keeps pace with technological advancement. Industry collaboration also gives regulators better data to inform policy, helps tailor regulation to specific use cases, and enables regulators to create a framework that encourages creativity and growth.

Further, governments around the world are already moving to regulate AI, with varying levels of rigidity in proposed legislation. However, the impact of AI will not be confined to specific geographies. A fragmented patchwork of conflicting requirements will only increase the risk of over-regulation and widen compliance gaps. By harmonizing best practices for technology regulation across international rulemaking bodies, we can better balance the adoption of AI with the safety and privacy of global citizens.

There are legitimate concerns around privacy, national security, intellectual property protection, and bias, but we must not be short-sighted in our rulemaking efforts. These risks are precisely why we must take the right approach to responsibly regulating AI: creating frameworks that keep safety top of mind but do not overregulate to the point of choking off innovation.

John Zecca is Global Chief Legal, Risk, & Regulatory Officer at Nasdaq.

Creating tomorrow’s markets today. Find out more about Nasdaq’s offerings to drive your business forward.