Bias in AI: Scary Monster or Tamable Beast?

In this first blog in the Views on 2020 series from Refinitiv, Debra Walton explores the factors contributing to bias in AI and analyzes its impact on the financial services industry. What solutions can be employed to ensure that the data, and the outputs built on it, are free from such bias?


  1. As the world focuses on the drawbacks of AI in mainstream society, its impact on the financial services industry could be just as serious.
  2. Bias in AI can create long-term problems, as seen in events such as the 2016 flash crash in sterling. However, Debra Walton believes that, on balance, the advantages of AI outweigh the disadvantages.
  3. Solutions to bias in AI can be achieved through creating transparent and understandable models; managing data quality; and monitoring outputs and course correcting.

Not since Frankenstein’s monster hopped off the scientist’s bench in Mary Wollstonecraft Shelley’s gothic novel have so many people been so afraid of, and so vocal about, the confluence of man and machine.

In the financial community, leaving decisions to machines highlights the issue of bias in artificial intelligence in a sector that’s worked hard to regain trust since 2008.

In my speech to the World Financial Information Conference in October, I acknowledged that bias is indeed problematic, but that, on balance, there is far more to be gained from advances in AI. As such, I see it as far less of a “scary monster” than some commentators suggest.

Challenges in AI

The challenges facing the financial industry today are the same the world over: How to take cost out of operations and be more productive; how to keep compliant with global regulation; and how to innovate fast enough to meet customers’ demands.

Data, artificial intelligence and machine learning are central to solving many of these challenges. However, humans design the algorithms and the mathematical models, and it is human choices that create the problems in the first place.

As Cathy O’Neil, mathematician and author of Weapons of Math Destruction, said: “I know how models are built, because I build them myself, so I know that I’m embedding my values into every single algorithm and I am projecting my agenda onto those algorithms.”

While the media focuses on voice-activated technology failing to accept commands from non-white, Ivy League-educated men, or payment services accused of offering a wife less credit than her husband despite her exemplary credit scores, little is said of the potential for crashes and failures in financial propositions such as trading, wealth, investing and risk, where robo-advising and automated trading are now commonplace.

But an epic crash or flash crash can have impacts on society for years to come, from pension shortfalls to poor financing deals at a state level.

One such flash crash was observed in October 2016, in the aftermath of the UK’s decision to leave the EU, when, during Asian trading hours, the pound plunged by 6 percent.

The blame fell on computer algorithms. The Guardian reported: “Kathleen Brooks, the research director at the financial betting firm City Index, said: ‘Apparently, it was a rogue algorithm that triggered the sell-off… These days some algos trade on the back of news sites, and even what is trending on social media sites such as Twitter…’”

The trigger is thought to have been a report in the Financial Times quoting the French president, Francois Hollande, as saying that Britain would have to suffer for the Brexit vote in order to ensure EU unity.

One too many negative inputs for the algorithm, and the automated response was to ‘sell sterling’.

Bias in AI

Alongside human bias, two other potential bias ‘banana skins’ exist:

  • Data bias — The use of inaccurate data to support the development or training of a decision-making system.
  • Model bias — Bias that emerges from the limitations of a system’s computational power, its design, or its misuse.

Data scientists might also point to survivorship bias, selection bias and recency bias.

At my own organization, we have deployed AI for more than eight years, and we know that poorly constructed training data that fails to include enough history can lead to misleading results. For example, a model trained on only 10 years of financial data may never have seen a recession.
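
To make that concrete, here is a minimal sketch of a coverage check, assuming daily data indexed by date. The recession spans shown are examples; a production check would use a maintained list such as NBER’s:

```python
import pandas as pd

# Example US recession spans (start, end); a real check would use NBER's maintained list
RECESSIONS = [("2007-12-01", "2009-06-30"), ("2001-03-01", "2001-11-30")]

def covers_a_recession(train_index: pd.DatetimeIndex) -> bool:
    """Return True if the training window overlaps at least one known recession."""
    for start, end in RECESSIONS:
        if train_index.min() <= pd.Timestamp(end) and train_index.max() >= pd.Timestamp(start):
            return True
    return False
```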

Solutions to bias in AI

All of these biases ultimately affect humans, and three things need to happen to prevent them:

  1. Create transparent and understandable models.
  2. ‘Explainability’ is key: Many systems built on neural networks (deep learning) are opaque. The financial community needs to be able to explain its models and how they reach their conclusions (a minimal example follows this list).
  3. Robustness: Models can be easily fooled and may produce unintended consequences, such as Waze’s algorithms accidentally congesting a wildfire escape route in California. Humans need to test what happens when an algorithm suggests a course of action.
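
As a concrete illustration of the explainability point, here is a minimal sketch using permutation importance, which works even when the model itself is opaque. The model and feature names are placeholders, not a description of any production system:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

def rank_features(X: np.ndarray, y: np.ndarray, feature_names: list[str]) -> dict:
    """Rank features by how much shuffling each one degrades model performance."""
    model = GradientBoostingRegressor().fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    return dict(sorted(zip(feature_names, result.importances_mean),
                       key=lambda kv: kv[1], reverse=True))
```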

Manage data quality

Data must be of sufficient quality, and accordingly, the following procedures should be followed:

  1. Ensure that only relevant data is used in the model: spurious data can lead to spurious results.
  2. Establish a data review panel in every financial institution and bank.
  3. Use reliable statistical methods to ensure the data is correct (a sketch of such checks follows this list).
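
To illustrate the third point, here is a minimal sketch of the kind of automated checks a data review panel might run on a price series before it feeds a model. The column name and thresholds are assumptions:

```python
import pandas as pd

def review_data(df: pd.DataFrame, price_col: str = "close") -> dict:
    """Basic quality checks on a price series before it feeds a model."""
    prices = df[price_col]
    returns = prices.pct_change()
    z = (returns - returns.mean()) / returns.std()
    return {
        "missing_values": int(prices.isna().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "stale_prices": int((prices.diff() == 0).sum()),  # repeated values may signal a stale feed
        "extreme_returns": int((z.abs() > 4).sum()),      # candidates for manual review
    }
```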

Monitor outputs and correct

To monitor and scrutinize the output provided by a model, three steps need to be taken:

  1. Test the model continually even after it is deployed.
  2. Demand repeatability: It’s sometimes difficult to repeat the output of a model, especially if input data has been adjusted (a sketch of such a check follows this list).
  3. Monitor fairness.
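
For the repeatability point, a minimal sketch: train the same model twice on identical inputs with a fixed seed and compare the outputs. The model here is a placeholder, not any particular production system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def is_repeatable(X: np.ndarray, y: np.ndarray, seed: int = 42) -> bool:
    """Train twice on identical data; any drift in predictions breaks repeatability."""
    runs = []
    for _ in range(2):
        model = RandomForestClassifier(random_state=seed)  # fixed seed pins the randomness
        model.fit(X, y)
        runs.append(model.predict(X))
    return bool(np.array_equal(runs[0], runs[1]))
```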

Increasingly, humans find it harder to identify when something unusual is happening in the market, and then to explain what was behind the event. AI checking up on AI is part of the solution.

The most typical event is a trader, analyst, portfolio manager or banker finding an unexpected movement in the share price of a stock they trade, research, own or bank.

By definition, an unusual movement indicates that new information is being priced in at that point in time. Being able to identify and then explain this event to internal or external constituencies is critical.

Mosaic from Refinitiv

Refinitiv’s algorithmic mechanism, Mosaic, uses statistical analysis to identify such unusual movements, factoring in aspects such as time of day, historical volatility, and the overall market.
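
Mosaic’s internals aren’t spelled out here, but the general technique can be sketched: score each day’s market-adjusted return against the stock’s own recent volatility. The window length and beta adjustment below are illustrative assumptions:

```python
import pandas as pd

def unusual_move_score(stock: pd.Series, market: pd.Series, window: int = 60) -> pd.Series:
    """Z-score of a stock's market-adjusted daily return against recent volatility."""
    stock_ret = stock.pct_change()
    market_ret = market.pct_change()
    # Rolling beta: how much of the stock's move the overall market explains
    beta = stock_ret.rolling(window).cov(market_ret) / market_ret.rolling(window).var()
    excess = stock_ret - beta * market_ret   # the stock-specific part of the move
    vol = excess.rolling(window).std()       # recent idiosyncratic volatility
    return excess / vol                      # |score| > 3 flags an unusual movement
```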

Once an unusual movement is detected, the system automatically scans a wide array of data that is related to this stock (sources that could explain the movement) and analyzes it for potential explanatory information.

In fact, we find that about 70 percent of significant movements can be explained by a related article in the news. We use natural language processing (NLP), as well as sentiment analysis and other machine learning, to home in on the right news item and screen out other data.

So we train our algos to look at events, trade flow, short interest and other relevant data to see if there is a potential explanation for any movement. On some occasions, multiple sources corroborate one another and help to triangulate the right narrative.
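
A heavily simplified sketch of that news-scanning step follows. The headline structure (a dict with `time`, `tickers` and `sentiment` fields) and the scoring rule are illustrative assumptions, not Mosaic’s actual pipeline:

```python
from datetime import datetime, timedelta

def candidate_explanations(headlines: list[dict], ticker: str,
                           event_time: datetime, window_hours: int = 6) -> list[dict]:
    """Rank headlines near the event that mention the stock, strongest sentiment first."""
    window = timedelta(hours=window_hours)
    hits = [h for h in headlines
            if abs(h["time"] - event_time) <= window and ticker in h["tickers"]]
    # A big price move is more likely explained by strongly positive or negative news
    return sorted(hits, key=lambda h: abs(h["sentiment"]), reverse=True)
```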

Mosaic then delivers this information in real time, or lets you replay history to see how the event played out.

It’s AI fact-checking on AI.

Removing AI bias

In conclusion, defeating the monster of bias in AI and machine learning comes down to three things: creating transparent and understandable models; managing data quality; and monitoring outputs and course correcting.

Let’s be honest, we are all human and we all have our biases. However, with more fact-checking by machines and people, and a lot of collaboration across the global financial community, perhaps bias in AI could be ‘lost in darkness and distance’ never to be seen again.

Just like Frankenstein’s monster.

Artificial intelligence and machine learning trends are transforming financial services, as revealed in our 2019 survey of global business leaders and data scientists.