Word for Word

Headland's Andresen Talks About Keeping Algos from Going Wild

Given the Knight Capital Group debacle of Aug. 1 and the subsequent discussions in Washington and within the industry regarding safeguards, Traders Magazine spoke with Matt Andresen, co-chief executive officer of market maker Headlands Technologies, to learn the steps his firm takes to prevent algorithms from going wild.

 

TM: Do you have an overarching risk management philosophy at Headlands?

Andresen: The most important thing you have to have is “disciplined paranoia.” You must have respect for the problem. That something could go wrong. That you don’t know everything. You must question your assumptions. You have to triple-check. Test that assumption. It’s part of the culture.

 

TM: When it comes to your own orders, what do you focus on?

Andresen: Are my orders sent within my risk checks? Am I buying more than I can afford to buy? Am I selling more than I can afford to sell? Am I sending too many orders? Am I sending too large an order? You have to check all of this. And, even if I'm within those risk checks, am I doing it too often versus my historical pattern?
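The checks Andresen lists can be sketched as a simple pre-trade gate. This is a minimal illustration, not Headlands' actual system: the class names, limit values, and the one-second rate window are all hypothetical assumptions chosen for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class RiskLimits:
    max_position: int        # most shares we can be long or short
    max_order_size: int      # largest single order allowed
    max_orders_per_sec: int  # message-rate ceiling

@dataclass
class PreTradeRiskChecker:
    limits: RiskLimits
    position: int = 0
    _order_times: list = field(default_factory=list)

    def check(self, side, qty, now=None):
        """Return True only if the order passes every pre-trade check."""
        now = time.time() if now is None else now
        # Am I sending too large an order?
        if qty > self.limits.max_order_size:
            return False
        # Am I buying/selling more than I can afford?
        signed = qty if side == "buy" else -qty
        if abs(self.position + signed) > self.limits.max_position:
            return False
        # Am I sending too many orders? (count within the last second)
        self._order_times = [t for t in self._order_times if now - t < 1.0]
        if len(self._order_times) >= self.limits.max_orders_per_sec:
            return False
        self._order_times.append(now)
        self.position += signed
        return True
```

A real system would also compare current activity against historical baselines, as Andresen notes; that comparison is typically done by separate exception monitoring rather than inline in the order path.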

 

TM: Is all this monitoring automated?

Andresen: Some of this must be automated. So when some of that stuff happens, the system just shuts itself off. Some of the things must be exception-monitored. That's where the system throws up an error message with an audible alarm, essentially "HEY! PAY ATTENTION!" Also, you must have proactive monitoring where your traders, your ops people and your compliance people have access to the right information and can plow through that.
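The split between events that hard-stop the machine and events that only alert a human can be sketched as below. The event names and categories are hypothetical, invented for illustration; the point is only the two distinct automated responses Andresen describes.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitor")

# Hypothetical event categories: some faults are severe enough that the
# machine shuts itself off; others raise a loud alert for humans to review.
HARD_STOP_EVENTS = {"exchange_disconnect", "risk_limit_breach"}
ALERT_EVENTS = {"fill_rate_anomaly", "latency_spike"}

class TradingSystem:
    def __init__(self):
        self.trading_enabled = True
        self.alerts = []  # queue reviewed by traders, ops and compliance

    def handle_event(self, event):
        if event in HARD_STOP_EVENTS:
            # The system just shuts itself off -- no human in the loop.
            self.trading_enabled = False
            log.error("HARD STOP: %s -- trading disabled", event)
        elif event in ALERT_EVENTS:
            # Exception monitoring: surface it loudly, let a human act.
            self.alerts.append(event)
            log.warning("EXCEPTION: %s -- PAY ATTENTION", event)
```

The third layer Andresen mentions, proactive monitoring, is the humans independently querying the same data rather than waiting for either automated path to fire.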

 

TM: What is exception reporting?

Andresen: It's an automated report that is reviewed by human beings. Contrast that with an automated check that would say, "Hey, I just got a disconnect from the exchange, stop trading." A machine would do that automatically.

 

TM: How much of the monitoring is automated and how much relies on humans checking?

Andresen: It’s a combination of automated checks, exception reporting and also human monitoring. I’d say that’s true for almost any firm.

 

TM: You used an airplane analogy.

Andresen: Right. So if my left engine is on fire, the plane knows to automatically shut that off. Compare that to the system saying, "Warning, warning, fire," giving an exception warning to the pilot, who then must take action. And the third area is the pilot looking out the window at the wings on a regular basis. Those are the three different ways. With any complex system, you must have those things overlapping at all times. So, the computers are monitoring. The computers are warning the pilots. The pilots are not trusting the computer and doing their own checks.

 

(c) 2012 Traders Magazine and SourceMedia, Inc. All Rights Reserved.

http://www.tradersmagazine.com http://www.sourcemedia.com/