Crash-Proofing the Markets

Many industry execs say risk controls developed by the futures exchanges to protect against trading glitches shouldn't be formalized into regulations. Here's why

At a meeting of the Commodity Futures Trading Commission’s Technical Advisory Committee in September, Commissioner Scott O’Malia posed a question: Should the risk controls developed by the leading futures exchanges be “federalized” into regulations? Industry veteran Cliff Lewis spoke for many, if not most, market participants when he responded, “Don’t do it!”

O’Malia’s question and Lewis’ answer were part of a larger discussion on the CFTC’s “Concept Release on Risk Controls and System Safeguards for Automated Trading Environments.” The concept release is the commission’s proposal to deal with what seems like a deluge of system-generated market interruptions that began with the flash crash of May 2010. Since then, there have been at least 25 major market disruptions, including algos run amok, IPO crashes and other trade-halting exchange problems.

While some point to high-frequency trading, exchange technology or infrastructure imposed by rules like Reg NMS as the culprits, it’s impossible to single out one specific cause.

Like the Securities and Exchange Commission’s proposed Reg SCI, the CFTC concept release focuses on preventing systemic failures, as opposed to implementing new regulations like those around the globe that look to curtail specific trading behaviors. One example is the recent announcement by European Union policymakers of a lower limit on tick sizes specifically targeting HFT. This goes beyond previous rules that dealt with controlling the effects of wild price swings through price bands and circuit breakers.

Technology has certainly enabled some behaviors that would previously have been frowned upon, to say the least. Placing orders with no intention of execution comes to mind. Imagine if a human trader did that in the trading pit.

That said, technology has mostly been an amplifier, or accelerator, for behaviors that have been going on for decades. Anyone trading in the pit always had an information advantage over the desk traders who had an advantage over the customer. At least with co-location and direct market access, the buyside can compete when it comes to receiving and reacting to market data.

Take the “hash crash” of last summer, the result of a bogus report of explosions at the White House sent from the hacked AP Twitter account. It was no different from any other false rumor that caused the market to move: a hybrid disruption in which humans jumped in on the false news and the algos followed. It could even be argued that, thanks to the algos, the market recovered as quickly as it did.

Speaking of Twitter, the New York Stock Exchange’s marketing of its pre-IPO system testing gets to the heart of how to prevent these system failures. More than 20 years ago, when I was a code jockey on a quant-driven prop desk, a trader would hand me some new vol surface model to code up right then and there. After comparing the model’s numbers to his spreadsheet, he would bless my code and it became production quality.

A head of trading systems called this “the man in a can” approach to software development because there were no business analysts or quality assurance people involved. Back then, it was possible to get away with this because there was a trader who knew what the numbers should look like, and it was he who was placing orders based on those numbers over the phone. He was taking the risk, and it would be his job if he got it wrong.

Today’s systems require the same discipline in development and testing that any other complex engineering project would demand. That NYSE felt the need to advertise its testing before the Twitter IPO, along with Nasdaq and Thesys Technologies’ new Algo Test Facility, shows that the industry is starting to take testing seriously.

But even the best quality assurance practices won’t guarantee a fail-safe system. At last year’s SEC roundtable on technology and trading, Dr. Nancy Leveson, an expert on system safety engineering at MIT, pointed out that it is impossible to build a true zero-defect system. Instead, she argued, the focus should be on limiting the “mayhem” that can result when errors inevitably occur.
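Limiting that mayhem in practice means layered pre-trade controls of the kind the exchanges have already built, rather than chasing perfect code. As a rough illustration only (the class name, limits and thresholds below are hypothetical, not drawn from any exchange’s actual rulebook), a price-band check combined with a simple order-rate throttle might be sketched like this:

```python
import time
from collections import deque

class PreTradeRiskGate:
    """Hypothetical pre-trade risk controls: a price band around a
    reference price plus a rolling order-rate throttle. Illustrative
    only; real exchange controls are far more elaborate."""

    def __init__(self, band_pct=0.05, max_orders=100, window_sec=1.0):
        self.band_pct = band_pct      # reject prices > 5% from reference
        self.max_orders = max_orders  # max orders per rolling window
        self.window_sec = window_sec
        self.timestamps = deque()     # accepted-order times in the window

    def check(self, price, reference_price, now=None):
        """Return (accepted, reason). The point is containment:
        reject the bad order, don't assume the system upstream
        is defect-free."""
        now = time.monotonic() if now is None else now
        # Price band: blocks fat-finger entries and runaway-algo prices.
        if abs(price - reference_price) > self.band_pct * reference_price:
            return False, "outside price band"
        # Rate throttle: discard timestamps older than the window.
        while self.timestamps and now - self.timestamps[0] > self.window_sec:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_orders:
            return False, "order rate exceeded"
        self.timestamps.append(now)
        return True, "accepted"

gate = PreTradeRiskGate(band_pct=0.05, max_orders=3, window_sec=1.0)
print(gate.check(100.0, 100.0, now=0.0))  # within band: accepted
print(gate.check(200.0, 100.0, now=0.1))  # 100% away: rejected
```

The design choice mirrors the column’s point: the gate does not try to prevent the error from being generated, only to stop it from reaching the market and to cap how fast errors can pile up.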

Lewis’ “Don’t do it!” comment at the CFTC confab reflects the view that a heavy-handed attempt to regulate or impose behavior is more likely to obstruct the cooperation needed between regulators and the industry than to prevent the next failure. Perhaps a balanced approach will cause the glitches to crash themselves.

Robert Stowsky is senior analyst for Aite Group.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders Magazine or its staff. Traders Magazine welcomes reader feedback on this column and on all issues relevant to the institutional trading community. Please send your comments to Traderseditorial@sourcemedia.com

(c) 2013 Traders Magazine and SourceMedia, Inc. All Rights Reserved.