What Is The REAL Lesson From the NYSE Outage?

Adding microsecond precision to the SIP feed, which has no real use cases for that level of precision and no way to be accurate to that level, is a waste.

“How fast is that one?” was the obsessive fascination of my son, Ian, from the moment he could speak. (I suppose one could argue he comes by the curious gene naturally.) His first word was “car,” and by three he could name almost every car on the streets of London. It was never enough to know the make of each car; he also had to know how fast it could go. One day, he spotted a 1966 Volkswagen Bug and, like Pavlov’s dog, out came the inevitable, “How fast is that one?” I explained it was pretty slow. He turned to me and asked, “Why don’t they put a jet engine in it to make it faster?” After doing my best not to laugh, I repeated my own father’s explanation to me, at around the same age: “You see, Ian, it doesn’t matter if it’s a car or anything else, it’s all about balance. Putting a powerful engine in a flimsy car would make it quickly fall apart. I don’t know if the steering, brakes, transmission or suspension would break first, but a VW Bug couldn’t handle a huge engine.”

This conversation popped into my head after reading that the NYSE outage was “caused by a software change related to the impending SIP move to microsecond precision in reporting.” In addition to the obvious fact that such a change carries significant cost and operational risk, it is appropriate to ask, why?

Adding microsecond precision to the SIP feed,[i] which has no real use cases for that level of precision and no way to be accurate to that level, is absurd.

For starters, each SIP consolidates feeds from three different datacenters that are miles apart from each other. An order that originates in Carteret (NASDAQ, BX, or PHLX) will take roughly 200 microseconds to reach Mahwah (where the NYSE processes its SIP), due to the limits of the speed of light. Once we add the impact of clock-time mismatches between computers, it is easy to understand why almost no OMS systems, commercial order routers, or multi-exchange analytical systems have implemented microsecond precision. In fact, despite the fact that almost all matching engines use microseconds to timestamp internal records, few applications try to use microsecond precision to compare executions and orders between exchanges. Those that do mostly write their own aggregation algorithms and use that precision to research the sequence of events as they appear, to their systems, in each datacenter.
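The 200-microsecond figure is easy to sanity-check with a back-of-envelope calculation. The sketch below is illustrative only: the fiber route length and refractive index are my assumptions, not measured values for the actual Carteret-to-Mahwah path.

```python
# Back-of-envelope: one-way light propagation delay over a fiber route.
# Route length and refractive index are illustrative assumptions.

C = 299_792_458       # speed of light in vacuum, m/s
FIBER_INDEX = 1.47    # typical refractive index of single-mode fiber (assumed)

def one_way_delay_us(route_km: float) -> float:
    """One-way propagation delay over a fiber route, in microseconds."""
    return route_km * 1_000 * FIBER_INDEX / C * 1e6

# A roughly 40 km fiber route (an assumed figure for Carteret to Mahwah)
# already costs on the order of 200 microseconds, before any switching,
# serialization, or processing delay is added.
print(f"{one_way_delay_us(40):.0f} microseconds")
```

In other words, the physical transit time alone is hundreds of times larger than the precision being added to the timestamps.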

Despite quote update times that, for most stocks, are far less frequent than once per millisecond across all markets, the concept of measurement in microseconds was added to the recently approved Tick Pilot. I can only assume that is why the NYSE decided to “enhance” the SIP by expanding the timestamp field and to “upgrade” its systems to deliver that precision.

To understand the level of expense we’re talking about, consider that Y2K was the addition of two characters to a date field, and this change adds three significant digits to a time field. While I am not suggesting it will be as expensive, since the systems involved are more modern, it is not going to be cheap. Thanks to the NYSE, we also know that it carries serious operational risk. I suppose we should be thankful that it was “only” the NYSE that failed and not the SIP itself, as that would have essentially caused a market-wide shutdown. As previously noted, the SIP is a single point of failure in our market, since many systems rely upon it to validate trading.

There will be readers who will object to this reasoning. “Weisberger, don’t we need microsecond precision in the SIP to keep up with those HFT firms that threaten us?”

The answer, of course, is no.

Without going into my view of HFT firms, generally, being market makers that add liquidity to the market, the real question is whether or not adding microseconds to the SIP will allow people to use it to detect nefarious activity.

If we accept that the only way to use the SIP in this manner is to properly reconstruct the actual time sequence of events that occur in three datacenters, the answer is no. Not only is the latency between the exchanges roughly 200 microseconds on average; there is also significant variability in that latency. As a result, even with slightly more ability to sequence events, the likelihood of mistakes caused by the speed of light and the variability in networking hardware renders the value inconsequential.
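A small simulation makes the sequencing problem concrete. The jitter and clock-skew magnitudes below are assumptions chosen for illustration (the article gives only the ~200 microsecond average latency); the point is that whenever their combined uncertainty is comparable to the true gap between two events, observed timestamps will sometimes report them in the wrong order.

```python
import random

random.seed(7)

# Illustrative sketch: why microsecond timestamps from different datacenters
# cannot reliably reconstruct cross-market event order.
JITTER_US = 30      # assumed networking-hardware jitter, +/- microseconds
CLOCK_SKEW_US = 50  # assumed residual clock offset between sites, +/- microseconds

def observed_gap_us(true_gap_us: float) -> float:
    """Apparent timestamp gap between an event at site A and a later one at site B."""
    jitter = random.uniform(-JITTER_US, JITTER_US)
    skew = random.uniform(-CLOCK_SKEW_US, CLOCK_SKEW_US)
    return true_gap_us + jitter + skew

# Event B truly happened 40 microseconds after event A. Count how often
# the observed timestamps get the order backwards.
trials = 10_000
misordered = sum(observed_gap_us(40) < 0 for _ in range(trials))
print(f"order flipped in {misordered / trials:.0%} of trials")
```

Under these assumed error magnitudes, a meaningful fraction of 40-microsecond gaps come out reversed, which is exactly the kind of mistake that makes microsecond-level cross-market surveillance unreliable.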

From the point of view of measuring the effects of the Tick Pilot, microseconds have even less value. The Tick Pilot covers stocks that simply do not trade or quote that much. In these stocks, it is relatively rare to have multiple quote updates per second, which means that milliseconds are more than adequate. To put this in perspective, when calculating best execution statistics, as long as care is taken to use a pre-order timestamp, we don’t even see much difference uncovered by using millisecond precision instead of full seconds. It is certainly true that milliseconds result in less noise in the data, but the extra precision does not materially change the aggregate numbers. Considering that this has been observed even in the S&P 500 stocks, which trade more than an order of magnitude more actively than the stocks in this pilot, it shows how little microsecond precision will add.

As a result, I do not see how this change can be justified on a cost vs. benefits calculation. Of course, no one focused on this during the comment period, so it is now part of the rules. Perhaps the NYSE outage will give the SEC and the SROs a reason to reconsider both the changes to the SIP and the wisdom of requiring analysis with such precision. If they were willing to do so, it would be a major step in the right direction.

[i] The SIP is a consolidated feed that aggregates the direct data from three key datacenters housing the 12 exchanges plus the ADF and TRFs. There are actually separate SIPs for NYSE-listed, ARCA/NYSE Mkt/BATS-listed, and NASDAQ-listed issues.

David Weisberger is the managing director and head of market structure analysis of RegOne Solutions.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders or its staff. Traders welcomes reader feedback on this column and on all issues relevant to the institutional trading community. Please send your comments to Traderseditorial@sourcemedia.com.