ITG’s Domowitz Talks TCA

Ian Domowitz, a former finance professor, is considered an expert in transaction-cost analysis and quantitative trading strategies, and is among the most recognized names in the business. Domowitz is a managing director at agency broker Investment Technology Group (ITG). He is also a key contributor to Below the Waterline: Uncover Hidden Transactions Costs Throughout the Investment Process, a book ITG published last week.

Domowitz joined ITG in 2001, where he is responsible for networking as well as analytical and research products. Prior to ITG, Domowitz was a finance professor at Pennsylvania State University. Before that, he held positions at Northwestern University’s Kellogg Graduate School of Management, Columbia University, the Commodity Futures Trading Commission, the International Monetary Fund and the World Bank. He recently spoke with senior editor Nina Mehta about some of his latest research, as well as current issues that traders and portfolio managers face regarding trading costs.

Traders Magazine: So are buyside traders doing more transaction cost analysis these days?
Ian Domowitz: Yes, definitely. It’s difficult to judge the overall penetration of TCA into the buyside institutional client base, but it is as high as 98 percent for large firms and roughly 88 percent for midsize firms. When times are tough and returns are low, the cost of implementation in the market is even more important than it is in a bull market, when returns are relatively high and it looks like people are capturing a great deal of alpha.
TM: Are traders changing the way they use TCA as a result of the financial crisis and greater volatility in the market?
Domowitz: We’re seeing an evolution that doesn’t have anything to do with the financial crisis in and of itself. That evolution is not away from post-trade reporting, but post-trade TCA is becoming more like performance attribution for the trading process. Generally, though, TCA is becoming an analysis of what’s going on overall in the investment process. And the demands of that analysis are greater in terms of depth and granularity.

The big movement now is to question how you actually control transaction costs. That can’t be answered through post-trade retrospection. You must think actively about the notion of controlling costs at the level of the trading desk. What does pre-trade analysis now look like? What does real-time TCA actually represent? How should you deal with that information? If you’re going to reduce transaction costs, you must do that over the course of the order and not just look at the costs from a post-trade perspective.

TM: Is TCA becoming more important earlier in the trading process?
Domowitz: The short answer is yes, but let me back up. There are three phases to controlling transaction costs. The first phase involves what I call lessons from history. These are lessons you can learn in advance that shed color on the market you’re about to face. For instance, we’ve had a lot of demand recently for studies that require a fair amount of data–to examine things you can’t do on the fly during the trading day. This could be looking at the relative performance of algorithmic trading strategies, on average, in different conditions. Another issue is what it might mean to grab a large piece of liquidity out of a dark pool as you’re working a trading strategy.

The next phase of the TCA evolution involves the submission of the order–what might be thought of as standard pre-trade analysis, if there is such a thing as standardization in that world. Given your blotter of orders, whether it’s a list or a single stock, how do you choose the best strategy and work that order to minimize frictional transaction costs? There are many tools one can employ. We market a suite of tools, and so do other brokers. At the heart of this is looking at tradeoffs between the expected costs of an order and the risk associated with the opportunity cost of waiting to execute the order. So this involves deciding on a strategy and the horizon over which you’re going to trade the order.
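The cost-versus-risk tradeoff described above can be sketched in a few lines. This is a toy illustration, not ITG’s models: the functional forms and coefficients below are invented for exposition, assuming only that expected impact rises with participation rate and that timing risk grows with the square root of the horizon.

```python
import math

def expected_impact_bps(order_pct_adv, horizon_days, impact_coef=25.0):
    """Toy impact estimate: a shorter horizon means a higher participation
    rate and a higher expected cost (hypothetical square-root form)."""
    participation = order_pct_adv / horizon_days
    return impact_coef * math.sqrt(participation)

def timing_risk_bps(horizon_days, daily_vol_bps=150.0):
    """Opportunity-cost risk grows with the square root of the horizon."""
    return daily_vol_bps * math.sqrt(horizon_days)

def pick_horizon(order_pct_adv, risk_aversion=0.3,
                 horizons=(0.25, 0.5, 1.0, 2.0, 3.0, 5.0)):
    """Choose the trade horizon (in days) minimizing
    expected impact + risk_aversion * timing risk."""
    return min(horizons,
               key=lambda h: expected_impact_bps(order_pct_adv, h)
               + risk_aversion * timing_risk_bps(h))
```

Trading over a longer horizon lowers the expected impact cost but raises the risk that prices move away; the chosen horizon balances the two for a given risk aversion, which is the tradeoff at the heart of pre-trade analysis.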

TM: And the third phase?
Domowitz: The third phase, which is growing in interest and popularity, is what you do in the middle of the trade. We have a saying at ITG that pre-trade is not something you only do at the beginning of the day. Pre-trade models are models based on historical information. These models use imperfect data in an imperfect world to try to get better results. During the course of an order, what happens is rarely, if ever, what the model predicted precisely. So how do you deal with deviations from your expectations? How do you adjust the way you handle the order during the day? How do you get information from an electronic market that’s sufficient to guide those decisions?

TM: So are people making these decisions about how to change their trading during the day or is this now built into the algorithms?
Domowitz: There will be a natural evolution. Algos began as information vehicles. Once upon a time, there was no algo for the volume-weighted average price. The VWAP was just a number. You traded manually to match that number. As information tools get more sophisticated, they open up the possibility of new algorithmic trading strategies.
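For readers unfamiliar with the benchmark, that number is simply total traded notional divided by total volume. A minimal sketch, with a hypothetical tape:

```python
def vwap(trades):
    """Volume-weighted average price: total notional / total volume.

    `trades` is a list of (price, size) pairs, e.g. a day's prints.
    """
    notional = sum(price * size for price, size in trades)
    volume = sum(size for _, size in trades)
    return notional / volume

# Hypothetical tape: three prints during the day.
day = [(10.00, 100), (10.10, 300), (9.95, 200)]
print(round(vwap(day), 4))  # 10.0333
```

Before VWAP algorithms existed, a trader matching this benchmark had to pace the order by hand against the day’s volume pattern; the algorithms automated that pacing.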

TM: I heard that pre-trade cost analysis went out the window last fall because it became less reliable. Is that correct?
Domowitz: We found people using it quite a bit. We track the usage of tools we give our clients, and we didn’t see a decline in the number of requests for pre-trade analysis on a security-by-security basis.

TM: Did you make changes to your pre-trade cost models, and the inputs that fed them, to try to make the estimates more reliable during last fall’s volatile markets?
Domowitz: We don’t wait for a market crisis to recalibrate our models. We recalibrate on a regular basis. There was much-higher-than-normal volatility last fall. The VIX index reached two historical peaks following the collapse of Lehman Brothers. All models suffered in their ability to make predictions because they weren’t predicting that level of volatility. But how well a model copes with that volatility as an input to forecasting transaction costs is a different story. Any model estimated with historical data that didn’t reflect those types of events, which previously hadn’t been observed, certainly suffered.

TM: So in times of crisis, should traders just rely on models being recalibrated, or should they be more wary of pre-trade estimates?
Domowitz: It’s hard for me to answer that question. Volatility has quintupled, so if you’re looking at an event that’s six standard deviations from the average, the question is, do I trust anything? The answer is no. There’s no model that performed as well as we’d have liked to see it perform after the Lehman collapse, at ITG or anywhere else.

TM: You and three colleagues recently wrote a paper called “Portfolio Optimization and the Cost of Trading” about how portfolio managers can use TCA to help them create portfolios. What is the main advantage of doing this for PMs?
Domowitz: They can get an improvement in realized returns, a reduction in risk, more diversification and lower turnover in an optimized portfolio. Take diversification. The notion is that when you’re optimizing a portfolio, you’re balancing expected return against risk, or alpha against risk. But developing an optimized portfolio based on paper returns doesn’t take into account the cost of creating the portfolio and implementing the investing decision.

If you don’t think about transaction costs in creating that portfolio, you could end up with a portfolio heavily weighted toward a single stock with a large expected return. And it might be extremely expensive to buy that stock. If you take transaction costs into account, you might buy less of that stock, or buy five stocks that, taken together, may give you a better outcome and reduce the amount of that particular name. The overall cost of implementing the portfolio would be lower, and the overall return would be higher.
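The reallocation described above can be sketched with a deliberately toy two-asset optimizer. All numbers are hypothetical, and the simple quadratic risk penalty and linear cost charge are stand-ins for exposition, not the formulation in the paper:

```python
def utility(weights, alphas, risk_aversion, costs=None):
    """Expected alpha minus a simple quadratic risk penalty,
    optionally minus a linear transaction-cost charge."""
    alpha = sum(w * a for w, a in zip(weights, alphas))
    risk = risk_aversion * sum(w * w for w in weights)
    cost = sum(w * c for w, c in zip(weights, costs)) if costs else 0.0
    return alpha - risk - cost

def best_weights(alphas, risk_aversion, costs=None, step=0.05):
    """Grid-search long-only weights summing to one (two assets for brevity)."""
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    return max(((w, 1.0 - w) for w in grid),
               key=lambda ws: utility(ws, alphas, risk_aversion, costs))

# Asset A: higher paper alpha but expensive to trade; asset B: cheaper.
paper = best_weights([0.08, 0.05], risk_aversion=0.05)
aware = best_weights([0.08, 0.05], risk_aversion=0.05, costs=[0.04, 0.005])
```

On paper alpha alone, the optimizer concentrates in the high-alpha name; once implementation is charged for, weight shifts toward the cheaper-to-trade substitute–exactly the reallocation Domowitz describes.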

TM: But a portfolio wouldn’t be diversified with a holding of one stock.
Domowitz: Right. But you may have a very large holding of one stock. That stock could be 5 percent of your portfolio, and it could be very expensive to buy.

TM: Are some firms doing this now?
Domowitz: Yes. The notion that transaction costs should be taken into account dates back to a paper by Gerald Pogue in 1970. We tried to answer the question of why PMs should do this. The type of asset manager that does this is a quantitative manager. They tend to pay attention to the optimization of portfolios–as opposed to simple stock picking–in the first place.

TM: So the idea is that PMs would be making decisions about their portfolios based, at least in part, on pre-trade cost estimates that are never going to be wholly reliable?
Domowitz: If you want to optimize the risk-return tradeoff in a portfolio, what do you do about returns and what do you do about risks? You have to forecast returns and forecast risks. The basic inputs used in constructing a portfolio are forecasts. So you forecast transaction costs in the same spirit that you forecast returns. You’re making a decision today about something that will happen tomorrow.

TM: Do you think this shift is occurring because PMs are interacting more with the trading desk or because portfolio optimization as a discipline is evolving?
Domowitz: In my opinion, it is because of growing interaction between portfolio management and the trading desk. As trading desks on the buyside have started to illustrate their value, as opposed to giving orders to someone to execute, they’ve had more conversations with PMs, who in turn are being educated about what’s happening with their orders during the trading process. Nowadays, even the selection of benchmarks for portfolio evaluation affects what goes on in the portfolio-investment stage.

There are also more conversations taking place around some of the same problems that traders and PMs face. PMs understand that they have to maximize their expected risk-adjusted return. The implementation costs are precisely the costs that trading desks are supposed to minimize. So trading desks are trying to minimize the expected costs of PMs’ investment decisions on a risk-adjusted basis. The ability of PMs to incorporate transaction costs earlier in the investment process comes from communication with trading desks. That’s key, and that’s something new. There’s also more movement on the PM side to recognize the importance of these costs in terms of portfolio construction.

TM: Did you look at actual portfolios and how they fared in your study?
Domowitz: No. These were paper portfolios. We looked at a 130/30 strategy based on the Russell 2000 value index, the same strategy with rebalancing over a two-year period, and three global portfolios by region in 2006, 2007 and 2008. We looked at the portfolios with and without implementation costs being taken into account, and we looked at the last three from two different perspectives, so there were a total of eight portfolios we constructed. The result was that the investment strategy changes when you take implementation costs into account.

TM: Were there any surprises from your study?
Domowitz: It depends on your definition of surprise. The surprise is you get a systematic set of results. Compared to formulations that ignore these trading costs, you increase the net returns, broaden diversification and create more stable portfolios. Academics may have suspected this, or may have shown, sporadically, that there would be changes in portfolio weights if transaction costs were considered. But the combination of these results in our study was powerful.

In the end, transaction-cost reduction is not only the responsibility of the trading desks. The mitigation of those frictional trading costs leads to an improvement in trading returns, and that begins with the inclusion of transaction costs in the stock-selection process. That’s where you start. You don’t start with handing the order to the trader and saying go.