Note: Quandl is now Nasdaq Data Link. Visit data.nasdaq.com for more information.
This is the third part of our interview with a senior quantitative portfolio manager at a large hedge fund. In the first part, she discussed the theoretical phase of creating a quantitative trading strategy. In the second part, she described the transition into “production.” This interview received so many excellent questions that we’ve dedicated an entire post to the answers.
You can read the first part of the interview here and the second part of the interview here. Readers’ questions have been lightly edited for clarity.
How do you monitor and manage your model once live? What additional checks and procedures do you use?
I’m a big believer in manual PL reconciliation as a diagnostic tool. I like to know, every single day, exactly where my PL is coming from. What richened, what cheapened, by how much, and why. This gives me confidence that the model is working as designed, and it serves as an early warning system for bad news.
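To make the idea concrete, here is a minimal sketch of what a daily, per-instrument PL attribution could look like in Python. The instruments, positions, and prices are all hypothetical; this is an illustration of the reconciliation habit, not the interviewee's actual process.

```python
import pandas as pd

# Hypothetical end-of-day positions (units held) and closing prices.
positions = pd.Series({"UST_10Y_fut": 150, "UST_10Y_cash": -148})
prices_today = pd.Series({"UST_10Y_fut": 112.41, "UST_10Y_cash": 112.62})
prices_yesterday = pd.Series({"UST_10Y_fut": 112.30, "UST_10Y_cash": 112.55})

# Per-instrument PL: what richened, what cheapened, and by how much.
price_change = prices_today - prices_yesterday
pnl = positions * price_change

print(pnl.sort_values())        # attribution, instrument by instrument
print("total:", pnl.sum())      # should reconcile to the book's reported PL
```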
Next: on my desk we use a “trading buddy” system where one of the other traders knows everything there is to know about my model and positions. He/she also tracks my trades every day, so that’s an extra, independent pair of eyes.
Finally: I try not to recalibrate my model too often. That’s a slippery slope in the direction of curve-fitting. But I do try to second-guess myself all the time: question my macro assumptions, talk to people with contrarian points of view, and so on.
The combination of watching my own trades like a hawk, and conversing with intelligent, skeptical colleagues and counterparts seems to work pretty well for me. I’m sure there are other ways to do it.
(None of the above, by the way, should be construed as a replacement for an excellent and independent risk management team, or for desk-level monitoring.)
Do you set up predefined monitoring rules or circuit breakers that take the model out of action automatically? If so, how do you construct these, what kinds of measures do you use in them?
I’m kind of old-fashioned — I don’t believe circuit breakers really work. Or to be more precise, portfolios with programmatic circuit breakers underperform portfolios without, over the long term. The reasoning is that circuit breakers stop you out of good trades at a loss way too often, such that those losses outweigh the rare occasions when they keep you out of big trouble.
(If you could somehow calibrate your circuit breaker to stop you out of catastrophes only, you’d effectively have built a perfect “max-deviation indicator.” Like a perpetual motion machine, no such device can exist; that’s the reductio ad absurdum of the argument.)
Note that I’m talking about classic quant arb portfolios here, not electronic execution or HFT. In the latter cases, I can totally see why you’d want multiple fail-safes and circuit breakers — those books can get away from you really fast. But that’s not my area of expertise.
Within my area, I’ve observed a few patterns in models that break. For starters, they rarely blow up instantly. Instead, either the opportunity just gradually disappears (arbitraged away by copycats), or the spread slowly and imperceptibly drifts further and further away from fair value and never comes back (regime change).
Conversely, if a trade diverges and then the divergence accelerates, that smells to me much more of a capitulation. In those cases, I want to hold on to my position and indeed add if I can.
So the paradoxical conclusion is that the faster a model loses money, the more likely it is to be still valid.
EXCEPT if the losses are due to a clear exogenous event — China changing the Renminbi peg, or Iraq invading Kuwait, for example. In that case you’d expect rapid losses and no immediate prospects of a recovery. So you want to stop out.
Good luck programming a coherent circuit breaker to handle that logic!
This is actually a microcosm of the larger problem. A situation where a circuit breaker would help will almost definitely be one perverse enough to avoid most a priori attempts at definition.
How do you determine if the model is dead or just having a bad time? Do you know of any useful predictive regime change filters?
This was the single most commonly asked question. And I’m afraid I’ll have to disappoint everyone: I don’t know the answer. I wish I did!
For me, I use a variety of rules of thumb. Statistical tests to make sure the meta-characteristics of the model remain intact. Anecdotal evidence of capital entering or leaving the market. Other people’s positions and pain. Price action: is it nervous and choppy, or dull and arbed out? And so on.
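As one illustrative example of such a statistical test (not necessarily one the interviewee uses), here is a sketch that compares the recent distribution of a model's daily returns against its long-run history with a two-sample Kolmogorov-Smirnov test. The return series here is simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 1000)  # stand-in for the model's daily returns

# Compare the most recent quarter (~63 trading days) against the long-run
# history: a significant KS statistic suggests the distribution has shifted.
recent, history = returns[-63:], returns[:-63]
stat, p_value = stats.ks_2samp(recent, history)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3f}")
if p_value < 0.05:
    print("Meta-characteristics look different: investigate for regime change.")
```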
I’ve yet to find a reliable, universal, predictive (or even contemporaneous) indicator of regime change/model death. Sad, but true.
Model deaths sometimes seem to last for years, and then the models come back better than ever. Do you keep tracking “dead” models, and will you bring them back after a “revival”?
Absolutely, and this is a great point. Models do come back from the dead. US T-note futures versus cash is a classic example: it cycled between “easy money”, “completely arbitraged out”, and “blowup central” three times in my trading career. Same science in each case; all that changed was the market’s risk appetite. So, I never say goodbye to a model forever; I have a huge back catalogue of ideas whose time may come again.
What do you think of taking a more “gradual” approach to monitoring where, instead of classifying models as either dead or alive (binary), you scale the amount of capital committed to each model depending on aggregate model performance? So you’re never completely out of the market, but at the same time you don’t devote capital to moribund strategies.
Yes, this is an interesting idea. I’ve seen PMs experiment with “Darwinian” risk allocation techniques, putting more money into successful strategies and taking money out of losers. To an extent, every good PM does this, but some are more rigorous than others. And at least one big shop that I know of is completely and unequivocally run this way.
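For illustration, here is a minimal sketch of what a “Darwinian” allocation rule could look like: capital is scaled by trailing Sharpe ratio, with a floor so that no model is ever completely switched off. The floor, lookback, and P&L data are all hypothetical.

```python
import numpy as np

def darwinian_weights(pnl_history: np.ndarray, floor: float = 0.05) -> np.ndarray:
    """Scale capital by trailing Sharpe ratio, with a floor so that no
    model is ever completely switched off.

    pnl_history: (n_days, n_models) array of daily P&L per model.
    """
    mean = pnl_history.mean(axis=0)
    vol = pnl_history.std(axis=0)
    sharpe = np.where(vol > 0, mean / vol, 0.0)
    raw = np.clip(sharpe, floor, None)  # losers keep a token allocation
    return raw / raw.sum()              # normalize to 100% of capital

# Hypothetical: three models over a year of trading, one struggling.
rng = np.random.default_rng(1)
pnl = rng.normal([0.8, 0.3, -0.2], 1.0, size=(250, 3))
print(darwinian_weights(pnl).round(3))
```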
What do you think of money management rules such as optimal betting size?
I’m aware of the literature on this (Kelly criterion and its descendants), and by and large my positions are consistent with those rules. But I use them as a sanity check, not as a primary determinant of positions.
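For reference, here is a minimal sketch of the continuous-time Kelly rule used exactly that way, as a sanity check on sizing rather than a position generator. The return and volatility numbers are made up.

```python
def kelly_fraction(mu: float, sigma: float) -> float:
    """Continuous-time Kelly fraction: f* = mu / sigma**2.

    mu: expected excess return per period; sigma: volatility per period.
    """
    return mu / sigma**2

# Hypothetical strategy: 5% expected annual excess return at 10% volatility.
f_star = kelly_fraction(0.05, 0.10)
print(f"Full Kelly: {f_star:.1f}x capital; half-Kelly: {f_star / 2:.1f}x")
# Practitioners typically size well below full Kelly, which is why it works
# better as a sanity check than as a primary determinant of position size.
```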
Are either quantitative or technical strategies giving you consistent “comfortable” returns? Do you rely on one system or do you keep changing systems arbitrarily?
You have to keep evolving with the markets. No single system or strategy works forever.
I observe that you have already used Matlab, Python and Excel (and presumably use C#/C++/Java) for production. Isn’t the process of shifting between different languages (like Matlab, Python, C#/C++/Java) cumbersome?
It’s not that cumbersome. I typically find that the most tedious part is making sure the data flows consistently and smoothly between different apps or languages. Syntax translation is easy; data translation, not so much.
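As a small illustration of that data-translation pain (my example, not the interviewee's), consider pinning down dates and missing values explicitly before handing a table from Python off to another app, since every language encodes these differently:

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-31", "2023-02-28"]),
    "spread_bp": [4.25, None],  # missing value: a classic source of handoff bugs
})

# Make the ambiguous parts explicit before the data crosses a language boundary.
df["date"] = df["date"].dt.strftime("%Y-%m-%d")  # ISO dates, no locale surprises
df.to_csv("handoff.csv", index=False, float_format="%.6f", na_rep="NaN")
```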
What can you do in Matlab that you cannot do in Python or vice versa?
These days, you’re right, there’s not much you cannot do in Python. And indeed, I find myself using Python more and more. But that was not always the case; the plethora of open-source financial libraries in Python is a relatively recent phenomenon.
Regarding Excel, don’t you find that even though visualization is useful, it carries a lot of operational risk (formulas not being dragged correctly, sheet not refreshed properly, etc.)?
I totally agree. Excel is fragile in many ways. It’s easy to make operational mistakes, it’s impossible to audit, it’s not very performant, it hangs at the most inconvenient times. So, you have to be very careful in how and where you use Excel. That said, I do find the benefits outweigh the many costs.
What kind of turnaround time do you expect from engineering colleagues coding up your strategy in C or Python? Both for the first cut implementation, and then fixes and enhancements?
Depends on the strategy. I’d say the median is 4-5 weeks for the first cut, and maybe another 2-3 weeks for fixes and tweaks. Some strategies are simpler and can be brought live in a matter of days. On the other hand, I remember one particular strategy that took several months to instantiate. It turned out to be super profitable so in that case it was worth it, but in general I’d want to move a lot faster than that.
Make no mistake: once you’ve found a new source of alpha, the clock is ticking. You’re in a race to extract as much PL as possible before the opportunity fades away.
I found this comment interesting: “For instance, I calibrate on monthly data but test on daily data.” I guess it depends on what you mean by “calibration,” but this struck me as slightly unusual.
Let’s make it simple and suppose I’m trying to capture (slow) trends using a moving average crossover. I play around with monthly data until I get something I think works. To move to daily data, I ought to multiply some parameters by ~20 (like the moving average lengths) because there are about 20 business days in a calendar month, and others by ~sqrt(20) [various scaling parameters too dull to discuss here]. But the model should still behave in the same way. The turnover, for example, shouldn’t increase when I move to daily.
On the other hand, if I keep the parameters the same, then instead of picking up, say, a six-month trend, I’m picking up a six-business-day trend. But the sweet spot for trend-following in most assets tends to be a fair bit slower than that, so it’s unlikely to look as good. Also, my turnover will be a lot higher, but then you’d expect that. To put it another way, I’m not sure all aspects of market behavior are “fractal” such that I can just apply exactly the same model at different time scales.
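Here is a minimal sketch of the rescaling described above, assuming a simple two-moving-average crossover. The monthly lengths (3 and 12) and the price series are hypothetical.

```python
import numpy as np
import pandas as pd

def crossover_signal(prices: pd.Series, fast: int, slow: int) -> pd.Series:
    """+1 when the fast moving average is above the slow one, else -1."""
    return np.sign(prices.rolling(fast).mean() - prices.rolling(slow).mean())

DAYS_PER_MONTH = 20  # roughly 20 business days per calendar month

# Calibrated on monthly data...
fast_m, slow_m = 3, 12
# ...so on daily data the lengths scale by ~20 to target the same trend.
fast_d, slow_d = fast_m * DAYS_PER_MONTH, slow_m * DAYS_PER_MONTH

# Hypothetical daily price series, just for illustration.
rng = np.random.default_rng(2)
daily_prices = pd.Series(100 + rng.normal(0, 1, 2000).cumsum())
signal = crossover_signal(daily_prices, fast_d, slow_d)

# Behavioral check: turnover should match the monthly model, not explode.
flips = (signal.diff().abs() > 0).sum()
print(f"signal flips over {len(daily_prices)} days: {flips}")
```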
Are markets fractal? Great question and one I’ve spent many evenings debating.
Personally, I think they’re not, because certain exogenous events act as a forcing function: daily margin calls from exchanges, monthly MTMs for hedge funds, quarterly financial statements for publicly traded banks. These events cause something to happen (never mind what) at those frequencies. So not all time-scales are created equal, and merely speeding up/slowing down the clock is not a “neutral” approach.
So, I’m actually very cautious about which strategies I’d do this kind of time-shifting with.
Here’s a toy strategy where time-shifting can work. Take two futures strips in the same space — maybe winter and spring wheat. Look for cases where one strip is backwardated and the other is in contango. In the contango strip, buy the low front and sell the high back; in the backwardated strip, sell the high front and buy the low back. A simple, almost “dumb” strategy, but for many futures pairs it used to work well.
This is a great case for changing time scales. This strategy should work whether you sample/rebalance weekly, or monthly, or quarterly — because the decision variables are pure state, no path. We’re not looking at price histories; nor are we looking at instruments with a time component (bonds which accrue, or options which decay, or random walks with a drift). So, given that the strategy is really clean, we can get away with this kind of robustness test.
(Caveat: bid-ask is the one complicating factor here — your chosen time-scale needs to be big enough to allow for price action that overcomes friction. Bid-ask is the bane of quants everywhere.)
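A minimal sketch of that state-based signal, with made-up prices. Note that the decision uses only current prices (pure state, no path), which is what makes the time-scale robustness test legitimate here.

```python
def curve_state(front: float, back: float) -> str:
    """Classify a futures strip by its front/back relationship."""
    return "contango" if back > front else "backwardation"

def pair_positions(strip_a: tuple, strip_b: tuple):
    """Pure-state signal: trade only when one strip is in contango and the
    other is backwardated. Each strip is (front_price, back_price).
    Returns {leg: +1 buy / -1 sell}, or None if there is no divergence."""
    state_a, state_b = curve_state(*strip_a), curve_state(*strip_b)
    if state_a == state_b:
        return None  # both strips in the same state: no trade
    trades = {}
    for name, state in [("a", state_a), ("b", state_b)]:
        if state == "contango":  # front is the cheap leg, back the rich one
            trades[name + "_front"], trades[name + "_back"] = +1, -1
        else:                    # backwardation: front rich, back cheap
            trades[name + "_front"], trades[name + "_back"] = -1, +1
    return trades

# Hypothetical winter/spring wheat strips, quoted as (front, back):
print(pair_positions((580.0, 592.0), (601.0, 588.0)))
# {'a_front': 1, 'a_back': -1, 'b_front': -1, 'b_back': 1}
```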
But I would never apply this same test to, say, a trend-following strategy. That would raise all sorts of philosophical questions. What does it mean for a strategy to have a “sweet spot” at say nine days, or 200 days, or whenever? By optimizing for that sweet spot, are you curve-fitting? Or does the fact that almost everyone uses 9d and 200d create a self-fulfilling prophecy, and so those numbers represent something structural about the market? I’ve heard convincing arguments both ways. What if you sampled your data at interval X, and then did 9X and 200X moving averages — would that work? Fun philosophical questions. I’m not sure of the answers myself.
Other notes: I agree that “calibration” was a sloppy choice of word by me in that particular sentence. “Ideation” would have been better. If you’re calibrating, you’re already introducing more structure than time-shifting can safely handle.
Could you give more details on the use of Monte Carlo in parameters initialization?
For most optimizations, I need a vector of initial guesses for the parameters – the “starting point” for my n-dimensional gradient descent. The problem is, non-linear systems tend to have local minima that are easy to get sucked into. You can use random jumps (“simulated annealing”) to escape these, but I find a more robust method is to re-run the optimization many times with different starting points. I use Monte Carlo sampling to generate these starting points: basically, pick random values for each parameter (consistent with that parameter’s distribution characteristics).
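A minimal sketch of this multi-start approach, using a toy objective with many local minima in place of a real calibration loss; the uniform parameter ranges stand in for each parameter's assumed distribution.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Stand-in for a calibration loss with many local minima."""
    x, y = params
    return np.sin(3 * x) * np.cos(3 * y) + 0.1 * (x**2 + y**2)

rng = np.random.default_rng(3)
best = None
for _ in range(50):
    # Monte Carlo starting point, drawn from each parameter's assumed range.
    x0 = rng.uniform(-3, 3, size=2)
    result = minimize(objective, x0, method="Nelder-Mead")
    if best is None or result.fun < best.fun:
        best = result  # keep the lowest loss across all restarts

print("best parameters:", best.x.round(3), "loss:", round(best.fun, 4))
```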
How do you scale your trading strategy? How much gain per transaction would be considered a good model? And on what time scale is it trading? What range of time scales are used in your industry? How much money can be poured into a successful scheme? Is this limited by how much money your fund has available, or are there typically limits on the trading scheme itself?
I have a few rules that I try to follow.
For a market micro-structure trade, where there’s very little risk of a catastrophic blow up but the upside is similarly limited, I’d like to make 10x the bid-ask over a horizon of less than a month. If bid-ask is 1bp, I want to make 10bps with a high probability of success (after financing costs). The binding constraint on these trades is usually balance sheet: I need to make sure that the trade pays a decent return on capital locked up.
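A worked version of that arithmetic, with hypothetical financing, notional, and capital numbers:

```python
bid_ask_bp = 1.0      # round-trip friction on the trade
target_multiple = 10  # want ~10x the bid-ask...
financing_bp = 2.0    # ...net of financing over the holding period

gross_target_bp = target_multiple * bid_ask_bp + financing_bp
print(f"need {gross_target_bp:.0f}bp gross to net {target_multiple * bid_ask_bp:.0f}bp")

# Balance-sheet check: return on the capital the trade locks up.
notional, capital_locked = 100e6, 5e6  # hypothetical numbers
pnl = notional * target_multiple * bid_ask_bp * 1e-4
months_held = 1
annualized_roc = (pnl / capital_locked) * (12 / months_held)
print(f"annualized return on capital: {annualized_roc:.0%}")
```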
For a more macro/thematic trade, I choose my size based on max loss and PL targets for the year. Bid-ask and balance sheet are less relevant here; it’s all about how much can I afford to lose, and how long can I hold on to the position. Obviously I use very fat tails in my prognosis.
Incidentally, optimal scale changes over time. I know some of the LTCM folks, and they used to make full points of arbitrage profit on Treasuries over a span of weeks. A decade later, that same trade would make mere ticks: a 30-fold compression in opportunity. You have to be aware of and adapt to structural changes in the market, as knowledge diffuses.
Regarding time horizons: I personally am comfortable on time scales from a few weeks to a few months. The two best trades of my career were held for two years each. (They blew up, I scaled in aggressively, then rode convergence all the way back to fair value). My partner on the trading desk trades the same instruments and strategies as I do, but holds them for a few hours to a few days at most. So, it’s all a matter of personal style and risk preference.
Regarding capital deployment: I work for a large-ish fund, and the constraint has almost always been the market itself. (Even when the market is as large and liquid as, say, U.S. Treasuries.) There’s only so much you can repo, only so many bids you can hit, before you start moving the market aggressively against you.
I was wondering how to interpret “My partner on the trading desk trades the same instruments and strategies as I do, but holds them for a few hours to a few days at most.” Is it fair to say that you are running quant strategies but that the execution/positioning/rebalancing are done on a discretionary basis? Or do you mean that he is calibrating his models such that they take trades in tighter neighborhoods around an equilibrium value but also have tighter stop outs?
A bit of both. My execution/positioning/rebalancing is largely discretionary — albeit informed by lots of research and calibration and thinking about limits. His execution is more mechanistic: he has programmed a set of rules and just follows them.
Also, we use similar “flavors” of models but they’re not exactly the same. He does smaller trades for quicker opportunities with tighter stops. And he’s willing to recalibrate “fair value” much more rapidly than me. In some specific areas, he’s almost a market-maker (that’s how he defrays bid-ask). I’m not; I’m definitely a price taker.
Do you have advice for someone who just started as a quant at a systematic hedge fund? How do I become really good at this? What differentiates the ones who succeed from those who do not?
In a nutshell: intellectual discipline. By which I mean a combination of procedural rigor, lack of self-deception, and humility in the face of data.
Quants tend to get enamored of their models and stick to them at all costs. The intellectual satisfaction of a beautiful model or technology is seductive. It’s even worse if the model is successful: in addition to emotional attachment, you have to contend with hubris. Then one day it all comes crashing down around you.
The reason I’ve been successful in this industry over decades is that I have a keen sense of my own ignorance, and I’m not afraid of appearing a fool. I ask dumb questions, I question everything, I constantly re-examine my own assumptions. This helps me re-invent myself as the market changes.