In this edition of SigView – a Quant’s Perspective, we have the privilege of speaking with hedge fund manager and machine learning expert Dr. Dario Villani. Before founding Duality Group he was responsible for multi-billion dollar portfolios at hedge fund behemoth Tudor Investment Corporation. In addition to running Duality Group, Dario is a visiting professor in the Department of Mathematics at King’s College London.
Tell us about your background as a quant.
I have a Ph.D. in theoretical physics, specializing in high-temperature superconductivity. After getting a master's in finance from Princeton, I started trading commodities and then macro. I was a discretionary trader, but I used a lot of quantitative tools to find relative value opportunities and for portfolio construction. In 2017, my business partner Kharen Musaelian, a theoretical physicist himself, and I decided to start a fully systematic firm, driven by machine learning architectures. We both felt machine learning was mature enough to be effectively used for financial time series and returns forecasting. Further, after a long journey in discretionary trading, we had renewed enthusiasm for a research endeavor, full of challenges as deep and interesting as the ones we dealt with during our physics careers.
Tell us about Duality Group.
Duality Group is a hedge fund which is, at the moment, exclusively dedicated to systematic trading of US equities. We started in 2017 and launched the Fermi Fund in July 2018. We have offices in Miami Beach and New York City. Our culture is built around the brave pursuit of transformative ideas, while preserving the creative brew that makes breakthroughs possible. All of our machine learning software has been built in house, and we have accumulated a lot of IP that would have impact beyond finance, if we were to publish our findings.
What are your core principles when building quant models? What makes them unique?
Markets are very non-stationary, and there is very little signal embedded in a lot of noise. So, I don't have any ambition to finally find some incredible model. It's all about having an architecture that allows the system to adapt over time, and about assembling a lot of mediocre models in a robust way. We like our models to have online updating, as opposed to some form of batch learning.
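The idea of robustly combining many mediocre models with online updating can be sketched in a few lines. The code below is purely illustrative and does not describe Duality's actual system: the learning rate `eta`, the forgetting factor `decay`, and the synthetic data are all hypothetical choices, shown only to contrast an online multiplicative-weights update with batch refitting.

```python
import numpy as np

rng = np.random.default_rng(0)

n_models, T = 50, 500
eta, decay = 0.1, 0.99  # hypothetical learning-rate and forgetting parameters

# Synthetic returns and noisy per-model forecasts (placeholders for real data)
returns = rng.normal(0, 0.01, T)
forecasts = returns[None, :] + rng.normal(0, 0.05, (n_models, T))

weights = np.ones(n_models) / n_models
combined = np.empty(T)

for t in range(T):
    combined[t] = weights @ forecasts[:, t]       # ensemble forecast for step t
    losses = (forecasts[:, t] - returns[t]) ** 2  # per-model squared error
    weights *= np.exp(-eta * losses)              # online multiplicative update
    weights **= decay                             # gentle pull back toward uniform
    weights /= weights.sum()                      # renormalize to a distribution
```

Each new observation nudges the weights immediately, so the ensemble adapts as the regime drifts, rather than waiting for a periodic batch refit; the forgetting step keeps any single model from dominating permanently.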
What are the latest technological advancements you are making use of in your research?
A key element of our research agenda is the tech stack we have built. Without it, the research would be constrained in its scope, its depth and its ability to go into production. The renaissance in machine learning adoption over the last 10 years is intimately connected to advances in technology and the ability to process, in a very effective way, millions of forecasts and all the complexities that come with them.
What is currently at the top of your research agenda?
We have three dimensions along which we can increase the diversity of our models: features, forecaster type, and partitioning, which is how we group stocks together. One set of features, one forecasting architecture, and one partitioning define a model ensemble, with hundreds of thousands of models generated by walking along each “dimension”. How you select features among many, sometimes collinear, candidates, and how that choice evolves over time, are key questions to answer. It also has repercussions on how to do a similar selection of forecasters.
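The three “dimensions” described here can be pictured as a simple Cartesian product: each point on the grid defines one model ensemble. The sketch below is a toy illustration with made-up names for feature sets, forecasters, and partitionings; the real choices along each dimension are of course far richer.

```python
from itertools import product

# Hypothetical choices along each "dimension" (names are illustrative only)
feature_sets = ["price_momentum", "volume_profile", "cross_sectional_value"]
forecasters = ["ridge", "gradient_boosting", "online_perceptron"]
partitionings = ["by_sector", "by_liquidity_bucket", "by_volatility_regime"]

# Each combination of (features, forecaster, partitioning) defines one ensemble
ensembles = [
    {"features": f, "forecaster": m, "partitioning": p}
    for f, m, p in product(feature_sets, forecasters, partitionings)
]

print(len(ensembles))  # → 27, from 3 x 3 x 3 choices per dimension
```

With only three options per dimension the grid already has 27 ensembles; adding options multiplicatively along each axis is how a modest set of building blocks grows into hundreds of thousands of models.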
What is your take on alternative data and its future potential impact on the investment management industry?
Even more traditional data sets can have an alternative way of being transformed. Ultimately, any data goes through a transformation, and different transformations can elicit very different results and interactions. You can listen to an hour-long conversation, or take a Fourier transform of the recording and immediately detect subtle differences of tone. We are at the very early stages of using what people commonly refer to as alternative data. I am confident there is a lot of value there. It will be much more interesting after having done so much work on alternative transformation of more traditional data.
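The Fourier analogy can be made concrete with a toy example: two signals that look almost identical in the time domain, where a small component is immediately visible as a distinct peak after the transform. The frequencies and amplitudes below are arbitrary illustrative choices.

```python
import numpy as np

# 1 second of data sampled at 1000 Hz
t = np.linspace(0, 1, 1000, endpoint=False)

# A base tone, plus the same tone with a faint 50 Hz component added
signal_a = np.sin(2 * np.pi * 5 * t)
signal_b = signal_a + 0.2 * np.sin(2 * np.pi * 50 * t)

# In the frequency domain the faint component stands out as a sharp peak
spectrum_b = np.abs(np.fft.rfft(signal_b))
freqs = np.fft.rfftfreq(1000, d=1 / 1000)

peak_freqs = freqs[spectrum_b > 50]
print(peak_freqs)  # → [ 5. 50.]
```

The same principle carries over to financial data: the information was in the raw series all along, but the right transformation makes it trivially detectable.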
What are the key challenges you see investors facing in their day-to-day operations when developing and maintaining quant models?
It is a very complex endeavor to develop and maintain quant models. It is important to have shared knowledge, so that you are not overly dependent on any one developer or quant. Good documentation is key, but very hard to maintain in a world where resources are limited and the pace required to keep up with ever-evolving markets is so fast.
Where do you see the most promising opportunities to generate alpha in the current market environment?
We have a measure of the opportunity set, or strength of signal, but at the end of the day, markets change all the time. Systematic trading is all about capturing small and persistent anomalies and adapting to new regimes. Once you get that right, it is always a good environment. The rest is just staying ahead of the curve in terms of constantly adding new models along the three “dimensions” I referred to before.
Disclaimer
Sig Technologies Limited (SigTech) is not responsible for, and expressly disclaims all liability for, damages of any kind arising out of use, reference to, or reliance on such information. While the speaker makes every effort to present accurate and reliable information, SigTech does not endorse, approve, or certify such information, nor does it guarantee the accuracy, completeness, efficacy, timeliness, or correct sequencing of such information. All presentations represent the opinions of the speaker and do not represent the position or the opinion of SigTech or its affiliates.