A.I. & Machine Learning: Investing in Tech on Wall Street (w/ Hari Krishnan & Vasant Dhar)


HARI KRISHNAN: My name is Hari Krishnan. I’m a fund manager at Doherty Advisors, and
I’ve been on this show before, talking about ETFs and dangers in the VIX and VIX-related markets. It’s a pleasure to introduce Vasant Dhar,
who is a founder of SCT Capital, one of the first machine learning hedge funds in existence,
a professor at the Stern School of Business at NYU, and a director of the PhD program
in the Center for Data Science, also at NYU. It would be interesting for me at least to
know how you got started in machine learning, and how you got started in finance, two apparently
disparate areas, at least back in the day. VASANT DHAR: Yeah. Well, great to be here, Hari. Great to be having this conversation. Strangely enough, I got into machine learning
because of Nielsen, the media company. They have a household division and they were
tracking lots of households and what they were purchasing and they gave me this data
and said, “Can you find some interesting patterns in it?” This was around 1990. I said, “To what end?” And they said to me, “We’d like to know how new products do, which products do well, what their selling patterns are.” I went off for a few weeks, and we had a
meeting and they said, “Vasant, what’d you find?” I said, “I found lots of things, but I can’t
explain them. Such as a lot of older women in the northeast
shop on Thursdays.” They said, “Oh, yeah, that’s coupon day. What else did you find?” I was really excited because I hadn’t actually
told the machine to look for any such thing but there were reasons behind the patterns
that were found. That just led to more interesting stuff in
the data. I became a believer in these machine learning methods, because they seemed to be finding interesting stuff. Then, fast forward three years, I was introduced
to a gentleman called Kevin Parker, who’d been appointed by John Mack, who was running Morgan Stanley at the time, to run tech. Kevin was a big believer in technology,
and he brought me in to implement machine learning at Morgan Stanley. I think I brought machine learning to Wall
Street in the mid-90s. The two problems we were looking at were customer
data, and then financial market prediction. I did both of those things and gravitated
towards the market prediction side of things. If you know anything about proprietary trading
groups, they want to know everything you know but they don’t want to tell you anything they
know. I proposed a simple experiment to them. I said, “Just give me all the trades you’ve
done in the last few years and I’ll tell you if you could have done better,” and they said,
“You don’t need to know anything about the strategy?” I was like, “Nope.” I took the trades. I did some hocus pocus, let’s call it, but
I literally augmented the trades with market state information and I cranked this generic rule-learning algorithm I’d been working on. At that time, the tools were few and far between and you had to build your own. It came up with these patterns. I remember I went to the first trading meeting. It was déjà vu all over again,
Kevin saying, “Vasant, what’d you find?” I said, “I found a bunch of things, but I’m
not sure what they mean.” “It’s all right, take it from the top.” I said, “Well, when the 30-day volatility
is in the last quartile, your trades are three times as profitable as they are otherwise.” There was silence around the room for a little
while and then they tried to digest the implications of it and started talking to each other. I was just watching this and I said, “Can
someone tell me what’s going on?” They said, “Not really, but we’ve felt that
we lose a lot of money when volatility spikes. It’s interesting that you’re telling us that
volatility actually matters.” Now, the interesting thing about that incident was that I only learned the reasons for the pattern I’d discovered much later. That was the first lesson, which is that when
you have this data driven approach to life, you find that patterns often emerge before
the reasons for them, because– HARI KRISHNAN: Let me ask a quick question there. Which is: let’s say I’ve been trading
for 20 years, am I better off bringing in a machine learning expert who’s never traded,
who might find some unbiased, as you might put them, structure in the data than bringing
somebody in who actually knows a lot about finance, and might have certain prior expectations
that might be valid, might push them in a certain direction? VASANT DHAR: Great question. Now, remember, in the space where I got started, I didn’t actually come up with the strategy. The strategy already existed, which is what
you’re saying, you’ve been trading for 20 years, you’ve got a strategy and you bring
someone in. Now, when you bring someone in, they’re going
to analyze your strategy, but they’re not going to actually develop it. They’ll analyze it and they’ll tell you if
you could have improved it. That was what I really did. It was an easier problem to start with than, let’s say, building my own strategy, which was the next step. What they said was, “Hey, that’s interesting. Do you think you can get the machine to discover
new strategies?” I said, “Sure, in principle, that should be
possible.” That’s what led me down this path to where
I am since that time. It was initially an improvement, an overlay on an existing strategy. I didn’t need to know anything about trading, but I needed to know machine learning to do that, and I needed to apply the scientific method correctly, but I didn’t have to do anything creative. The machine did the heavy lifting for me after
I told it what I thought might actually distinguish good trades from bad trades, it could then
find that for me, but I did very little. I just told it, well, consider volatility, consider trend, consider stochastics, the usual things, and it found that for me, but
it’s a completely different ballgame when you don’t have that and you have to start
from scratch and get the machine to actually discover these strategies for you. That area is much more treacherous and you
have to be really careful in how you do that. HARI KRISHNAN: Got it. I know obviously, there’s some famous unnamable
hedge funds that do focus on hiring people who don’t have experience. I presume, as you said, that they’re simply
trying to improve existing processes, models and trading systems instead of trying to build
something from the ground up. VASANT DHAR: Correct. HARI KRISHNAN: Well, that’s a very important
point. Now, I occasionally dip into the internet and Google this, that, and the other when I have some time to kill, and I see that everyone wants to hire a machine learning PhD, a graduate expert, and so on. If I were sitting on the other side of the
table, which I am occasionally, I would be scratching my head saying, “How do I know
what’s real and what’s fake?” Fake is a strong word, because there’s always
some level of confidence, some probability. What’s your vibe? What’s your take on this whole question? VASANT DHAR: Yeah, that’s the central question
in machine learning: when should you trust what it’s telling you? What’s often overlooked about machine learning
is that as a problem gets harder to predict, or as it gets noisier– I look at the world in terms of a predictability spectrum, from completely random to completely predictable, so all problems lie on it. As you move towards the randomness end of
the spectrum, your models can become very unstable. What that really means is that if you change
your training set, the data on which you’re going to build the model, slightly, the model generated by the machine can actually change quite dramatically. If that happens, you really shouldn’t trust
the machine, you should not trust that model. If you get like a high variance in what’s
generated, and we’ll come back to what variance really means, but if you get this high variance
in what the machine is generating, you shouldn’t trust it. That’s the core question that I focus on: when should you trust the machine? When should you trust the outputs of a machine? My very simple answer is: when there’s stability. When there’s stability in the outputs, you can say, “All right, I think I can trust this”– but that’s a necessary condition, not a sufficient one. You need stability to have some confidence that you’re not going to get a completely different set of trading decisions if you changed your training data slightly. If you did, that should give you cause for discomfort.
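To make that stability test concrete, here is a minimal sketch in Python, with scikit-learn, synthetic data, and a 5% perturbation chosen purely as illustrative assumptions (this is not Dhar’s actual setup): retrain the same model on slightly perturbed training sets and measure how often the resulting trading decisions agree on held-out market states.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative synthetic data: rows are market states, labels are good/bad trades.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)

X_train, y_train = X[:1500], y[:1500]
X_test = X[1500:]  # held-out states on which we compare decisions

decisions = []
for seed in range(10):
    # Perturb the training set: drop a random 5% of the rows and refit.
    keep = rng.random(len(X_train)) > 0.05
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_train[keep], y_train[keep])
    decisions.append(model.predict(X_test))

decisions = np.array(decisions)
# Fraction of held-out states on which every retrained model agrees.
agreement = (decisions == decisions[0]).all(axis=0).mean()
print(f"decision agreement across perturbed retrains: {agreement:.2%}")
```

If that agreement number is low, the model is exactly the unstable kind he says you should not trust.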
HARI KRISHNAN: Got it. One more primitive way to think about it is to say, well, stability must be related to some information criterion. I don’t want to get too fancy, but if I have a really simple model, and it works, maybe it’s automatically more stable. What do you make of that? VASANT DHAR: Yeah, exactly. You’ve gotten to the heart of it, which is,
as I said, it’s necessary but not sufficient. The simplest model could be: you always bet the average. That’s a simple model. It has zero variance. You will always do the same thing. But it has heavy bias, and it probably isn’t very useful. You’re trying to tease apart the structure
in the space into, like, good longs, good shorts. That means that you’re introducing some level of complexity now over and above that simple bet-the-average model. You’re now introducing a little bit of complexity
for more predictability and you might now sacrifice some degree of explainability for
that increased predictability that you get from the complexity.
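A small illustration of that bias-variance trade-off, again a sketch on synthetic data (the models and parameters are assumptions for the example, not a recommendation): the bet-the-average baseline has essentially zero variance but heavy bias, while a more complex model buys some predictability at the cost of variance and explainability.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Noisy synthetic returns: a little structure buried in a lot of noise.
X = rng.normal(size=(1000, 4))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=1.0, size=1000)

# Zero-variance baseline: always predict the training mean.
baseline = DummyRegressor(strategy="mean")
# More complex model: more predictability, less explainability.
boosted = GradientBoostingRegressor(n_estimators=200, max_depth=3)

for name, model in [("bet the mean", baseline), ("boosted trees", boosted)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}, std across folds = {scores.std():.3f}")
```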
HARI KRISHNAN: Is that trade-off the art of this business or is it something that you can quantify in some way? VASANT DHAR: You always want to quantify something
like that. How successfully you can do it is of course questionable, but it is something one should be able to measure, or at least measure parts of. Complexity, for sure: you can specify how complex you want a model to be, the complexity that you want the machine to be able to work with. You can specify that; depending on the form of your model, the parameters would vary. You should also be able to analyze the
variance of the model. Since I’ve already mentioned variance, let
me just sketch it out. Variance has two types. The first is the variance in performance of the model: if you change the input set slightly, how widely does your output performance vary? That is, the variance of the
performance, how high is that? The other part is your decisions, how do your
decisions change as a function of small variations in your training set? Because if your decisions change a lot, that’s
also indicative of instability, even though your performance may not change. HARI KRISHNAN: Got it. VASANT DHAR: Those two elements are what I look at as variance: the variance in performance and the variance in decisions. HARI KRISHNAN: I’m going to jump around a
little bit and ask another question, which is if I were a viewer of this show, and I
wasn’t an expert in machine learning and somebody sent me a big bank sell side report showing
the performance of a machine learning algo in a given market, let’s say currencies or
rates, what can I actually do with that? Is that totally useless? VASANT DHAR: Well, the question is, is it
real or is it simulated? HARI KRISHNAN: If it only gives me the results. VASANT DHAR: But is it real? Is it actual trading performance, or is it
this is what I would have achieved? HARI KRISHNAN: This is what I would have achieved. VASANT DHAR: Well, that’s very difficult to
trust just by looking at it because you really have to peel it apart and understand what
was the methodology? How many times did you look at the data? Did you follow a process? I think this is the first time I’m talking about process; I’ll get to that and explain what I mean by it. But did you follow a standard process in how you generated this set of outputs,
as opposed to, well, let’s try this. Oh, that doesn’t work too well, let’s try
something else and oh, now it looks great. There’s a famous saying: I never saw a back test I didn’t like. How often have you seen a really poor back
test being marketed? You don’t. HARI KRISHNAN: I used to go to a series of
talks where every talk– I’m actually cribbing off somebody else here– ended with a graph that started at the lower left corner of the slide and wound up at the upper
right corner. VASANT DHAR: The short answer to that question
is I wouldn’t trust a back test unless [indiscernible] my own and I know exactly what the assumptions
were that went into it and the process that was followed, and one of my goals in life
has actually been to get to the point where my back tests and reality are indistinguishable. My back tests don’t look particularly impressive, but I trust that this is what I’ll achieve in reality. By not too impressive, I mean that a back
test is a point estimate. It says your expected information ratio is 0.6. And while we’re talking about it, I want to say something about this, which is that I think anyone who’s built strategies for some time knows that their realized performance sometimes bears very little resemblance to their back test in the short term, and sometimes
even in the long term. Your objective should be that they should
mirror each other in the long term. In the short term, there’s a lot of noise, things are really unpredictable, but in the long term, your back tests and reality should really mirror each other. That will be an indication that you’ve got a robust process, and that you have the right level of complexity in your model, if you can get to that point, but that’s–
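On the “point estimate” remark: a back test’s information ratio is one number computed from one realized path, and a simple bootstrap shows how wide the uncertainty around that number can be. A minimal sketch, with the daily return series simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily active returns with a modest edge (illustrative only).
daily = rng.normal(loc=0.0003, scale=0.01, size=252 * 5)  # five years

def information_ratio(r):
    # Annualized mean over annualized volatility of active returns.
    return (r.mean() * 252) / (r.std() * np.sqrt(252))

point_estimate = information_ratio(daily)

# Bootstrap: resample the daily returns to see how unstable the IR is.
boot = np.array([
    information_ratio(rng.choice(daily, size=len(daily), replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"IR point estimate: {point_estimate:.2f}, "
      f"95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
```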
HARI KRISHNAN: Okay. That’s a good point. Now, debunk this idea. I knew a guy once who had a model that traded various equity sectors. It wasn’t a machine learning model, mind you, but it did various things that would now be commonly known as factor modeling, this, that, and the other. Every now and again, he’d have a period of underperformance (he traded 10 sectors, whatever). He would gleefully call people up and say,
oh, I’ve improved my system by throwing out the sector that didn’t work when the model
didn’t work. That seems a bit overly greedy in the language
of algorithms. What do you find are the dynamics of the algorithms
or the systems that you look at? Do you just throw out a model that doesn’t
work well for a while or do you believe that there is some regime dependence that may make
it valid at some point in the future? VASANT DHAR: This question used to drive me
crazy before I got to the point where I developed a process for generating my strategies. Because I started using machine learning in
the ’90s, but for the first 10 years or so, maybe 12 years, the models I created
were human curated. I’d look at the output of the machine learning
algorithm. Then I would say, “Well, let me reduce the
complexity here. Let me round it off here, let me do this, make all these changes,” and it would be a human-curated model. I found that over time, there was generally
a degradation in performance. It didn’t go smoothly down; it would go like that. Every time it went like that, I’d think it’s working again, and then it would go like that. I just realized, and this is common knowledge
in the theory of finance that humans just tend to be exceptionally poor at making decisions
about timing and direction and I was no better than that. It used to it was really a vexing problem
where every month I’d have to make a decision as to which models to field and which ones
to put away and how much risk to give them? Even though the optimization, it doesn’t really
make it easy because I had to make these decisions. HARI KRISHNAN: That branches in two directions. VASANT DHAR: It does, and my initial direction
was, well, let’s just use a mean variance optimization approach and the models that
have been performing poorly will automatically get dropped out because the expectation is low or negative. That has its own set of issues, which is that
your risk exposure could then swing around like crazy because all your equity models
are doing well and your fixed income aren’t. That has its own set of problems so that was
not the solution. What I finally converged on as a solution
was not to use optimization, but to actually develop a process. That is, at regular intervals, you keep accumulating data, and as you accumulate more data, you learn from that additional data and you retrain. To me, that was a much more graceful way of
dealing with alpha decay because you’re saying, “Well, I’m going to follow a process and I
realize that the world changes, and I’m just going to keep incorporating more and more
of the world, the data into my process. As long as I trust my process, I should trust
the outputs that emerge from that process.” That was my way of dealing with this issue
of alpha decay and the what-do-I-do question. What it made me realize is that you need two things: you need ideas, but you also need a process. It’s those two things together that make up a strategy. Without a process, you are groping and probing and exploring, but you don’t have a firm criterion for what you should believe, which is what a process gives you.
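A minimal sketch of that kind of retraining process, an expanding-window walk-forward in which the model is refit at fixed intervals as data accumulates (the model, the synthetic series, and the interval lengths are illustrative assumptions, not the actual AQT process):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Illustrative feature matrix and up/down labels over time.
X = rng.normal(size=(2500, 6))
y = (X[:, 0] + rng.normal(scale=2.0, size=2500) > 0).astype(int)

initial_history, retrain_every = 500, 250
hits = []

# Expanding window: at each interval, refit on all data seen so far,
# then trade the next block strictly out-of-sample.
for t in range(initial_history, len(X) - retrain_every, retrain_every):
    model = LogisticRegression().fit(X[:t], y[:t])
    preds = model.predict(X[t:t + retrain_every])
    hits.extend(preds == y[t:t + retrain_every])

print(f"out-of-sample hit rate across retrains: {np.mean(hits):.2%}")
```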
HARI KRISHNAN: Well, I don’t get worried that much in the markets, but what does worry me particularly is this tendency that we’ve had since 2012, let’s say, of incredibly quiet markets. On the one hand, and I know this sounds naive, you’ve
seen a huge increase in the amount of data, you’ve seen Moore’s Law, or some super linear
growth in computational power, and so on and yet it could be argued that the data set that’s
been collected over these years is not all that rich, because markets have been artificially in zombie land, or whatever it’s called; zombification is the word. If the regime changed, the machines that are
learning gradually as the data comes in, so you can think of things– again to use a fancy
phrase, in a Bayesian mindset where you have an inference engine, it’s learning all this stuff. I met Vasant 10 times; the 11th time, he’s rude to me or I’m rude to him, let’s
say, but it doesn’t make me revise my opinion of him that quickly because I have this huge
aggregation of prior experience. Now, if all of these inference engines or
learning algos are loading on the past three years, five years, even eight years, who’s
to say they won’t be exposed in the next crisis? How do you account for that? How do you account for breakpoints, discontinuities,
and the fact that everyone is probably chasing, at some coarse level, similar trades? VASANT DHAR: You’re getting at a really interesting
question. This is where the human input or the creativity
comes into the picture as part of the modeling process. What you’re really saying is that what we’ve
observed is one realization out of many possible realizations of reality. We’ve just observed this one thing, and the
one thing we’ve observed is markets during a period of declining rates. Now, what you should really be doing also
is looking at markets during periods of rising rates. You go to, let’s say to 1994, and you go to
1997 and you go to 2004, where you’ve actually had periods where the Fed was tightening. You need to exercise some human oversight,
and by the way, this could also be part of your process: you need to exercise some oversight in terms of how you’re going to guide the machine, or how you let the machine
do its thing. You’re right, if you just give it a period
of declining rates, it’ll just tell you, you should have been long this whole time. Being long bonds would have been awesome,
but that doesn’t mean being long bonds going forward will be awesome. It may or may not be; it’s not known. If rates go to zero, sure, it’ll be an awesome trade. If rates remain where they are– and I think
the last four or five years, all the experts have been saying rates would go up, but they
haven’t. They’ve gone up, but they’ve gone back down. They’ve remained pretty much where they were
like several years ago, give or take. It’s hard to point to like– HARI KRISHNAN: This year, they’ve been going down like there’s no tomorrow. VASANT DHAR: Right, but occasionally, they’ve
gone up as well, but you’re right. This has been largely, again, declining rates. This is what you need to account for when you do your modeling: are you just going to ignore those periods of rising rates? That would probably be unwise. You do want periods in your training set, to the extent possible, that give you that balance of potentially long and short trades. HARI KRISHNAN: How do you conceptualize that,
because one could argue, as I put it, that the credit cycle is part of life. It’s part of markets; it can never be erased. Central banks can try to erase it, but that will simply prolong the cycle and make it more exaggerated, and so on. One could also argue that in 1994 and 1997,
structurally, markets were quite different. There was much less impact from computer trading; ETFs hadn’t blossomed, or polluted the landscape, depending on your view. Structurally, the markets were quite
different. How would you discount a period which richens your data set, but which is either old or perhaps not so reflective of the current dynamics? VASANT DHAR: Yeah, I don’t think you can discount
your data set. I think you have to include as many periods
as possible of different types of regimes. Now, the question is what do you really mean
by a regime and you could say, well, a period of declining rates is a regime, a period of
increasing rates is a different regime. You need to incorporate different types of
regimes into your training data. The other question you’re raising is a really
hard one about structural differences because while you’re right that structurally, the
factors impacting the market today are completely different from those in the ’90s or even 10
years ago. For people like us who work with prices, the
implicit assumption you’re really making is I don’t care what the drivers are. They show up as these indicators of fear and
greed or whatever you want to call them, volatility, trend, whatever. It’s those exogenous factors that are making
their presence felt in terms of these things you’re measuring, and you’re saying, I don’t
really– I can’t get into the causes and analyze the structural properties of these different markets, because then I’ve got nothing to work with; then I’ve only got four data points. HARI KRISHNAN: That’s a key point. You’re
saying that the fact that you’re using ML on price data is significantly different in
terms of the richness of your historical data set from someone who might say, well, I use
satellite images or I use shipping data, or I use– how would you address that for someone
who’s thinking about that and isn’t sure what the relative importance [indiscernible]? VASANT DHAR: I’m glad you brought that up. Because one of the things that you need with
machine learning methods is lots of data. If your data frequency– the lower the frequency,
the less you’re able to learn, the less faith you have in something that you come up with
because you’ve only seen so many instances of something. Using monthly data, it’s fine using monthly
data, but there’s a limited amount of action you can get with that because you just don’t
have enough instances. The denser your data set becomes, the better
off you are, which is why you do so much better at shorter trading frequencies, like higher
frequency trading is already machine based. Humans don’t stand a chance there. That’s already been taken over by machines
for good reason. In longer term investing, there’s no scope for machine learning, because there’s not enough data there. That’s Warren Buffett-style investing, with holding periods of years, and you don’t have enough data for that. The real sweet spot for machine learning is in the denser parts of the price space, which is intraday and daily data. When you start going much less frequently than
daily, it just starts becoming problematic. You just don’t have enough data to learn anything
from reliably.
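A quick back-of-envelope on that density point, with round numbers assumed purely for illustration: twenty years of history yields wildly different instance counts per instrument depending on the sampling frequency.

```python
# Rough observation counts over 20 years of market history, per instrument.
years = 20
frequencies = {
    "monthly": 12,
    "weekly": 52,
    "daily": 252,                     # trading days per year
    "hourly (6.5h session)": 252 * 6.5,
    "minutely": 252 * 6.5 * 60,
}
for name, per_year in frequencies.items():
    print(f"{name:>22}: {int(years * per_year):>9,} observations")
```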
HARI KRISHNAN: Well, let’s try an analogy that confuses me, that maybe contradicts this, but maybe it doesn’t, which is: think of credit card transactions. Maybe one out of 10,000 or 20,000 transactions
are fraudulent so the instances where you actually have a fraudulent transaction are
very low, therefore you can get a great prediction in terms of accuracy by just saying every
transaction is valid. You’re dealing with a very thin data set,
even though there’s a lot of time, there’s lots of transactions, you’re dealing with
stuff that doesn’t happen that often within that data set. Is that an issue for you as well? For example, let’s say the markets were going
up in a straight line, bonds were going up in a straight line. Is it hard to develop a bond shorting or an
equity shorting machine learning algo? VASANT DHAR: Not for that reason. The case that you’re bringing up is something that occurs very infrequently, so the base rate, that’s what it’s called, is very low. The base rate is 1% or a tenth of a percent, but that’s the class you’re interested in predicting. You’re not interested in predicting non-fraud;
you’re just interested in predicting fraud. Saying my model is 99% accurate, if I just predict everything is non-fraud, is useless. You’re bringing up an interesting point, which
is that predictive accuracy is not what you’re aiming for when it comes to the minority class,
when it comes to predicting things that are infrequent.
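The accuracy trap he describes is easy to demonstrate. A minimal sketch with a simulated 0.1% fraud base rate (the numbers are assumptions for illustration): the always-predict-valid classifier scores 99.9% accuracy while catching zero fraud, which is why recall on the minority class is the number that matters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated labels: roughly 0.1% of transactions are fraudulent.
n = 1_000_000
y_true = (rng.random(n) < 0.001).astype(int)

# The "always predict valid" model.
y_pred = np.zeros(n, dtype=int)

accuracy = (y_pred == y_true).mean()
recall = y_pred[y_true == 1].mean()  # fraction of actual fraud caught

print(f"accuracy: {accuracy:.4%}")    # ~99.9%, looks great
print(f"fraud recall: {recall:.1%}")  # 0.0%, utterly useless
```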
I didn’t finish answering your previous question, which is– HARI KRISHNAN: Keep rolling on that. VASANT DHAR: Why is it so hard to get good shorts with, let’s say, the S&P or bonds? That’s not because the base rate is low, which
is, let’s say, fraudulent transactions happen 1% or a tenth of a percent of the time, whereas the S&P 500 actually goes down 46% or 47% of the time. You actually do have a fair number of days
where the market goes down, but it’s still hard to make that prediction because the problem
is so noisy that the machine just has this tendency to amplify its long bias. That’s the reason why it’s hard: because the problem is noisy. There’s very little signal. Take the extreme case. The extreme case would be that there’s no signal in the problem. You should always be long; there’s a premium, so your output will be “I’m always long,” even though the actual distribution is quite wide. HARI KRISHNAN: People are behaving very rationally
nowadays by that metric. They have no signal. VASANT DHAR: Correct. If there’s no signal then they’ll just bet
the mean, but as you get more signal into the problem, your predicted distribution begins
to mirror the actual distribution. In the case of complete perfect signal, your
predictions are exactly the same as the actual and your two distributions look alike in that
extreme case, on the complete-predictability side. In the completely random case, your predicted distribution becomes a line, because you always make the same prediction. Think of anything in between as a situation where your predicted distribution is much narrower than the actual, and that’s because of the noise in the problem. The machine is not going to stick its neck out and say tomorrow’s going to be a big up day, because chances are it’ll be wrong, or tomorrow’s going to be a big down day, chances are it’ll be wrong.
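That narrowing of the predicted distribution is visible in even a toy regression. A minimal sketch (synthetic data, with the noise levels assumed for illustration): as noise grows, the spread of the model’s predictions collapses toward the mean while the spread of actual outcomes stays wide.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
x = rng.normal(size=(5000, 1))

for noise in [0.1, 1.0, 10.0]:  # low noise -> high noise
    y = x[:, 0] + rng.normal(scale=noise, size=5000)
    preds = LinearRegression().fit(x, y).predict(x)
    # Ratio of predicted spread to actual spread shrinks as noise rises.
    print(f"noise={noise:>5}: std(preds)/std(actual) = {preds.std() / y.std():.2f}")
```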
HARI KRISHNAN: You’re saying that formulating the problem in a Nassim Taleb sense doesn’t really fly. In other words– VASANT DHAR: No, it doesn’t. HARI KRISHNAN: Try, instead of saying that your minority class is the S&P goes down one day, saying that it goes down by two or
three standard deviations. VASANT DHAR: Now you’re saying the magnitude is important. HARI KRISHNAN: Got it. Those are pretty rare. VASANT DHAR: Those are pretty rare, which
is why you don’t predict them. The machine is not going to predict that tomorrow is going to be a down-3% day, because chances are it will be wrong; it will predict
the mean. HARI KRISHNAN: Okay. Well, I’ll ask a simple minded question there
then, which is that we’ve seen that systematic traders, not machine learning based traders,
but systematic traders had a real problem in August of 2007. There was a quant crisis, the equity long
short guys, now, they had the freedom, the ones who did relatively well, in some cases,
had the freedom to turn one knob which said I’m getting out. My model is not working. I’m not a human intervention trader, but I
can shut the thing off. Is that consistent with your view, or is it–? VASANT DHAR: You always have to be able to
do that. HARI KRISHNAN: You have to have that option. VASANT DHAR: You have to have that option. When Fukushima happened, we stopped trading
the Nikkei and the JGB. It was like we have no idea what’s going to
happen in Japan, just turn those models off. Acts of God, rare events where you know that the risks are elevated and it’s a complete crapshoot, where there’s no way you could have learned anything useful that will apply in that scenario: you may get lucky, you may get unlucky, but chances are you should just turn it off and get out until [indiscernible] returns, which is not easy to determine, but that would be humanly the right thing to do. HARI KRISHNAN: I recall reading somewhere
that George Soros’ back would hurt if he was losing too much money and he didn’t know why, so he’d cut his positions. I don’t know if that’s true or just fantasy on my part. Is that the analog here, where you don’t want
humans to intervene with everything or even much of everything but at some point, you
should have the right to just reduce exposure, cut your losses, analyze everything and then
get back into the system. VASANT DHAR: You should, and I tell my investors as well: we are systematic and we always follow the machine, but there are situations that can happen every few years. These things shouldn’t be happening every few days or every month, because that means you’re interfering too much. We’ve had a few cases: Fukushima
was one, the taper tantrum was another one where it just seemed that this is something
new. You have to make that determination, but it’s
not easy and that is something that we’ll occasionally talk about and say, does this
qualify as something just completely different? Most of the time, you want to be able to say no, but every once in a while, something is different and you should have the right to
just turn it off. HARI KRISHNAN: What constitutes something being different? Now, let’s say that stocks and bonds started
going down together. Is that different enough? VASANT DHAR: No, that’s just something– that
could be just liquidation, just people getting out, and there’s massive liquidation going on. You could say that this seems like unusually
heavy liquidation. I don’t want to take the risk. Now, how do I define that criterion? I don’t know. If I could then it would have been part of
my algorithm. That’s one of those things where you say I
don’t know. I haven’t enumerated all possible states of the world here. As a human being, presumably you have the
intelligence to recognize one of these when it occurs. HARI KRISHNAN: There’s a crude analogy that
machines don’t feel emotion so I would never trade– I might trade badly, but I’d never
trade a million contracts instead of 1,000. Whereas a machine, barring the appropriate filter, which naturally, in all likelihood, would be in place, wouldn’t see a difference. In extremes, humans have some override capability that perhaps machines don’t. Is that what we’re agreeing upon here, or
is that too simplistic? VASANT DHAR: No, I think that’s fair. A machine by itself will only turn itself
off if you actually told it to, like: if the VIX goes above 40, just turn off. You could specify that as a rule and say, to me, that means that the world is weird, the VIX is above 40; you can define those things. There are other things where you say, I don’t
know, I’ll just have to wait and see because the future is always new. It’s never exactly like the past even though
it might rhyme with it, as you once noted, but if it’s sufficiently different, and it’s
just something that’s weird and looks treacherous, then you have to make that decision to turn
it off. Nothing is always 100% systematic.
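The pre-specified guard he mentions is trivial to encode; the hard cases are the ones you cannot enumerate in advance. A minimal sketch of that VIX rule (the threshold and names are illustrative, taken from his example):

```python
VIX_KILL_THRESHOLD = 40.0  # "the world is weird" level, per the example above

def should_trade(vix_level):
    """Pre-specified kill switch: stand down when the VIX is above threshold."""
    return vix_level < VIX_KILL_THRESHOLD

# Hypothetical use inside a trading loop:
for vix in [14.2, 22.8, 43.5]:
    action = "run models" if should_trade(vix) else "flatten and stand down"
    print(f"VIX {vix:>5}: {action}")
```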
HARI KRISHNAN: Now, before I get too depressing about this: you’ve been doing this for many years, well over 10 years running your AQT
system, if someone wanted to do the same thing today, and they had a decent amount of money
bankrolling the thing, but they had no experience, what barrier to entry would they face? What things have you learned that they wouldn’t
know, that they’d have to learn the hard way, that significantly make a system run by somebody like you more robust? VASANT DHAR: It boils down to experience. HARI KRISHNAN: What specifically have you
learned? VASANT DHAR: Making mistakes. It’s important to make mistakes. HARI KRISHNAN: Now, any specific kind, sizing
positions? VASANT DHAR: Where do I start? This could be a really long conversation if
I get into all the mistakes I’ve made, which probably would not be a good thing. Yeah, you learn through your mistakes. In a domain like this, you can make lots of mistakes because it’s treacherous. It’s a noisy problem. There’s a famous Richard Feynman quote: the easiest person to fool is yourself. So you have to be really careful to not fool yourself. There is a tendency for us to want to believe that something works. That can be dangerous; we desperately want to believe that stuff works. Every bone in your body should be telling you to figure out what’s wrong with something that seems to work, because most things should
not work. If you see some stuff that works, you really
ought to question it 10 times as hard because chances are, there’s something wrong there. That’s been a great learning experience for
me to get to the point where you believe something works and you find there’s a problem with
that. The problem can be a methodological one. Maybe there was part of your process that
was flawed, maybe you aren’t taking care of outliers properly, maybe your complexity level is too high, or maybe you’re just not taking some reality of the market into account. It’s a combination of just experience in terms
of market knowledge, and just seeing how markets behave, because unless you feel the pain and
the pleasure from actually trading, it’s all hypothetical. That gets me back to the back testing point earlier. With back testing, yeah, it looks cool, but did you feel the pain when you went through that 40% drawdown? What did it feel like when you were on that slippery slope with two left feet? Completely different from the back test. I think there’s really no substitute for experience. That doesn’t mean you can’t get lucky. You can get lucky and hit paydirt, but
I wouldn’t count on that. HARI KRISHNAN: There’s been an elephant in
the room recently, which is, whether it’s the short term financing markets, the repo
markets or whatever, or anything to do with short term credit facilities in the markets, which seems to have the potential to upset all strategies that rely on leverage. How would you address that issue as someone
who is nearly purely systematic? Is that something that you have to factor
in? Do you simply lower your leverage a bit below where you think it should be? Do you have a dynamic way to account for it
over time? Do you look at spreads? Do you look at repo spreads and things like
that? VASANT DHAR: Not in futures markets where
your leverage is inherent and more than you need. In the futures markets you’re actually trying to damp down the leverage, because you can lever up to the hilt and really get into trouble. In those markets, you’re really levering down to your desired level of volatility, and funding costs tend to rise more in the form of exchanges imposing, let’s say, higher margins, stuff like that. That’s where your costs can increase, but that tends to be relatively rare. It’s the silver market goes berserk and the [indiscernible] limits or increases margins. It’s those– HARI KRISHNAN: The basic argument
is that the repo issues, or whatever occurred recently, impact the cash bond market more than the markets that a CTA-type structure would trade? VASANT DHAR: That’s right. HARI KRISHNAN: Got it. Well, I guess an important question for a
lot of viewers or participants like myself is: has machine learning been oversold? If so, where? How can someone like you step in and help to educate people who want to know more about the space? VASANT DHAR: Yeah, great question. I think it has been oversold. I think some of it is just an effect of
the fact that it works in other areas, it’s making its way into navigation, into language,
into search. Machine learning has become an important part of our lives. People just assume, let’s do it in finance
as well. It’s harder to do in finance, because it’s
a noisier problem. It’s not as easy to apply it to finance as
it is to, let’s say, a domain that has a lot of structure in it. In that sense, I think it’s been oversold. It’s not like you just put in data and magic
appears at the other end. There’s a lot of care that has to go into
how you formulate the problem, how you create the data, whether you have a process in place. HARI KRISHNAN: Wasn’t one of the early validations
of machine learning or AI in general, just the human eye. In other words, you get fuzzy images from
telescopes and so on and then you could run some algo on them, which would see a huge
number of images and then make sense of what this noisy image should look like. Then it would clear it up and the human eye
could say, oh, that’s a real license plate. I can read the number, whereas it was completely
fuzzy before. Doesn’t that raise the issue that outliers,
say if a camera takes a picture, and it’s a bit fuzzy here and there, the outliers should
be smoothed out or removed? Whereas in financial markets, those outliers
are essential because they drive price action over the long– or they drive compounded returns
more to the point over the long term. How do you– do you think that part of it
has been oversold? You think there’s too much smoothing going
on and not enough feature extraction? VASANT DHAR: I don’t know whether I would
state it like that, or at that level of detail. I think that’s just one piece of the problem. HARI KRISHNAN: I’ve extracted too many features. VASANT DHAR: I think it’s an important part
of the problem, this aspect of how you deal with outliers in, let’s say, vision versus finance, and that’s actually a whole conversation in itself that we could have. That’s a very meaty area. I think it’s that and lots of other things,
the lack of structure in the problem that just make finance infinitely more challenging
than, let’s say, applying it to vision where there’s so much structure to the problem,
the stationarity and an unambiguous relationship between inputs and outputs, like however complex
they may be, at the end of the day, people can say, well, that’s a cat and that’s a dog
and that’s a nine unless that nine is so fuzzified that even humans can’t tell, in which case
the machine can’t tell either. Where the training data has an unambiguous mapping between inputs and outputs, that’s a great problem for machine
learning. Whereas in finance, there isn’t that unambiguous
mapping. You can have the same x with two different y’s. By the same x, I mean the same market conditions, as you’ve specified them, but the outcomes are completely different. That’s what noise really means in finance: the same market state, but different outcomes. Whereas in vision, it’s a different problem. If you take something that’s worked in vision
and apply it in finance, it’s not obvious that those ideas really carry over; they may actually get you in trouble. You may be overly optimistic about what you
can achieve. HARI KRISHNAN: Well, it’s been wonderful having Vasant on the air. I was looking forward to this, and I’ll leave it to him to sum up. VASANT DHAR: It’s been a pleasure. I really enjoyed it. Great set of questions, and thank you.
