Quant Job Interview Questions (2009) [pdf] (kent.edu)
358 points by dmmalam on Nov 12, 2016 | 145 comments



Quant dev here.

Note there are broadly two kinds of quants.

One is a derivatives quant, basically someone who spends a lot of time working out the values of derivatives. So you're looking at Ito calculus and iterative methods to find the value of contracts with weird clauses (knockouts, cliquets, vanillas, etc.), and then building spreadsheets to calculate the hedges for those trades as time goes by.

The other is strategy quants. Here you're looking for some way to beat the market. So you're reading in a bunch of data from various sources and applying some sort of quite open-ended analysis on it to come up with rules for a trading system. You then build a framework for trading such systems, which can get very involved eg execution algos that read realtime market data.

I started off in derivatives as a trader but quickly moved on to the other. They're both fairly deep, and both require you to know coding and math.

The questions here seem to apply mostly to derivs quants, but strategy quants will find many of them applicable too.


Serious question... do people really enjoy working on this stuff? Do most quants go to work thinking it is fun to work on mathematical models to optimize financial transactions? Or is it just a job that pays really well towards long-term financial independence to do what they really want?


There are some interesting things about this work:

- You get to know something that isn't published. You find a strategy, you don't tell anyone. Sure, you can talk about it in broad strokes (we're trend following...) but it's pretty unlikely you'll ever tell anyone enough for them to be able to replicate it. Except people you trust, of course.

- You're always thinking about philosophy of science. How do I know that this series of steps I'm doing is not just a monkey and a dartboard? Again, there isn't going to be a specific written paper that tells you the answer.

- At one end of it, you are working with cutting edge technology. Anything that shaves a microsecond off the speed is useful, and you can spend a lot of time optimising such things. I've been staring at a C++ solution that is 100% in house. No STL.

- At the other end, you are working with data in very modern ways. All that ML stuff in the news is interesting to quants, because if there's one thing ML papers are about, it's how to avoid overfitting. Just a few years ago people in the field had not come across ML, or at least the point of view that ML brings. Plenty of other things had already come up, though, like information theory, time series econometrics, signal theory, and so on.

- About derivs quants: it's not my cup of tea, but I can see how others like it. You're basically pricing things hoping that you've noticed something the counterparty hasn't, or hoping you've found a cheaper way to hedge than your competitors, or simply hoping your salesman is a better salesman than the other guy's.


This is good stuff, thanks for articulating your experience in the role.


Honestly, this is a great pitch for your line of work. It sounds like you really enjoy it.


Yes it's an awesome job, and not just because of the money. I've worked for small shops that weren't very profitable and didn't really make a lot those years, but still loved the work.

I could see derivatives pricing being boring, not as much room for creativity there. The strategy/trading quant work is like a super open-ended science problem with a constant stream of new challenges as you try to outperform yourself or the competition.

FWIW, I don't think these are great questions for strategy quants. Some of the best people I've worked with (a) used very, very basic ML techniques, (b) were just naturally smart/clever enough to engineer great features, and (c) had incredible attention to detail and didn't "hand wave" away the dirty work like checking/cleaning data or executing the model in practice.


I've heard the working hours are terrible, with 12-14 hour workdays not at all uncommon. Is that true? Is the work pressure very high?


If you're making money everything is relaxed. If your strategy is losing money things will be very tense. That's just how it is when you have skin in the game.


>> 12-14 hour workdays not at all uncommon.

> That's just how it is when you have skin in the game.

MD here, and prior military. That's kind of how it is in any profession where you have skin in the game.

Also, for those interested in physics: my undergrad quantum mechanics professor stopped in the middle of lecture once and announced to all of us, the only section of physics majors (17 in all): "Odds are no more than 1 of you will get a PhD in physics. If you want to keep doing math, most of the jobs are on Wall Street and Vegas." And just turned back to the board and kept writing. Probably something about a Hamiltonian...


What's MD?


I assume doctor.


Can also mean 'managing director' in finance and other fields


> I've heard the working hours are terrible with 12-14 hour workdays not at all uncommon.

Derivatives quants have perfectly normal hours


About the questions not being great: do they match what you could expect in a real-world interview for a strategy quant position?


Everything in sections 1-3 looks like pretty standard entry-level strategy quant questions.


Not a quant, but:

Imagine playing a huge board game with a bunch of very smart (as in, MIT math PhD smart) players that spend every waking hour trying to come up with strategies, and every time you come up with a good strategy the entire metagame of all players quickly adapts to its existence so the game stays interesting (except for some that lose before they can adapt and have to leave the game.)

Factors in a good game strategy might include fast thinking, every historical move a player played in the game, human psychology, poker-type bluffing, reverse engineering algorithms based only on their outputs, correctly predicting the future of industries and companies, headlines on the news, sentiment analysis on social networks, predictions of election results (and stuff like the Brexit referendum), as well as the complicated rules of the game itself, and Sicilian Reasoning (http://webdocs.cs.ualberta.ca/~darse/rsb-results1.html).

Some things you might do as part of a strategy: charter a fast ship to send information to your friends in another market before it becomes public, so they can buy a lot of stuff before its price goes up; buy a donut every week in the same shop and keep track of receipt serial numbers to estimate changes in a franchise's weekly sales before anyone else knows them; use satellite imagery of Walmart parking lots and some computer vision algorithms to get a very good estimate of their sales; borrow $10Bn from your friends and use it all to sell British pounds in a very short period of time, forcing the British government to stop pegging the pound's exchange rate and making $1Bn in the process; install a spy network throughout Europe so you get information on the result of the Battle of Waterloo a day before the King's intelligence service, spread a rumor that Napoleon won, buy everything that got cheaper because of the rumor, then the next day sell it when the truth that he lost arrives and use the money to more or less buy everything of value in Britain.

At each point in time there is a clear tracking of scores so you can know exactly how well you're doing relative to every other player.

And the best thing is, you get paid the best salaries in the market to play this game.

I love being an entrepreneur because of how open the rules of the game are and how many creative moves I can pull off to get advantage without having to convince some judge or committee or boss that this is a good idea. Strategic quants have all this and the additional benefits of immediate feedback and a great salary with low risk.


A good example of real-world application of the donut-receipts idea above is the German Tank Problem[0] from WW2.

[0] https://en.wikipedia.org/wiki/German_tank_problem
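
Not from the thread, just an illustration: the standard estimator for that problem, assuming serial numbers run sequentially from 1 and the sample is unbiased, is N ~ m + m/k - 1 for largest observed serial m and sample size k. The donut-receipt trick above is the same idea applied to order numbers. A minimal sketch:

    def estimate_total(serials):
        # German tank estimator: N_hat = m + m/k - 1,
        # where m is the largest serial observed and k the sample size
        m, k = max(serials), len(serials)
        return m + m / k - 1

    # hypothetical receipt numbers collected over a few visits
    print(estimate_total([112, 180, 250, 301]))  # ~375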


All the ideas I gave were things that happened (or at least, that I read on the internet from reliable sources.)

Chartering fast ships is from 1790, see e.g.: http://dewinforex.com/forex-basics/high-frequency-trading-fi...

Can't find the link for the donut investor, but I'm pretty sure it was somewhere in Matt Levine's (highly recommended) Money Stuff column.

Sat imagery of parking lots appears all the time in Matt Levine's column, there is a company selling this data to hedge funds today.

Selling pounds is of course George Soros, probably the most famous story in Finance.

Wikipedia claims the Rothschild story about the Napoleonic wars is fake. I read it on Quora here: https://www.quora.com/Whats-the-shrewdest-smartest-maneuver-...


I got most of the references, but the donut one just seems very interesting. Please tell me if anyone here manages to find it. I'll be looking too.


I was working for the leading print marketing company that was starting to have serious competitors and post-IPO we were really concerned that people were going to order business cards from us all the time (since they were almost free) to track our daily internal sales via order number. I can't say that we ever had hard evidence that this was happening but we thought we had suspicious activity once so we changed it.

As an aside, changing the order number to a hash value can be a huge pain in a manufacturing environment when people have (mis)used it for a variety of purposes beyond order identification.


If you like math and strategy of course it's fun (and I really don't see how someone who doesn't like these things could work as a quant). You're getting paid to use skills you have to solve puzzles essentially.


I am working on something quant related---basically the part of the industry spawned by "Composing contracts: an adventure in financial engineering" (http://research.microsoft.com/en-us/um/people/simonpj/Papers...).

I enjoy the domain, but also that I can use functional programming languages.


Is Ito calculus really the extent of the math you need to know to become a quant?


When my ex-boss was hired as a quant, he didn't know any stochastic calculus, but he was a full math professor (specialized in algebraic geometry).

So during his interview, they spent half an hour deriving the Ito integral from scratch and then proving some properties of it.


It's probably like any coding interview. You'll get asked a bunch of things that won't be a problem when you're on the job.

Basically, it's a "have you heard of this" type thing.


Oh well, that makes sense. If stochastic calculus is all you need to know, I was half considering becoming a quant there for a second ;)


For pricing, yes basically


I wrote about this a long time ago here:

https://news.ycombinator.com/item?id=8698986#8699260

I think most of what I wrote is still valid.

I believe the below reference:

http://www.decal.org/file/2945

and the book Heard on the Street are also great refreshers for the type of material you'll need.

https://www.amazon.ca/Heard-Street-Quantitative-Questions-In...

To be honest, most of this won't matter; the number one qualification for becoming a quant is a PhD from a top-tier university.

I'm one of the very few quants I've met that doesn't have a PhD; if you don't have one then it's going to be a tough road ahead. You'll often hear people call themselves a quantitative developer if they come from a computer science background.

That might be the most overused phrase on Wall Street :) It's kind of akin to calling yourself an architect, in that one firm's architect is another's junior developer. The term really means nothing and is given out like water, sort of how the engineer title gets tossed around in software dev firms.

I've got a pretty long backlog of emails from Hacker News to get through, but if you have any questions about the job, please feel free to email. Just keep in mind that between year end, the US election and life, a reply might take a while.

EDIT sorry I see that someone used the term quant dev while I was typing. No insult intended. There are some amazing quant dev people on the street.


You stress the importance of having a PhD - how much of that is because it's really needed and how much of it is a filter / credibility signal?


You need to learn the maths, and for that there is nothing close to sitting down in a master's/PhD classroom for years, doing hard exercises for the whole duration, where getting a single one wrong on an exam could get you kicked out of the school.


Is there a reason you linked the 12e of the book? Is the 16e not the latest and greatest? It does seem like they just reprint every 2 years and increase the cost $30....


Here's my favorite interview question (I spent 10 years as a quant, interviewed a bunch of people, and most do not do well on this):

We're going to play a game. You draw a random number uniformly between 0 and 1. If you like it, you can keep it. If you don't, you can have a do-over and re-draw, but then you have to keep that final result.

I do the same. You do not know whether I've re-drawn and I do not know whether you've re-drawn. Decisions are made independently. We compare our numbers and the highest one wins $1.

What strategy do you use?

EDIT: The answer is not 0.5, it is not 0.7 either.
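
A minimal brute-force sketch, assuming both players restrict themselves to simple "redraw below a threshold" strategies (the thresholds listed are arbitrary sample points), lets you sweep strategies head-to-head and see which ones survive:

    import random

    def draw(threshold):
        # redraw once if the first draw is below the threshold
        x = random.random()
        return random.random() if x < threshold else x

    def win_rate(t_a, t_b, n=200000):
        # fraction of rounds in which threshold t_a beats threshold t_b
        return sum(draw(t_a) > draw(t_b) for _ in range(n)) / n

    for t_a in (0.5, 0.55, 0.6, 0.65, 0.7):
        row = [round(win_rate(t_a, t_b), 3) for t_b in (0.5, 0.55, 0.6, 0.65, 0.7)]
        print(t_a, row)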


Oh my god it's the golden ratio! That's so cool!

EDIT: I'll show my work, rot13'd...

Jr pna cnenzrgrevmr n fgengrtl ol n guerfubyq g: gur inyhr gung gur svefg qenj arrqf gb or yrff guna va beqre gb pubbfr gb qenj ntnva. Hfvat guerfubyq g, gur cebonovyvgl bs trggvat yrff guna g vf g^2 orpnhfr lbh unir gb qenj orybj gur guerfubyq gjvpr va n ebj. Gung chgf gur cebonovyvgl bs raqvat hc nobir gur guerfubyq ng 1-g^2.

Gur cebonovyvgl vf havsbez nobir naq orybj gur guerfubyq fb gur cebonovyvgl qvfgevohgvba bs lbhe ahzore vf:

    cqs(k) = { g vs k<g ryfr g+1 }
(Gur g+1 vf sebz fbyivat gb znxr gur vagrteny sebz 0 gb 1 or 1.)

Fnl lbhe bccbarag hfrf fgengrtl h naq lbh hfr fgengrtl g. Gura jr jnag gb znkvzvmr gur cebonovyvgl gung K~cqs(g) vf terngre guna L~cqs(h). Zngurzngvpn jvyy gryy hf gung vs jr nfx yvxr fb:

    q[g_] := CebonovyvglQvfgevohgvba[Vs[k<g, g, g+1], {k,0,1}]

    Fvzcyvsl[
      Cebonovyvgl[k<l, {Qvfgevohgrq[k, q[g]], 
                        Qvfgevohgrq[l, q[h]]}], 
      0<g<1 && 0<gh<1]
Gung tvirf hf gur cebonovyvgl gung g orngf h yvxr fb:

    c[h_,g_] := Cvrprjvfr[{
      {(1+g-(1+g+g^2)*h+(1+g)*h^2)/2,  h>g},
      {(1+g-g^2+(g-g^2-1)*h+g*h^2)/2,  h<g},
      {((g^2-1)^2-g*(g^2-2)*h)/2,      h==g}}]
Abj jr jnag gur g gung znkvzvmrf gung sbe n tvira h, juvpu jr trg ol gnxvat gur qrevingvir jvgu erfcrpg gb g, frggvat vg gb mreb, naq fbyivat sbe g:

    Fvzcyvsl[Fbyir[Q[c[g, h], g] == 0, g], 0<h<1]
Fb abj jr unir gur orfg erfcbafr gb na neovgenel fgengrtl g:

    oe[g_] := Cvrprjvfr[{
      {(1+g+g^2)/(2+2*g), g<(Fdeg[5]-1)/2},
      {(1-g+g^2)/(2*g),   g>(Fdeg[5]-1)/2}}]
Vs lbh cybg gung lbh frr gung gur orfg erfcbafr vf nyjnlf orgjrra .5 naq .618. Vs lbhe bccbarag nyjnlf xrrcf gurve svefg qenj gura lbh fubhyq hfr n guerfubyq bs .5. (Fnzr vs gurl nyjnlf gnxr gur frpbaq qenj bs pbhefr.)

Ohg gurl jba'g qb gung. Vs gurl hfr n guerfubyq bs 1/2 gurzfrys gura oe[1/2] == 7/12. Ohg gurl jba'g qb gung rvgure. Jung jr jnag bs pbhefr vf gur svkrq cbvag, vr, gur Anfu rdhvyvoevhz, jurer oe[g]==g. Gung, boivbhf va gur cybg, vf g = (Fdeg[5]-1)/2 nxn gur tbyqra engvb!


There is another way to do it, without integrals and convolutions. Say player A has threshold a and player B has threshold b, with a <= b. Write a 3x3 table of Player A's win probabilities for the ranges 0 to a, a to b, and b to 1. It will have 1/2 on the diagonal, 1 below, 0 above.

After some simplification, the total Player A win probability is (-ba^2 + (b^2-b+1)a + (b^2-b+1))/2. Find the a that maximizes this probability for any given b by taking the derivative and setting it to 0, which gives a = (b^2-b+1)/(2b).

The game is symmetric, so at the Nash equilibrium a == b. Now solve a = (a^2-a+1)/(2a) to get the golden ratio.

The lift over naive 0.5 threshold is very small, though, less than 1%.
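
A quick numeric check of the closed form above, assuming the a <= b convention from the derivation (the best-response expression is only evaluated at the symmetric point):

    from math import sqrt

    def p_win(a, b):
        # P(threshold a beats threshold b), valid for a <= b
        return (-b * a**2 + (b**2 - b + 1) * a + (b**2 - b + 1)) / 2

    def best_response(b):
        # argmax over a of p_win(a, b), from setting the derivative to zero
        return (b**2 - b + 1) / (2 * b)

    phi = (sqrt(5) - 1) / 2            # ~0.618
    print(best_response(phi), phi)     # fixed point: best response to phi is phi
    print(p_win(0.5, phi))             # ~0.496, i.e. the naive 0.5 loses by under 1%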


This... rings so right... but me and probably many other programmers here lack the math to see the explanation immediately in front of our eyes when we hear the answer. Can you please elaborate?

Edit: thanks. This is convincing, but... given that we're looking for a fixed point of a process like... [bah, I lack words to describe a vague, non-rigorous intuition]... I expect that there is a convincing one-sentence explanation that also shows it to be the golden ratio. Am I wrong? Can anyone give a simpler explanation?


Your intuition is better than mine! I crunched it out (just now edited my answer to show how) but still have no intuitive sense of why it should be the golden ratio. Would love to hear some intuition for that!


Saying "It's intuitive to me that this answer someone worked out and told me is correct" is cheating.

But generally...

The process is something like:

If you choose 0.5, then 0.625 beats you because that is the expected average of someone redrawing below 0.5. If you choose 0.625, then 0.617[2] beats you because that is the expected average of someone redrawing below 0.625. If you choose 0.618, then 0.6182 beats you...

Lets ignore those numbers for a moment and focus on the process: "Given a number to redraw below, redraw below the expected value of redrawing below that number".

Trying to find a fixed point for this kind of process, the golden ratio has some kind of... mythological fit? Like, I've heard so many stories like this that ended up with a formula involving the golden ratio...

I wouldn't necessarily expect the answer to be phi, but I would so not be surprised to find it to be some formula involving phi, a rational factor, and maybe pi (just because I have absolutely no intuition for which series happen to sum to rational factors of pi).

[2] I'm getting these numbers by simulation, it's too late for me here in Israel to do math
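
For what it's worth, the iteration described above can be written down directly, using E[final value | threshold t] = (t + 1 - t^2)/2 (that's where the 0.625 and 0.617 come from). As noted further down, maximizing expected value isn't the rigorous justification, but the iteration converges to the same number:

    t = 0.5
    for _ in range(20):
        # expected final value of a player who redraws below t
        t = (t + 1 - t**2) / 2
    print(t)  # ~0.618034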


OK I almost got it (edit: rot13 isn't very good at scrambling one-variable equations, so read at your peril, spoilers ahead):

Jr'er ybbxvat sbe n svkrq cbvag bs "vs gur bccbarag pubbfrf gb erqenj orybj k, jr erqenj orybj uvf rkcrpgrq inyhr".

V gevrq gb qrevir gur sbezhyn sbe uvf rkcrpgrq inyhr n srj gvzrf naq nyjnlf tbg pbashfrq ba jurer V fubhyq hfr k, jurer (1-k) naq jurer 0.5, fb riraghnyyl V gevrq qrevivat gur fcrpvny pnfr jurer k=0.625 naq gung jnf rnfvre:

(1-0.625) [bqqf bs abg qenjvat ntnva] * (1 + 0.625) / 2 [rkcrpgrq inyhr va guvf pnfr] + 0.625 [bqqf bs erqenjvat] * 1 / 2 [rkcrpgrq inyhr va guvf pnfr]

Fb va trareny vg jbhyq or:

s(k) = (1 - k)(1 + k) / 2 + k / 2

Gur svkrq cbvag vf jura s(k) = k, be

(1 - k)(1 + k) / 2 + k / 2 = k

naq...

uggc://jjj.gvtre-nytroen.pbz/qevyy/(1-k)(1_k)/2_k/2=k/

Guvf tvirf n pybfr rabhtu nafjre? Fb V cebonoyl unir fbzr fznyy zvfgnxr ohg unir gur evtug vaghvgvba?

V arrq gb tb gb fyrrc, fb V'yy yrnir vg sbe gur vagrearg gb svaq zl fznyy zvfgnxr, nffhzvat gung'f jung vg vf, naq V pbafvqre zl rkcynangvba orggre orpnhfr vg erdhverf bayl uvtufpubby-yriry zngu :-)


why do you seek the fixed point and not the k value that maximizes s(k) = (1 - k)(1 + k) / 2 + k / 2 ?


s(k) is the function for "The number I should choose if my opponent chooses k" [1]. We arrived at the formula by asking "what is the best number for us to choose assuming our opponent chooses k?".

Now, obviously a smart opponent would not choose a number k where k != s(k), because s(k) would be a better choice. So the only numbers it makes sense for him to choose would be fixed points of s(k), points where k = s(k). And the smartest thing for me to do in that case would be to choose s(k), which just so happens to be equal to k.

[1] which may be more or less than s(k), e.g. if my opponent chooses to always redraw, i.e. k=1, it should be the same as if he chooses to never redraw, k=0, and we expect for both cases s(k)=0.5 < 1 (and it indeed is)


Why do you bring up expected value in the calculation. You may end up with the right result, but there's no good justification for why expected value should come into play.


Indeed it is!


The problem seems ill-defined to me in the same sense that Bertrand's paradox is (https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)).

Your optimal strategy involves maximizing an objective function that depends upon your opponent's strategy. "Assume nothing" about your opponent is an incomplete problem specification, because you must assume something in order to determine which function to maximize.

For example, consider that your opponent chooses to redraw any time his number is greater than 0.5. His expected value then becomes less than 0.5, which means that if you choose to redraw any time your own number is below 0.618..., then your strategy is suboptimal.

So, we have to assume something about the opponent's strategy. Is it a uniform strategy state space? Is it a state space where both players are superrational?


Assume you're playing against me :)


This seems so simple.

Decisions and draws are independent - we can ignore the other guy and just go for the highest value. Draw the first number. If it's below 0.5, draw again, since the odds then are that the next draw will be higher.

Is there more?


I was 100% sure that redrawing below 0.5 is optimal, but a simple simulation disagrees:

    from __future__ import division  # needed on Python 2 so the win fraction isn't floor-divided
    import random

    def tournament(a, b, n=1000):
        # fraction of n rounds in which strategy a beats strategy b
        return sum(a() > b() for _ in xrange(n)) / n

    def redraw_below(c):
        # draw once, redraw once if the first draw is below c
        x = random.random()
        if x < c:
            x = random.random()
        return x

    In [23]: tournament(lambda: redraw_below(0.5), lambda: redraw_below(0.6), n=10000000)
    Out[23]: 0.4948004

    In [24]: tournament(lambda: redraw_below(0.5), lambda: redraw_below(0.6), n=10000000)
    Out[24]: 0.4948039

    In [25]: tournament(lambda: redraw_below(0.5), lambda: redraw_below(0.6), n=10000000)
    Out[25]: 0.4948802
`redraw_below(0.5)` only beats `redraw_below(0.6)` 0.4948 of the time, very consistently, over three sets of ten million rounds.

This is despite `0.5` giving a better average (0.624 vs. 0.620).

I'll think about why, but the result is very consistent and can't be ignored.


It makes sense that the highest expected value doesn't necessarily indicate the best strategy.

For example, suppose you're playing against someone who has taken the "reroll values below 0.5" strategy, giving them an expected value of 0.625.

If you roll the value 0.55, you expect to lose more than half the time, because their expected value is 0.625. So you should reroll anyway.


I think this is the key. Given that the game is winner-take-all, maximising your expected value isn't necessarily the right way to go. To take an extreme example, suppose I had a magic strategy that gave me a value of 1 every few throws and a lousy number most other times. That might give me a good expected value, but I'd still lose most rounds.

So what's the right objective? Trying to beat your opponent's median score, so that you win more than half the time?

Edit: I think you're on the right track, but I'm not sure I believe that you will lose more than half the time if you land below your opponent's expected value. I think that assumes their distribution of scores is non-skewed, which is not obvious to me.


I had a longer answer, but the distribution is important. A 0.5001 has a less than 50/50 shot of winning.


When all else fails, test.


I am very humbled.

The reason I'm wrong is that you are opposing another person's numbers - the expected dollars for a given number isn't linear. A .30 is way less than half as likely to win you money as a .60 is.

Because of this, the shape of your distribution matters more than its "average" value.

I can't believe I forgot that AVG(F(X)) ain't necessarily F(AVG(X))


- edit: To explain, why not 0.5? Because the probability of your opponent having drawn a smaller number than 0.5 after two attempts is <0.5 and the probability of a larger number is >0.5. A strategy of keeping >=0.5 after the first draw will lose against the optimal strategy on average.


Definitely not 67% of cases, more like 50% + epsilon:

    import random

    def round():
        # x plays the "redraw below 2/3" strategy, y plays "redraw below 0.5"
        x = random.random()
        if x <= 0.66666:
            x = random.random()
        y = random.random()
        if y <= 0.5:
            y = random.random()
        return x >= y

    sum(round() for _ in xrange(10000000))
    5001450
    sum(round() for _ in xrange(10000000))
    5004434
    sum(round() for _ in xrange(10000000))
    5000238
I also ran a test of 0.6666 vs. 0.625, and... sorry to say, but 0.625 wins hands down.


Suppose you're facing someone using that strategy. They have a 50% chance of getting 0.5+ on their first draw, a 25% chance of getting 0.5+ on their redraw, and a 25% chance of ending up under 0.5 after their redraw.

Now, say you get 0.500001 on your first draw. There is a 25% chance they got less than 0.5, so keeping it wins 25% of the time. If they are below 0.5, a reroll beats them about 75% of the time (50% you land over 0.5 and win, plus 25% you land under 0.5 and still win).

If they are above 0.5, a reroll still gives you about a 25% chance of beating them. So rerolling takes you from a 25% win chance to .25 * .75 + 0.75 * 0.25 = 0.375, i.e. 37.5%, which makes the reroll the better play.
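
A quick check of those numbers, assuming the opponent redraws below 0.5 and you are sitting on a first draw of roughly 0.5:

    import random

    def opponent():
        # redraws once if the first draw is below 0.5
        x = random.random()
        return random.random() if x < 0.5 else x

    n = 1000000
    keep = sum(0.500001 > opponent() for _ in range(n)) / n
    reroll = sum(random.random() > opponent() for _ in range(n)) / n
    print(keep, reroll)  # ~0.25 vs ~0.375, so rerolling is the better play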


That was my analysis as well. What the other guy does is unknown and therefore irrelevant. If you can improve your current expectation, you do. You can trivially search over that strategy space and see that, yep, 0.5 is the maximum.

  #include <stdio.h>
  #include <stdlib.h>

  /* sweep redraw thresholds d and print the average final value for each */
  int main(int argc, char *argv[]) {
    for (double d = 0.25; d < .75; d += 0.01) {
      double e = 0;
      for (int j = 0; j < 100000; j++) {
        float r = ((float) random()) / ((float) RAND_MAX);
        if (r < d)
          r = ((float) random()) / ((float) RAND_MAX);
        e += r;
      }
      printf("d %lf e %lf\n", d, e / 100000);
    }
    return 0;
  }


Careful, working in a language without a REPL encourages you to explore less. I was working in Python, and I did this and was about to post it, but then I asked myself "well, it can't be so easy, right? Why don't I also run a two-player simulation and make sure this strategy actually beats 0.6 first?", and - see my sibling post - it surprisingly loses. I can't yet explain why[1], but you can't argue with facts.

[1] probably has to do with the probability distribution having a shape more interesting than a Gaussian bell


> I can't yet explain why

Your comment and simulation prompted me to attempt an explanation of where danielvf's analysis is incorrect, which I've posted here https://news.ycombinator.com/item?id=12941330


The problem with your analysis is that "go for the highest value" is not equivalent to "maximize expected value of the number you end up with"

For example, consider a similar game where you draw and then optionally re-draw (like in the original game) but the amount of money you win if you end up with a number p is p if p < 0.8 and p + b if p > 0.8, where b is some constant. Then you're also trying to "go for the highest value". However it is clear that in the limit of large b, your best strategy is to redraw if p < 0.8.
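
A throwaway simulation of that variant (the 0.8 cutoff is from the example above; a bonus of 100 stands in for "large b") shows the 0.8-threshold player earning more per round even though their raw numbers are lower on average:

    import random

    def payout(threshold, cutoff=0.8, bonus=100.0):
        x = random.random()
        if x < threshold:
            x = random.random()
        return x + bonus if x > cutoff else x

    n = 1000000
    for t in (0.5, 0.8):
        print(t, sum(payout(t) for _ in range(n)) / n)
    # threshold 0.8 averages ~36.6 per round vs ~30.6 for threshold 0.5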


Consider heads or tails: you get two flips; what are your chances of getting heads at least once? 50% for the first flip; then there's a 50% chance you didn't get it, and of that you have a 50% chance to still get it on the second: .5*.5 = .25. So for the two chances p(1) = .5, p(2) = .25, therefore overall p1 + p2 = .75 for redrawing at <50%, or for getting heads at least once in two tries.


You are agreeing with my solution?

Though in this problem it seems the "average" final number would be 0.625, not 0.75 - since half the time you have an 0.75 expected outcome and the other half a 0.5 expected.


I think the point is that your strategy isn't independent of your opponent. Maybe a similar example is nontransitive dice: https://en.wikipedia.org/wiki/Nontransitive_dice

In other words, it isn't sufficient to shoot for the highest score possible.


Absolutely. I was inspired by non transitive dice to create the problem.


No, having 2 chances obviously increases the probability. I don't see what you're averaging, but the formula for the probability of an outcome of 2 unrelated events, by the general addition rule, is p(1 or 2) = (p1 + p2) - (p1*p2). In this case .5 + .5 - .25 = .75; I was just trying to phrase it in an intuitive way.


Imagine being told exactly what number the other guy ended up with. Then, instead of maximizing value you would need to just redraw if you're below that number.

I haven't solved it yet, but it's definitely some kinda recursive formula where you have to assume the other guy is also playing the optimal strategy.


> but it's definitely some kinda recursive formula where you have to assume the other guy is also playing the optimal strategy.

I think it can be solved by searching for a pure strategy nash equilibrium


This sounds like a variant of the two envelope problem: https://en.m.wikipedia.org/wiki/Two_envelopes_problem

So is this one of those questions that's not possible to answer, and you're just trying to see how they handle the stress of the unsolvable?


But here you know the content of the "envelope" before you decide whether to switch or not.

If the adversary was only allowed one pick, I guess I'd only re-draw if I had less than 0.5, since statistically you'd be more likely to end up with a greater number than a smaller one. I'm not sure what the maths are, but a simple implementation gave me an average value of about 0.625 using this strategy, so definitely an improvement over a single pick.

But I have the feeling that since the adversary can choose the same strategy, maybe you should be a little more aggressive and redraw even if you are at 0.5? But then the adversary would do the same, so I guess the solution would eventually converge to a value a little above 0.6?

That'd be my guess at least.


It isn't. There's a clear, well defined answer.


I'm just a programmer, but this seems really easy to me. What am I missing?


See my post. As a mathematician, maybe you're missing some interesting theory of why if the opponent has an average of 0.625 and you got 0.55 you should redraw. However, as a programmer, the first thing you're missing is running a quick Monte Carlo simulation and seeing whether your idea actually wins :-)


If first draw is >= .7 don't redraw since chance of being larger than .7 is 50% for two draws. 50% probability is chosen since if you pick any number lower than .7, the probability increases favoring your opponent drawing a higher number.

calc: p(a union b) = p(a) + p(b) - (p(a)*p(b)); p(>=.7)=3/10; .51 = (3/10) + (3/10) - (.3)(.3)


Why p=.5 for > .7 w/ 2 draws?


I'm no quant but pretty sure to calculate the probability of two random events you use a union function, basically means probability of an event occurring in a or b. https://people.richland.edu/james/lecture/m170/ch05-rul.html


Just curious, can you share your role/organization and the people you typically pass and the people you typically fail in your interviews?


Draw. If > 0.5 keep it. If not draw again.


Agreed. I think the other 'more complex' answers assume there's communication or that you know your opponent's strategy. That's not in the problem statement.


Yeah, this came up the last time murbard2 posted this: https://news.ycombinator.com/item?id=8699033

The idea is to find a Nash equilibrium, but the question is stated in the wrong way, since it gives no indication that you're in a position to make any assumptions about your opponent's strategy.


It's an interview question, you're free to ask for clarifications.

That said, finding the Nash equilibrium is precisely the natural thing to do if you can't make assumptions about your opponent's strategy. It's a worst case scenario analysis. You can't statistically lose out to your opponent if you pick it. This does not mean that you know your opponent strategy.

Consider the context. This is algorithmic trading. If your strategy can be gamed, it will be gamed by someone else. Picking anything but the Nash equilibrium means you can be gamed.

A good candidate will immediately gravitate towards this solution; there is no ambiguity that this is the expected answer.


In the absence of any information about your opponent, you should assume that they're equally likely to be using any strategy.

>You can't statistically lose out to your opponent if you pick it.

Right, but you can miss out on an opportunity to fully exploit the fact that your opponent is using a really bad strategy. So if you want to maximize your probability of winning against an opponent with a randomly chosen strategy, it's not the right answer. Nothing in the statement of the question indicates that you are entitled to make any assumptions about your opponent's strategy. And it's not as if determining the best strategy in a context where you lack perfect knowledge of your opponent's strategy is an unknown concept in game theory:

https://www.economics.utoronto.ca/workingPapers/UT-ECIPA-ECP...

I feel like the way you state the question at present may end up selecting for candidates who think exactly like you do. Why not just add the statement that the players have mutual knowledge of their rationality? That removes the ambiguity, and doesn't make the question fundamentally any easier.


1) There's no reason why you should assume every strategy is equally likely. It's absolutely not the case. When you play poker, do you assume your opponents play their cards randomly just because you don't know their strategy? It's not a realistic prior at all.

2) Playing the Nash equilibrium means that you may not be fully exploiting the opponent, true. It's also possible that your opponent's strategy is to give you a million dollars if you say "abracadabra". So how about that? Check-mate, interviewer.

3) If you want to make the problem harder on yourself, be my guest and try to assume a realistic distribution of strategies for your opponent. That distribution will have a peak around the Nash equilibrium, so you'll end up having to calculate it anyway.


Your way of looking at it is perfectly sensible. But a sensible person might also say that a uniform prior over strategies is appropriate in the absence of any information about your opponent. Choice of priors is always going to be partly subjective, and reasonable people can make different choices.

I would point out that in a real-world scenario, a uniform prior is barely any less realistic than a prior derived from the assumption of mutual knowledge of perfect rationality. Is your poker opponent equally likely to be doing their utmost to lose as to win? No. But on the other hand, they're not likely to be tracking 100% of available information and making the mathematically correct choice all the time.

In general, the question of what you should assume about an unknown opponent's strategy is a tricky philosophical question, and reasonable people can gravitate to different points of view on this issue. I think that might be why so few people give the answer you want. I was just suggesting that explicitly mentioning mutual knowledge of perfect rationality might help.


Do you think it's equally likely that a random opponent will redraw below 0.01 as it is that they will redraw below 0.5 or below phi?

Most people do not give the right answer not because of philosophical issues around the Nash equilibrium but because they have a hard time understanding that looking to maximize the expected value is not optimal.


>Do you think it's equally likely that a random opponent will redraw below 0.01 as it is that they will redraw below 0.5 or below phi?

If random really means random, then that's presumably true by definition. If we're talking about an opponent randomly selected from a representative group of people, then no, but equally, I wouldn't assume that they're going to pick the golden ratio as their threshold either.

The thing is that maximizing the expected value is optimal in the absence of any information about the opponent's strategy. But sure, I can easily believe that many people would get the question wrong anyway.


> If random really means random, then that's presumably true by definition.

No it's not. Randomness is always with respect to a given distribution. There is nothing special about the uniform distribution over a threshold. The space of strategies also includes strategies where your opponent redraws only when they hit a number between 0.1 and 0.2, or where they redraw if the second decimal digit of the first draw is even.

> The thing is that maximizing the expected value is optimal in the absence of any information about the opponent's strategy.

Your statement is meaningless without a prior on the strategy. Under which prior is maximizing the expected value optimal? If you assume the other party randomly picks a threshold between 0 and 1, then the optimal answer is (sqrt(39)-3)/6 not 0.5. Not that this prior makes any sense. It's far more sensible to assume a strategic opponent.

Rethrowing below 0.5 is typically the optimal strategy when the opponent has no strategy at all, i.e. always or never rethrows.


If a strategy is any function from the preceding game state to keep/redraw, then I don't have the mathematical chops to specify a uniform prior over all such strategies and compute the optimal strategy that way. (In fact it may not necessarily be possible to do this in a sensible way, as Bertrand's paradox suggests, as another poster pointed out.)

However, there is a much simpler argument suggesting that maximizing EV would be the right approach in the absence of any knowledge about the opponent's strategy. Say the opponent chooses to redraw at random. Then in effect they are drawing once. If we redraw when the value is < a, then we just want to maximize the probability of winning, (1-a)(1+a)/2 + 0.5a, giving a = 0.5.

Modeling a process that we know nothing about as a random process seems reasonable. And the reasoning above does not actually involve computing EVs, though it happens that the right answer is the answer that maximizes the EV.

To justify this a bit further, we can reason as follows. If we know nothing about the opponent's strategy, then their initial draw tells us nothing about whether or not they will choose to redraw. So our estimate of their probability of redrawing, p, must be independent of their initial draw. Then the probability of winning is 0.5ap + (1-a)(1-p)(1+a)/2 + (1-a)p(1+a)/2 + 0.5a(1-p), which is invariant with respect to p, and which is at its maximum when a = 0.5.
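
A small symbolic check of that expression, assuming sympy is available (the (1+a)/2 term is the mean of the kept first draw):

    import sympy as sp

    a, p = sp.symbols('a p', positive=True)
    # split by our redraw (prob a) vs keep (1-a) and their redraw (p) vs keep (1-p)
    win = (sp.Rational(1, 2)*a*p + (1-a)*(1-p)*(1+a)/2
           + (1-a)*p*(1+a)/2 + sp.Rational(1, 2)*a*(1-p))
    print(sp.simplify(win))               # p cancels: equals (1 + a - a**2)/2
    print(sp.solve(sp.diff(win, a), a))   # [1/2]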

>It's far more sensible to assume a strategic opponent.

Well, this is going round in circles, but I would say that this is sensible only if you know something about the opponent! I don't see how it's sensible to assume anything about your opponent if you know nothing about them. And if we're talking about the "real world", then of course you do know something, but you certainly don't know that they're perfectly rational.


Alternative option: spend time on other things because quants are about the big bucks


Quants are actually all about winning 0.0001 cent on average on every iteration of a game played 1 billion times a day.


I know, but it was not defined in the spec


Re-draw if the number is smaller than 2/3.


To those who want to know more about quant jobs,

Mark Joshi has a cheat sheet intro to the profession: http://www.markjoshi.com/downloads/advice.pdf

Joshi co-authored the standard reference for quant interview prep, Quant Job Interview Questions and Answers - interview questions with solutions. It's available on Amazon.

If you just wanna try brain teasers, google something like "ext:pdf quant interview". There are many of them.


I'm partway through the ML class on Coursera and have only recently rekindled my interest in math. However, my background is in accounting (undergrad, and I did a full year as an accountant) and software engineering (masters in SE, and full time now for some years). Is there a path to actually move into a quant job given a software background, for someone who's mathematically inclined? My fear is that I load up on extra "free" math classes, teach myself through MOOCs and self-study, and won't have the required degree or connections to break into the field. Is there a recommended path (or is it even possible) to make the major switch from web dev to quant?


Standard path as of maybe half a decade ago was to do a masters or phd in financial engineering (sometimes alternatively named computational finance).

I personally believe it's hard to self-study since there's many graduate-level math prerequisites that are required if you don't want to just be a programmer.

For example, stochastic calculus, which is the area of math around Black-Scholes option pricing, requires graduate-level probability (i.e., one taught with measure theory, which is itself a graduate-level class after analysis).

I do know of some people who tried to put together their own curriculum though: https://www.quantstart.com/articles/How-to-Learn-Advanced-Ma....


It's definitely possible with a masters in SE. There are many types of quant (see the markjoshi.com link above) - it will be easiest for you to become a quant developer with your background in software engineering. This will be much more focused on the software development side of things rather than the math/modelling side of things, but depending on the firm will give you large exposure to how the modelling side works. They are very competitive positions, but if you are a strong programmer that can prove you have knowledge of financial markets, have relevant side projects, and can understand and communicate the math behind what you build, you will have a leg up on the competition.


Thanks. Your reply gave me a boost of inspiration. I'm working now to learn more of the math and modeling side of things, but I know it's going to be a _long_ hard road. Thanks for the link, that site seems to have some great info - I particularly like this http://www.markjoshi.com/downloads/advice.pdf.

I'm curious what sort of side projects I could try that others could take notice on.


Sigh...I guess I'll be sticking with web dev.


If you can come up with a profitable strategy then you can still be hired, or just trade yourself. Without that there is no point being a quant. Besides, quants still need developers - those integrals will not help with building maintainable trading apps.

I would say those questions are for recent graduates who want a junior quant role.


I wouldn't be discouraged. I just had a look and parts 1-4 are simple maths, and everyone should know those maths anyway. Then part 5 is an application of simple arithmetic (except questions 8 and 12) to options, which means if you already know arithmetic you only need to learn what an option is.

What I mean is that you should be learning basics maths anyway, and when you know a bit you can solve all those questions.

What a lot of people could benefit from is learning how to learn.

EDIT: Thanks for downvotes guys. Should I also downvote when someone says "don't be discouraged with distributed systems, or asynchronous programming" etc? I don't know those things, so maybe I should downvote those people?


You are getting downvoted because "everyone should know those maths anyway" is dismissive (I think you meant "everyone [who is applying for a quant position] should know those maths anyway," which is more fair)


No, I did mean everyone. Or everyone who cares about understanding the world. Or rather, everyone who cares about understanding the world should want to learn more maths than just sums and multiplication.

Anyway, thanks for the feedback.


| everyone who cares about understanding the world.

Those fluent in math really need to drop this egocentric view. If you can't do matrix manipulations you aren't curious about the world? Give me a break. The breadth of topics that yield understanding of various parts of our world is vast. Do you have an early-university understanding of history, politics, math, sociology, psychology, economics, law, capitalism, etc. etc. etc.?


If you don't grok matrix manipulations (which are a very simple thing to understand: matrices are linear transformations and matrix multiplication is composition of linear transformations; for some reason this is only taught as a theorem, not as the first lesson in Linear Algebra), you're missing out on a really great tool that helps us a lot in understanding almost every kind of large-scale process precisely. Without this tool, we'd be back to the 18th century in math, physics, engineering, computer science...

It's pretty high up there in the "list of things to learn if you're curious about the world". Of course, so are history, economics, etc.
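
Concretely, that claim is just that applying one matrix and then another is the same as applying their product once; a short check with numpy, assuming it's installed:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.random((3, 3)), rng.random((3, 3))
    x = rng.random(3)

    # composing the two linear maps equals applying the product matrix once
    print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True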


I didn't say you shouldn't learn those things, did I?


I guess some of these could be considered basic math, maybe if you associate only with physicists and statisticians. The calculus / related-rates problems are straightforward, but most folks will have had no familiarity with Brownian motion, matrices, and most other topics here. Hell, there is little point for > 99% of software developers to think about or use arctangent in any given year.


I have a feeling you have a different definition of "simple maths" than most people.


yea the reality is that the vast majority of people know little to no calculus or linear algebra


I even kinda enjoy it, but the last time I had to do any kind of separable partial differential equations was... a decade ago? There are very few domains where it's commonplace.


Hello, I'm hoping for a suggestion for some math courses that will help me understand these questions. I only took algebra in high school and didn't go to college. Since then, I've looked at a couple of things (basic proofs, lambda calculus) but quickly was overwhelmed. What is a good starting point for someone who can reverse and exploit all sorts of binaries but does not know what a derivative is? Just start on Coursera? My other question is - what can I achieve with this knowledge? What use is it? What problems can I solve? I think one thing that kept me progressing while learning to code was that I could see what I was looking to achieve, and I reached it. I learned it because I wanted to understand it. When I look at math, I see something that I will never be able to understand... There is a part of me that has a strong desire to learn it but the other half tells me to leave it alone. Is it understandable in the same way programming is understandable? By learning the syntax?


Part 1 is geometry, calculus and linear algebra. An advanced high-school education in mathematics would let you answer these questions.

Parts 2 and 3 are probability and statistics, but they are the kind of advanced statistics that is typically only taught to math students. Second and third year university courses in probability and statistics would let you answer these questions.

Part 4 is statistical mechanics, I think. That's a third-year physics course, and one of the more mathematically demanding ones, at that.

Part 5 is specific to finance. The course you are looking for is probably called Mathematical Finance.

The conventional route to learning all this stuff is an undergraduate degree in mathematics, with an emphasis on statistics. Now, getting there is a long road, but you have to start somewhere. You might possibly be able to learn it all on your own, but it's easy to get stuck and hard to get unstuck without a teacher to help you.

If you are in the US, start by taking a couple of classes in a community college. In your case, the course you are looking for is probably called Pre-Calculus or maybe Algebra II. If things go well you can continue from there.

As for what use this is, apparently you can get a good job as a quantitative financial analyst if you know this stuff.


Thanks a lot for the reply, really appreciated.


I do not see any answers for the questions, do you mind giving them to me so I can check mine?



Wow - wish I could upvote this more than once, it's a sad day when someone suggests that some knowledge is within reach and they're downvoted because what they suggested is "too hard"


And I'll be sticking with business dev. Major respect to those who crush these things.


Surely there are areas between web dev and quant that you can do and are interesting.


I feel like solving a lot of these questions is just a matter of either domain knowledge or not being intimidated by unfamiliar vocabulary.

I'd be willing to bet that the vast majority of people who got through a college CS degree would be able to pass this after a couple months of self-study.

(On that note, anyone want to chime in on what the job options would look like if someone had a CS degree and could pass this?)


I graduated ~3 years ago with a CS degree, and while I probably could have done most of these questions from sections 1-4 at that time, now all this looks very daunting. Just figuring out how to set up the differential equation for Q1 took a long time, and I could, almost physically, feel my thought processes struggling.

It's just such a terrible feeling, knowing at one point in the past you would have been more speedy with something and noticing what time has done to your skills.

Do any of you have any strategies for not losing knowledge? Would re-reading textbooks every couple of months be a solution? How do I decide if it's even worth it - I don't necessarily _need_ this type of knowledge for what I'm doing now, but losing skills you know you once had feels awful.


Perhaps useful if you are interviewing, this is what I give to my undergraduate students.

Mathematical Finance Cheat Sheet:

https://github.com/daleroberts/math-finance-cheat-sheet


Thanks for sharing this. I actually see some equations I needed for my last midterm, lol.


Didn't expect to see a link up to my graduate alma mater (Kent State U. mathematics department).

I know Oana, if there are any questions about the problems or the course sequence that they fit into let me know.


"Quant" = Quantitative Analyst?


Yes


Quantum physicist, nah jk


Next time my wife starts looking at jobs for me and says "hey this quant thing looks like it pays a lot, why don't you apply?" I'll just show her this.


Is there an answer key with detailed explanations to this?


Questions 8 and 9 remind me of a math YouTube video I saw recently about "weird" infinite series that can sometimes be solved quite easily.

IIRC, consider sqrt(x + sqrt(x + ...)) = x

You need to check the series converges (this can sometimes be tricky, but it seems the questions tell you they converge). Then you can do this substitution for the infinite piece:

sqrt(x+x) = x

x^2 = 2x

x^2 - 2x = 0

...

and similarly for the second one: y = 1/(1+y)

[IIRC there are situations where a different substitution yields a different result - I don't recall the details]
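
A quick numeric sanity check of both substitutions, assuming the first question is the classic sqrt(2 + sqrt(2 + ...)) form and the second is the all-ones continued fraction:

    from math import sqrt

    r = 0.0
    for _ in range(50):
        r = sqrt(2 + r)    # sqrt(2 + sqrt(2 + ...)) -> 2
    print(r)

    y = 1.0
    for _ in range(50):
        y = 1 / (1 + y)    # 1/(1 + 1/(1 + ...)) -> 0.618...
    print(y)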


This is the video: "Ramanujan's infinite root and its crazy cousins" https://www.youtube.com/watch?v=leFep9yt3JY&t=687s


According to this advice https://www.quora.com/Is-stochastic-calculus-helpful-to-get-... these interview questions are not the most important thing


For those who speak french : https://www.amazon.fr/101-quizz-banquent-Math%C3%A9matiques-... by Pr. Gilles Pagès (LPMA).


Unrelatedly Gilles Pagès is an awesome and humble prof


Energy trading quant here. If anyone interviewed me with this crap I'd walk out immediately.


Care to elaborate?

I guess those Ramanujan-like infinite series are a bit like trivia questions, right? Does anyone really have an intuition to solve something like that?


It's just pattern matching. How is this pattern formally defined (for the first problem)? x = sqrt(x + sqrt(x + sqrt(x...))) is informal, so the real definition is a recursion x = sqrt(x + THING), and THING = sqrt(x + THING), which is convenient, because that's just x again. So x = sqrt(x + x) = sqrt(2*x). Oftentimes with mathy stuff (but not always), if you have trouble, write out definitions for the confusing parts and work with those instead, recursing until it makes sense.


Yeah. Those questions seem to be "hey have you seen this cool trick before?" I used to be excited about the American Math Competition back in high school before I realized that memorizing things like the Sophie Germain Identity, the Cauchy-Schwarz inequality, or the Chinese Remainder Theorem would get you through 99% of the problems on the exam. It doesn't test original thinking — it tests if you're "in the know".


Absolutely. Just heading out now, but I'll write out a response for you later on. In short, Being a quant is about either measuring risk, or making money trading. This doesn't test either.


Those are just simple tricks. The first one is 2; the second one I haven't worked out yet, but it's x where 1/x = 1+x. Just transform those into something with the original in it (square the first, invert the second), then replace the infinite series with the variable you were looking for, since you know they'll be equal.


As a trading quant starting this Friday, are there any good forums on the net for reading up on the soft aspects?


What type of trading? Wilmott forums are okay. Any specific questions?


Pretty sure 3.1 doesn't need knowledge of Markov Chains (even if you implicitly use them)


Yeah, looks like I would need a good statistics and linear algebra refresher to have a chance at it

Options pricing is Black-Scholes equation and related equations


What is the background required for quants these days? I have a Master's in math but no formal training in economics or finance. Also, I graduated 10 years ago.


Quickest way to look qualified, at least for derivative pricing, is to go get an MFE.


I'm having flashbacks to my qualifiers. Yeah!


I worked at a hedge fund for 4 years, and these math questions are irrelevant to market structure.

Based on the challenge of most of these questions, I would hazard a guess and say only very recent grads would be able to complete these. I'm 6 years out of Princeton, but I've forgotten how to do probably 90% of these questions.

Parts 3/4/5 require some pretty specific rules that are more rote memory than anything else.


Got an EE degree and a BSc in Math but a 3.0 GPA - still possible to be a quant? Will they chuck out my resume, or will I still get an interview?


It doesn't matter if you can't pass the interview.



