Hacker News

Draw. If > 0.5 keep it. If not draw again.
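A quick Monte Carlo sketch of this strategy (function names are mine, not from the thread). Against an opponent who never redraws, the 0.5 threshold wins about 62.5% of the time:

```python
import random

def play(threshold):
    """Draw uniformly from [0, 1]; redraw once if the first draw is below threshold."""
    x = random.random()
    return x if x > threshold else random.random()

def win_rate(t_a, t_b, trials=200_000):
    """Estimate how often a threshold-t_a player beats a threshold-t_b player."""
    wins = sum(play(t_a) > play(t_b) for _ in range(trials))
    return wins / trials

# Threshold 0 means "never redraw"; the 0.5 strategy wins ~62.5% against it.
print(win_rate(0.5, 0.0))
```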



Agreed. I think the other 'more complex' answers assume there's communication or that you know your opponent's strategy. That's not in the problem statement.


Yeah, this came up the last time murbard2 posted this: https://news.ycombinator.com/item?id=8699033

The idea is to find a Nash equilibrium, but the question is stated in the wrong way, since it gives no indication that you're in a position to make any assumptions about your opponent's strategy.


It's an interview question, you're free to ask for clarifications.

That said, finding the Nash equilibrium is precisely the natural thing to do if you can't make assumptions about your opponent's strategy. It's a worst case scenario analysis. You can't statistically lose out to your opponent if you pick it. This does not mean that you know your opponent strategy.

Consider the context. This is algorithmic trading. If your strategy can be gamed, it will be gamed by someone else. Picking anything but the Nash equilibrium means you can be gamed.

A good candidate will immediately gravitate towards this solution; there is no ambiguity that this is the expected answer.
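To make the equilibrium concrete: restricting to threshold strategies and solving the indifference condition for a symmetric threshold t gives t² + t − 1 = 0, i.e. t = (√5 − 1)/2 ≈ 0.618 (the "phi" mentioned elsewhere in the thread). A simulation sketch (my own) showing that deviating from this threshold doesn't beat it:

```python
import random

PHI = (5 ** 0.5 - 1) / 2  # ~0.618: the equilibrium redraw threshold

def play(threshold):
    """Draw uniformly from [0, 1]; redraw once if below the threshold."""
    x = random.random()
    return x if x > threshold else random.random()

def win_rate(t_a, t_b, trials=200_000):
    wins = sum(play(t_a) > play(t_b) for _ in range(trials))
    return wins / trials

# No deviation from PHI does better than ~0.5 against a PHI opponent.
for t in (0.4, 0.5, PHI, 0.7):
    print(f"threshold {t:.3f}: win rate {win_rate(t, PHI):.3f}")
```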


In the absence of any information about your opponent, you should assume that they're equally likely to be using any strategy.

>You can't statistically lose out to your opponent if you pick it.

Right, but you can miss out on an opportunity to fully exploit the fact that your opponent is using a really bad strategy. So if you want to maximize your probability of winning against an opponent with a randomly chosen strategy, it's not the right answer. Nothing in the statement of the question indicates that you are entitled to make any assumptions about your opponent's strategy. And it's not as if determining the best strategy in a context where you lack perfect knowledge of your opponent's strategy is an unknown concept in game theory:

https://www.economics.utoronto.ca/workingPapers/UT-ECIPA-ECP...

I feel like the way you state the question at present may end up selecting for candidates who think exactly like you do. Why not just add the statement that the players have mutual knowledge of their rationality? That removes the ambiguity, and doesn't make the question fundamentally any easier.


1) There's no reason why you should assume every strategy is equally likely. It's absolutely not the case. When you play poker, do you assume your opponent plays their cards randomly just because you don't know their strategy? It's not a realistic prior at all.

2) Playing the Nash equilibrium means that you may not be fully exploiting the opponent, true. It's also possible that your opponent's strategy is to give you a million dollars if you say "abracadabra". So how about that? Check-mate, interviewer.

3) If you want to make the problem harder on yourself, be my guest and try to assume a realistic distribution of strategies for your opponent. That distribution will have a peak around the Nash equilibrium, so you'll end up having to calculate it anyway.


Your way of looking at it is perfectly sensible. But a sensible person might also say that a uniform prior over strategies is appropriate in the absence of any information about your opponent. Choice of priors is always going to be partly subjective, and reasonable people can make different choices.

I would point out that in a real world scenario, a uniform prior is barely any less realistic than a prior derived from the assumption of mutual knowledge of perfect rationality. Is your poker opponent equally likely to be doing their utmost to lose as to win? No. But on the other hand, they're not likely to be tracking 100% of available information and making the mathematically correct choice all the time.

In general, the question of what you should assume about an unknown opponent's strategy is a tricky philosophical question, and reasonable people can gravitate to different points of view on this issue. I think that might be why so few people give the answer you want. I was just suggesting that explicitly mentioning mutual knowledge of perfect rationality might help.


Do you think it's equally likely that a random opponent will redraw below 0.01 as it is that they will redraw below 0.5 or below phi?

Most people fail to give the right answer not because of philosophical issues around the Nash equilibrium, but because they have a hard time seeing that maximizing the expected value of your final number is not optimal.


>Do you think it's equally likely that a random opponent will redraw below 0.01 as it is that they will redraw below 0.5 or below phi?

If random really means random, then that's presumably true by definition. If we're talking about an opponent randomly selected from a representative group of people, then no, but equally, I wouldn't assume that they're going to pick the golden ratio as their threshold either.

The thing is that maximizing the expected value is optimal in the absence of any information about the opponent's strategy. But sure, I can easily believe that many people would get the question wrong anyway.


> If random really means random, then that's presumably true by definition.

No it's not. Randomness is always with respect to a given distribution. There is nothing special about the uniform distribution over a threshold. The space of strategies also includes strategies where your opponent redraws only when they hit a number between 0.1 and 0.2, or where they redraw if the second decimal digit of the first draw is even.

> The thing is that maximizing the expected value is optimal in the absence of any information about the opponent's strategy.

Your statement is meaningless without a prior on the strategy. Under which prior is maximizing the expected value optimal? If you assume the other party randomly picks a threshold between 0 and 1, then the optimal answer is (sqrt(39)-3)/6 not 0.5. Not that this prior makes any sense. It's far more sensible to assume a strategic opponent.

Redrawing below 0.5 is typically the optimal strategy when the opponent has no strategy at all, i.e. always or never redraws.
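A quick numerical check of that (sqrt(39)-3)/6 figure. If the opponent's threshold T is drawn uniformly from [0, 1], averaging their final-value CDF over T gives G(x) = (x + x²)/2 (my derivation, not from the thread); the win probability for our threshold a can then be maximized on a grid:

```python
def win_prob(a):
    """P(win) with redraw threshold a, vs. an opponent whose threshold is uniform."""
    total = 1 / 4 + 1 / 6                     # integral of G(x) = (x + x^2)/2 over [0, 1]
    tail = total - (a ** 2 / 4 + a ** 3 / 6)  # integral of G(x) over [a, 1]
    # With prob a we redraw (win prob = total); a kept draw at x wins with prob G(x).
    return a * total + tail

best = max((i / 100_000 for i in range(100_001)), key=win_prob)
print(best, (39 ** 0.5 - 3) / 6)  # both ~0.5408
```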


If a strategy is any function from the preceding game state to keep/redraw, then I don't have the mathematical chops to specify a uniform prior over all such strategies and compute the optimal strategy that way. (In fact it may not necessarily be possible to do this in a sensible way, as Bertrand's paradox suggests, as another poster pointed out.)

However, there is a much simpler argument suggesting that maximizing EV would be the right approach in the absence of any knowledge about the opponent's strategy. Say the opponent chooses to redraw at random. Then in effect their final value is a single uniform draw. If we redraw when the value is < a, then we just want to maximize the probability of winning, (1-a)(1+a)/2 + 0.5a, giving a = 0.5.

Modeling a process that we know nothing about as a random process seems reasonable. And the reasoning above does not actually involve computing EVs, though it happens that the right answer is the answer that maximizes the EV.

To justify this a bit further, we can reason as follows. If we know nothing about the opponent's strategy, then their initial draw tells us nothing about whether or not they will choose to redraw. So our estimate of their probability of redrawing, p, must be independent of their initial draw. Then the probability of winning is 0.5ap + (1-a)(1-p)(1+a)/2 + (1-a)p(1+a)/2 + 0.5a(1-p), which is invariant with respect to p, and which is at its maximum when a = 0.5.
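The p-invariance is easy to check by simulation (a sketch under the assumption that the opponent's redraw decision is an independent coin flip):

```python
import random

def win_rate(a, p, trials=200_000):
    """Our threshold-a strategy vs. an opponent who redraws with fixed probability p."""
    wins = 0
    for _ in range(trials):
        x = random.random()
        mine = x if x > a else random.random()
        theirs = random.random()
        if random.random() < p:  # redraw decision independent of their draw
            theirs = random.random()
        wins += mine > theirs
    return wins / trials

# The result depends on a but not on p; at a = 0.5 it is ~0.625 for any p.
print(win_rate(0.5, 0.1), win_rate(0.5, 0.9))
```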

>It's far more sensible to assume a strategic opponent.

Well, this is going round in circles, but I would say that this is sensible only if you know something about the opponent! I don't see how it's sensible to assume anything about your opponent if you know nothing about them. And if we're talking about the "real world", then of course you do know something, but you certainly don't know that they're perfectly rational.



