People seem to be thinking "if the predictor is not 100% accurate then I don't have to worry about the predictor, since it has already made its decision and now I should just get as much as I can". But if it's 100% accurate, your decision implies (if not causes) how much money you'll get, so shouldn't you make the decision that implies the most money?
It doesn't feel right for there to be one argument that applies below 100% and another at exactly 100%.
> It doesn't feel right for there to be one argument that applies below 100% and another at exactly 100%.
Exactly. The reason to choose one box holds whether the predictor is 90%, 99%, or 100% accurate. I think lowering the predictor's accuracy makes two-boxing look more appealing, though.
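A quick expected-value calculation makes this concrete. This is a minimal sketch, assuming the standard Newcomb payoffs (which the thread doesn't state): the transparent box always holds $1,000, and the opaque box holds $1,000,000 iff the predictor foresaw one-boxing.

```python
# Expected payoff of each strategy against a predictor with accuracy p.
# Assumed payoffs (not stated in the thread): transparent box always holds
# $1,000; opaque box holds $1,000,000 iff one-boxing was predicted.
SMALL, BIG = 1_000, 1_000_000

def one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the box.
    return p * BIG

def two_box(p):
    # With probability 1 - p the predictor wrongly foresaw one-boxing,
    # so you get the $1,000 plus a mistakenly filled opaque box.
    return SMALL + (1 - p) * BIG

for p in (0.9, 0.99, 1.0):
    print(f"p={p}: one-box ${one_box(p):,.0f}, two-box ${two_box(p):,.0f}")
# p=0.9:  one-box $900,000,   two-box $101,000
# p=0.99: one-box $990,000,   two-box $11,000
# p=1.0:  one-box $1,000,000, two-box $1,000
```

Under these payoffs, one-boxing wins in expectation whenever p * BIG > SMALL + (1 - p) * BIG, i.e. for any accuracy above roughly 50.05%, so the predictor has to degrade to nearly a coin flip before two-boxing pays better on average.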
The more I think about the problem, the more fiendish it seems, and the more I understand why it's a paradox.
I think that regardless of whether you agree with the author's conclusions about the correct strategy, they're right that the paradox arises because it pits the optimality of two different strategies against one another, where each strategy depends on the causal processes involved. The paradox raises a lot of questions about prediction versus causality and free will, and it's fair to ask (as another poster suggested) whether the machine in the paradox can ever exist.
The problem is that in the case where the machine is 100% accurate, it is still not causing anything. So although it's fair to conclude in that scenario that the optimal strategy is to one-box, that conclusion doesn't make sense from a causal perspective: you shouldn't have any causal agency over a decision the machine has already made.
> it's fair to ask (as another poster suggested) whether or not the machine in the paradox can ever exist.
Here's something interesting to think about: assuming the machine could exist in some possible world, how does causality work in that world?
To be certain of your future behavior, the machine would need a "time oracle" that allows it to view the future. The machine consults its time oracle, sees you making a box choice, and puts money in the boxes based on the choice you made.
But this is literally you having causal agency over the machine's decision, isn't it? After all, it acted because of what it saw you do.
Now, imagine that the machine has the same time oracle, but it's randomly unreliable and gives the machine the wrong idea 10% of the time. Then the one-boxer's expected utility calculation starts looking pretty good, doesn't it? Because 90% of the time your actions causally determine the machine's.
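One way to see this is to simulate the unreliable oracle directly. This is a hypothetical sketch with the same assumed payoffs as above; `play` and `reliability` are illustrative names, and 90% reliability means the oracle reports your true choice 90% of the time.

```python
import random

SMALL, BIG = 1_000, 1_000_000

def play(choice, reliability=0.9):
    # The oracle shows the machine your actual choice, except that 10% of
    # the time it reports the opposite choice instead.
    seen = choice if random.random() < reliability else ("two" if choice == "one" else "one")
    opaque = BIG if seen == "one" else 0  # machine fills the opaque box accordingly
    return opaque if choice == "one" else SMALL + opaque

TRIALS = 100_000
for choice in ("one", "two"):
    avg = sum(play(choice) for _ in range(TRIALS)) / TRIALS
    print(f"{choice}-box average: ${avg:,.0f}")
# one-box average comes out near $900,000; two-box near $101,000
```

Even though your action only determines the machine's 90% of the time, the one-boxer still averages roughly nine times the two-boxer's take.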
The accuracy of the predictor does not impact the two-boxer's argument from causal independence, which I think is a serious problem for the two-boxer.
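For contrast, the two-boxer's dominance argument can be written out in the same terms (a sketch of the reasoning, with the same assumed payoffs): fix the contents of the opaque box, and two-boxing comes out $1,000 ahead in every case, with no accuracy figure anywhere in sight.

```python
SMALL, BIG = 1_000, 1_000_000

# The two-boxer conditions on each possible state of the already-filled box;
# the predictor's accuracy never appears in this calculation.
for opaque in (0, BIG):
    one = opaque           # take only the opaque box
    two = SMALL + opaque   # take both boxes
    print(f"opaque box holds ${opaque:,}: one-box ${one:,}, two-box ${two:,}")
# Two-boxing gains exactly $1,000 in both rows.
```

That's the tension: this dominance reasoning and the expected-value reasoning above can't both be the right way to decide, which is exactly what makes it a paradox.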