>That generalization is no longer a paradox; it's just a situation. The paradox is about choice theory and you have eliminated any element of choice.
You're still choosing whether to use a decision procedure that determines how many boxes you take when offered this choice, and that choice of procedure in turn determines whether you get this offer at all.
And I don't know what you're trying to say with the paradox/situation distinction; "Newcomb's problem with transparent boxes" is a paradox and a situation, just like the original: how are people ending up better off by "leaving money on the table"? (whatever that would mean)
>People who choose (or act, if you don't care for free will) to take only one box are always leaving money on the table, full stop. The point of the game is to maximize winnings.
But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid. If the people "leaving money on the table" have more money, then "I don't want to be right", as the saying goes.
>In my opinion, the resolution of the paradox is that's an impossible situation.
I disagree. At the very least, you can play as Omega against an algorithm, with varying degrees of scrutability. How should that kind of algorithm be written so that it gets more money (or, with transparent boxes, so that it gets Omegas to offer it filled boxes in the first place)? Answering that requires addressing the same issues that arise here for humans in that situation.
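That Omega-vs-algorithm setup can be sketched in a few lines. The payoffs, function names, and "prediction by running the agent's code" mechanism below are my own illustrative assumptions, not part of the original problem statement:

```python
# Hypothetical sketch: playing Omega against a fully scrutable algorithm.
# Payoffs and the prediction mechanism are illustrative assumptions.

def omega_fills_box(agent):
    # Omega "predicts" by simply running the agent's code in advance.
    # Box B gets $1,000,000 only if the agent is predicted to one-box.
    return 1_000_000 if agent() == "one-box" else 0

def play(agent):
    box_b = omega_fills_box(agent)   # prediction happens before the choice
    box_a = 1_000                    # box A always holds $1,000
    choice = agent()
    return box_b if choice == "one-box" else box_a + box_b

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play(one_boxer))  # prints 1000000
print(play(two_boxer))  # prints 1000
```

With a fully scrutable agent the prediction is trivially perfect, which is exactly why the one-boxing algorithm ends up richer here: it's the kind of algorithm that gets offered a filled box.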
There are also statistical versions of the paradox, like merchants vs. shoplifters. Obviously, they aren't perfect predictors, but they do well enough for the sort of "acausal" effects in the paradox to happen, i.e. people not shoplifting even when they could get away with it. Here are some more real-life examples:
http://lesswrong.com/lw/4yn/realworld_newcomblike_problems/
To be sure, people aren't predictable enough now to get the kind of scenario described in the problem. But they are predictable enough for the uncomfortable implications: even an accuracy slightly better than chance yields situations where one-boxing is statistically superior.
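To make "slightly better than chance" concrete, here's a quick expected-value check, assuming the standard $1,000,000 / $1,000 payoffs (the accuracy values are just illustrative):

```python
# Expected winnings against a predictor with accuracy p (standard payoffs assumed).
# One-boxing: you get $1,000,000 iff the predictor correctly foresaw one-boxing.
# Two-boxing: you always get $1,000, plus $1,000,000 iff the predictor was wrong.

def ev_one_box(p):
    return p * 1_000_000

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000

# Break-even: p * 1e6 = 1_000 + (1 - p) * 1e6  =>  p = 0.5005
for p in (0.50, 0.5005, 0.51):
    print(p, ev_one_box(p), ev_two_box(p))
```

At p = 0.50 two-boxing is better; by p = 0.51 one-boxing already wins in expectation, so the predictor only needs to beat a coin flip by a hair.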
(I do agree that in practice, whenever you see this kind of situation, you should assume there's some trick until overwhelming evidence to the contrary comes in.)
> But once you pin down what "leaving money on the table" means, it's not at all clear that the concept coincides with something you want to avoid.
In this case (which I have to imagine is deliberate on the part of Nozick or Newcomb), "leaving money on the table" means literally leaving money on the table. Taking only one box always, always results in less money than the total amount sitting in the boxes, while people who take both boxes collect everything available. (Of course, the evidence to date is that people who choose both boxes always have less money available to them in the first place.)
But an equally justifiable decision-making method is to perform the action that has yielded the best observed results for others in the past, despite there being no way that one's actions now can possibly have affected the past (choice or determinism doesn't matter).
The nature-of-the-predictor stuff is just irrelevant nonsense in either approach to the problem, which is a happy coincidence because it is, in fact, irrelevant and impossible nonsense. :)
Edit: "there's no way that one's actions now can possibly have affected the past" is given in the original problem. Wikipedia's article quotes it as "what you actually decide to do is not part of the explanation of why he made the prediction he made."