Why ain'cha rich? (EDC, ch.7, part 1)
Chapter 7 of Evidence, Decision and Causality looks at arguments for one-boxing or two-boxing in Newcomb's Problem. It's a long and rich chapter. I'll take it in two or three chunks. In this post, I will look at the main argument for one-boxing – the only argument Arif discusses at any length.
The argument is that one-boxing has a foreseeably better return than two-boxing. If you one-box, you can expect to get $1,000,000. If you two-box, you can expect $1000. In repeated iterations of Newcomb's Problem, most one-boxers end up rich and most two-boxers (comparatively) poor.
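For concreteness: if the predictor is, say, 99% reliable (the precise number is my stipulation; the argument only needs a very reliable predictor), the evidential expected returns come out roughly as

$$
\begin{aligned}
V(\text{one-box}) &\approx 0.99 \cdot \$1{,}000{,}000 + 0.01 \cdot \$0 = \$990{,}000\\
V(\text{two-box}) &\approx 0.01 \cdot \$1{,}001{,}000 + 0.99 \cdot \$1{,}000 = \$11{,}000
\end{aligned}
$$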
Two-boxers have a standard response to this line of thought. Yes, they say, one-boxers tend to do better than two-boxers. But not because they make the better choice. Any one-boxer who got a million was given a choice between $1,000,000 and $1,001,000. (They were presented with two boxes, one of which contained a million and the other a thousand. Taking both boxes would have given them $1,001,000, taking one box gave them $1,000,000.) It's no great achievement that they left with a million. By comparison, all the two-boxers who only got a thousand were given a choice between a thousand and nothing. It's not their fault that they didn't get more than a thousand.
Arif considers this response. He agrees that two-boxing is best given the situation in which the agent finds herself at the time of choice. However much is in the opaque box, you would always get more by taking both boxes than by taking only the opaque box. But this, Arif says, "looks beside the point" (p.185).
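The dominance reasoning behind this can be compressed into one line: whatever the predictor put in the opaque box,

$$
\text{two-box payoff} = \text{opaque content} + \$1{,}000 \;>\; \text{opaque content} = \text{one-box payoff}.
$$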
To me, it doesn't look beside the point at all. Practical rationality is all about making the best of the hands you were dealt.
Or so I think. Arif disagrees.
"Causalists just intuit that judgements about how things would be if you were to act otherwise are relevant to practical rationality, even to the point (as here) where they outweigh overwhelming empirical evidence […] that paying attention to these things consistently does worse than ignoring them." (p.186f.)
The evidence Arif talks about comes from imaginary repetitions of Newcomb's Problem. Does it indicate that paying attention to how things would be if you were to act otherwise puts you at a disadvantage? In one sense, yes. Those who pay attention to these things are offered a choice between $1000 and $0, while those who ignore them are offered a better choice. If you know that you will encounter Newcomb's Problem and could decide – before the prediction is made – whether to pay attention to the counterfactuals, you should decide not to. Everyone agrees about that. But Newcomb's Problem doesn't involve any such choice. The only choice you face occurs long after the prediction has been made. If "causalists" and "evidentialists" are given the same choices, the causalists always do better: they end up with $1000 more than the evidentialists.
Evidentialists just intuit that judgements about how things will be if you do act otherwise are relevant to practical rationality, even to the point that they forgo a guaranteed extra $1000.
I admit that the statistical argument for one-boxing has some intuitive pull. All the one-boxers are rich, all the two-boxers are poor. Isn't it up to you which group you belong to?
To reveal the fallacy in the argument, it helps to look at analogous arguments with an obviously false conclusion.
"Everyone who flies First Class is rich, most people who fly Economy are comparatively poor. It's up to you! You should choose First Class!".
In this example, no extant decision theory recommends choosing First Class (unless there are other reasons to do so). There are also examples in which agents who follow EDT statistically do worse than agents who follow other decision rules that have been proposed. Arntzenius (2008) mentions two such cases.
One is a version of Newcomb's Problem in which both boxes are transparent. Here both CDT and EDT say that you should take both boxes. "Functional Decision Theory" says you should take only one box (see Yudkowsky and Soares (2017) and my review). Agents who follow FDT generally find $1,000,000 in one box and $1000 in the other, while CDTers and EDTers find nothing in the first box and $1000 in the second.
Arntzenius's second example is a case in which CDTers outperform EDTers.
You can bet on either the Yankees or the Red Sox. The Yankees win 90% of the time, the Red Sox 10%. If you bet on the Yankees, you get $1 if they win and lose $2 if they lose. If you bet on the Red Sox, you get $2 if they win and lose $1 if they lose. Before you place your bet, a perfect predictor tells you whether the bet you are about to make will win or lose.
No matter what the predictor tells you, EDT says you should bet on the Red Sox. CDT says you should bet on the Yankees. In the long run, you do much better if you follow CDT.
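For concreteness, here are the long-run averages per bet, on my reading of the case:

$$
\begin{aligned}
\text{CDT, always betting Yankees:}\quad & 0.9 \cdot \$1 + 0.1 \cdot (-\$2) = \$0.70\\
\text{EDT, always betting Red Sox:}\quad & 0.1 \cdot \$2 + 0.9 \cdot (-\$1) = -\$0.70
\end{aligned}
$$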
If you feel the pull of the statistical argument for one-boxing, what do you say about these examples? Don't they show that there are situations in which EDT recommends an option that is foreseeably worse?
Arif says that they do not. When we talk about foreseeable outcomes, he explains, we should consider what is foreseeable on the basis of the agent's evidence. In the variant of Newcomb's Problem with two transparent boxes, you know what's in the two boxes. If you don't see a million, then getting a million is not a foreseeable outcome, no matter what you do. Similarly in the Yankees case. Suppose you're told that you will win. (The case for losing is parallel.) Given this information, it is foreseeable that your bet will win, and a winning bet on the Red Sox pays more than a winning bet on the Yankees; so the expected return of betting on the Red Sox is actually greater.
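Spelled out, conditional on the perfect predictor's announcement:

$$
\begin{aligned}
\text{told "win":}\quad & E(\text{payoff} \mid \text{bet Red Sox}) = \$2 \;>\; \$1 = E(\text{payoff} \mid \text{bet Yankees})\\
\text{told "lose":}\quad & E(\text{payoff} \mid \text{bet Red Sox}) = -\$1 \;>\; -\$2 = E(\text{payoff} \mid \text{bet Yankees})
\end{aligned}
$$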
Fair enough. In the proposed technical sense of "foreseeable", these really aren't cases in which EDTers "foreseeably" do worse. But why is that the sense that matters when we evaluate a decision rule? In effect, Arif assumes that we should evaluate decision rules by their expected payoff at the time of decision, where the expectation is computed in terms of standard conditional probabilities. That is just another way of saying that we should evaluate decision rules by whether they conform to EDT. So understood, the "Why ain'cha rich?" argument for EDT has EDT as a premise.
The original statistical argument didn't seem to have such a premise. In repetitions of Newcomb's Problem, we can see that the one-boxers do better. Isn't that evidence that one-boxing "works"?
The problem is that this kind of argument overgeneralises. If the Yankees case is repeated, we can see that those who follow CDT consistently do better than those who follow EDT. Isn't this "overwhelming empirical evidence" that EDT doesn't "work"?
Let's take a step back. "Why ain'cha rich?" arguments point out that agents who follow one decision rule do better than agents who follow another rule. But this is only relevant if the agents all face the same choice.
Imagine I'm hosting a dinner for decision theorists. Those who have a record of endorsing CDT can choose from a 5-star menu. Those who have a record of endorsing EDT get a choice between rotten rat meat and stale bread. The CDTers end up with better food. Obviously this is no evidence that CDT is the right theory of practical rationality.
Now consider the variant of Newcomb's problem with two transparent boxes. You see a million in one box and $1000 in the other; I see nothing in one box and $1000 in the other. Do we face the same choice? Arguably not. There is, of course, an abstract description of a decision problem that covers both of our cases – a description that doesn't mention what we see in the boxes, just as there is a neutral description of the dinner choices that doesn't mention what's on the menu. But this sense of "same choice" doesn't seem relevant to evaluating the rationality of our choices. The fact that you end up with a million doesn't tell us that you made a better choice.
This seems obvious to me. But it is not a neutral verdict. "Functional Decision Theory" is motivated precisely by the idea that we should treat the two situations as relevantly the same. If we then compile statistics over how much people get when they find themselves in this "type" of situation, FDT agents statistically outperform both CDTers and EDTers. I'm not sure how, in general, friends of FDT think we should individuate decision situations.
Arif thinks the only relevant statistics are statistics that individuate decision situations by the agent's information at the time of choice. If we compile statistics on this basis, one-boxers outperform two-boxers in Newcomb's Problem.
I think the only relevant statistics are statistics that individuate decision situations by holding fixed all (relevant) facts that are outside the agent's control. In Newcomb's Problem, this means holding fixed the content of the opaque box. If you get a choice between a million and a thousand, and I get a choice between a thousand and nothing, then we don't face the same choice. If we compile statistics about Newcomb's Problem in which agents face the same choice in my preferred sense, two-boxers outperform one-boxers.
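To make the contrast between these two ways of compiling statistics vivid, here is a small simulation sketch. The 99% predictor reliability and all the names in the code are my own stipulations; the two print-outs correspond to Arif's bookkeeping and to mine.

```python
import random

def simulate_newcomb(n_rounds=100_000, accuracy=0.99, seed=0):
    """Simulate repeated Newcomb Problems for a committed one-boxer
    and a committed two-boxer. `accuracy` is the predictor's stipulated
    reliability."""
    rng = random.Random(seed)
    records = []  # (choice, opaque box content, payoff)
    for _ in range(n_rounds):
        for choice in ("one-box", "two-box"):
            # The predictor predicts the agent's actual choice with
            # probability `accuracy`.
            correct = rng.random() < accuracy
            predicted_one_boxing = (choice == "one-box") == correct
            opaque = 1_000_000 if predicted_one_boxing else 0
            payoff = opaque if choice == "one-box" else opaque + 1_000
            records.append((choice, opaque, payoff))
    return records

def average(xs):
    return sum(xs) / len(xs) if xs else float("nan")

records = simulate_newcomb()

# Statistics individuated by the agent's information at the time of choice
# (Arif's preferred bookkeeping): one-boxers come out far ahead.
for choice in ("one-box", "two-box"):
    avg = average([p for c, _, p in records if c == choice])
    print(f"{choice}: average payoff ${avg:,.0f}")

# Statistics individuated by what is actually in the opaque box
# (my preferred bookkeeping): within each group, two-boxers do
# exactly $1,000 better.
for content in (1_000_000, 0):
    for choice in ("one-box", "two-box"):
        avg = average([p for c, o, p in records if c == choice and o == content])
        print(f"opaque box = ${content:,}: {choice} averages ${avg:,.0f}")
```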
In footnote 43 on p.194, Arif mentions that two-boxers fare better in infinite repetitions of Newcomb's Problem in which the content of the opaque box is held fixed. He complains that no "non-question-begging justification is available for taking that sub-population to be of significance".
I agree. I don't have a non-question-begging justification for thinking that this is the relevant sub-population. But I'm not alone. Arif doesn't have a non-question-begging justification either for his preferred method of individuating decision situations. Nor do friends of FDT have a non-question-begging justification for their method, whatever it is.
In the debate between decision rules, "Why ain'cha rich?" arguments always beg the question.