Inadmissible games and counterfactuals
A time traveler offers you a game. You can toss a fair coin. If it lands heads, you win $2; if it lands tails, you lose $1. The time traveler informs you that all fair coins tossed today will land tails. (He knows, because he's seen all the results before traveling back in time.) Do you play?
Suppose you decide to toss. Trusting the time traveler, you can then be confident that you will lose $1. You would not have lost anything if you hadn't tossed, so the alternative option would have been better. It seems that you've made the wrong decision.
But suppose you decide not to toss. Then what would have happened, counterfactually, if you had tossed? The time traveler didn't tell you about counterfactual futures, only about the actual one. (He doesn't have 'middle knowledge'.) So in the counterfactual situation where you toss, you are as likely to win $2 as to lose $1. That is, if you decided not to toss, the alternative option would have been better -- its expected utility would have been $0.5. Again, it seems that you've made the wrong decision.
Here I was appealing to a counterfactual principle of ratifiability:
Rat 1: choosing an option is not rational if, on the assumption that it is chosen, a different option would have had a higher expected utility, had it been chosen.
We can also think of the case along the lines of the decision theories proposed by Gibbard & Harper (1978), Sobel (1980) and Joyce (1999), where the utility of an option A is calculated as the sum of P(C\A)V(AC) over all outcomes C, with P(C\A) = the probability with which C would come about if A were to be chosen. (Note the backslash.)
Suppose again you are confident that you won't toss the coin. Then P($2\toss) = P($-1\toss) = 1/2, since you don't have inadmissible information about the counterfactual toss future. The utility of tossing is therefore $2 * 1/2 + (-$1) * 1/2 = $0.5, while the utility of not tossing is $0. You should toss. On the other hand, the more confident you are that you will toss, the more the utility of tossing approaches -$1. You should do whatever you think you won't actually do.
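The instability can be made vivid with a small sketch (my own illustration, not from any of the cited papers). It assumes a simple interpolation: insofar as you think you will toss, the time traveler's report covers the toss and tails is certain; insofar as you think you won't, the counterfactual coin is fair.

```python
def utility_toss(p):
    """Gibbard-Harper-style utility of tossing, given credence p that you
    will actually toss. (The interpolation of P($-1 \\ toss) between 1/2
    and 1 is my own illustrative assumption.)"""
    p_lose = p * 1.0 + (1 - p) * 0.5   # P($-1 \ toss)
    p_win = 1 - p_lose                 # P($2 \ toss)
    return 2 * p_win + (-1) * p_lose

def utility_not_toss(p):
    # Not tossing leads to $0 regardless of your credences.
    return 0.0

# Tossing looks best when you expect not to toss, and vice versa:
for p in (0.0, 1/3, 1.0):
    print(p, utility_toss(p), utility_not_toss(p))
```

On this sketch the utility of tossing falls from $0.5 (when you're sure you won't toss) to -$1 (when you're sure you will), crossing the utility of not tossing at credence 1/3 -- so neither choice is stable.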
With utilities defined in this way, either choice violates the ratifiability constraint proposed by Harper (1984):
Rat 2: choosing an option is not rational if, on the assumption that it is chosen, a different option has a higher expected utility.
Jeffrey (1965) and Lewis (1981) calculate utilities in ways that recommend rejecting the time traveler's game no matter what. Jeffrey calculates the utility of A as the sum of P(C/A)V(AC), with P(C/A) an ordinary conditional probability. For Lewis, the utility of A is the sum of P(K)V(AK) over all dependency hypotheses K, each specifying for every option what it would bring about with what objective chance. Since there is (we can assume) only one K with non-negligible probability -- saying that tossing will lead to $2 or $-1 with objective chance 1/2 each, and not tossing will lead to $0 -- Lewis's utility reduces to V(A), which equals Jeffrey's sum of P(C/A)V(AC).
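Concretely, the Jeffrey-style calculation comes out the same whatever you believe about your own choice (a rough sketch of my own; the outcome labels are invented for illustration):

```python
PAYOFF = {"win": 2, "lose": -1}

def jeffrey_utility(act):
    """Jeffrey-style utility with ordinary conditional probabilities,
    which take the time traveler's report into account."""
    if act == "toss":
        # P(lose / toss) = 1: trusting the traveler, tossed coins land tails.
        return 1.0 * PAYOFF["lose"] + 0.0 * PAYOFF["win"]
    # Not tossing: you keep your money for sure.
    return 0.0

print(jeffrey_utility("toss"), jeffrey_utility("not toss"))  # -1.0 0.0
```

Since these utilities don't depend on your credences about what you'll do, both choices are ratifiable in the sense of Rat 2, and not tossing wins.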
Upshot? Depends on what you think about the case.
If you think the rational option is clearly not to toss the coin, we learn that counterfactuals are sometimes not the right way to think about the available options. If you want a causal decision theory, you'd better side with something like Lewis's rather than Gibbard and Harper's, Sobel's, or Joyce's. (You should also reject Rat 1 in favour of Rat 2, if you like ratifiability constraints.)
On the other hand, if you think neither playing nor not playing is rational, or that what you should do depends on your beliefs about what you will do, then we have a new argument against both Jeffrey's evidential decision theory and Lewis's version of causal decision theory.
Since the truth of counterfactuals is determined by similarity to actuality, I'm not sure it's possible for someone to tell you about actuality and not tell you about counterfactual situations.