Back to ratificationism?
When we face a decision and work out what we should do, we gain information about what we will do. Taking this information into account can in turn affect what we should do. Here's an example.
(I) In front of you are two opaque boxes, one black, one white. You can open one of them and keep whatever is inside. Yesterday, a perfect (or almost perfect) predictor tried to predict what you would choose. If she predicted that you'd take the black box, she put a million dollars in the white box and two dollars in the black box. If she predicted that you'd take the white box, she put a thousand dollars in the black box and one dollar in the white box. Which box do you open?
Let's say that at the beginning of your deliberation, you are completely undecided, giving 50 percent credence to the hypothesis that you'll end up opening the black box. Standard formulations of causal decision theory then say that opening the white box has greater expected payoff: since there's a 50 percent probability that it contains a million dollars, its expected payoff is $500,000.50, which is far more than anything you could possibly find in the black box. However, choosing to open the white box would provide you with highly relevant information: it would reveal that the predictor has (almost certainly) put only one dollar in the white box and a thousand in the black box. As a rational decision-maker, you should take that information into account. Many putative "counterexamples" to causal decision theory, such as those in Richter 1985 and Egan 2007, are based on this observation.
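For concreteness, here is the arithmetic at the 50/50 starting point as a minimal Python sketch (amounts in dollars, taken straight from the example):

```python
# Causal expected payoffs in example (I) at the 50/50 starting credence.
cr_predicted_black = 0.5
eu_white = cr_predicted_black * 1_000_000 + (1 - cr_predicted_black) * 1      # 500000.5
eu_black = cr_predicted_black * 2         + (1 - cr_predicted_black) * 1_000  # 501.0
print(eu_white, eu_black)
```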
In the early 1980s, a popular response was to add a "ratifiability constraint" to causal decision theory. An option is ratifiable if it maximizes expected utility on the supposition that it is chosen. Opening the white box is not a ratifiable option, for on the supposition that it is chosen, opening the black box has greater expected utility. The new decision rule might now say that an agent should choose an option with maximal expected utility among the ratifiable options. Alternatively, it might say that the agent should choose from the ratifiable options an option with maximal self-conditional expected utility, that is, with maximal expected utility conditional on being chosen.
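Here is a minimal sketch of the ratifiability test for the two pure options in example (I). For simplicity it assumes a perfect predictor, so supposing that an option is chosen amounts to supposing that it was predicted:

```python
# payoff[(act, prediction)]: what opening a box pays, given what the predictor predicted.
payoff = {
    ("white", "black"): 1_000_000, ("white", "white"): 1,
    ("black", "black"): 2,         ("black", "white"): 1_000,
}
acts = ("white", "black")

def ratifiable(option):
    # On the supposition that `option` is chosen, the (perfect) predictor predicted it,
    # so each act's expected utility is simply its payoff against that prediction.
    eu_given_choice = {act: payoff[(act, option)] for act in acts}
    return eu_given_choice[option] == max(eu_given_choice.values())

for option in acts:
    print(option, ratifiable(option))
# white False  (supposing white is chosen, black would pay 1000 rather than 1)
# black False  (supposing black is chosen, white would pay 1000000 rather than 2)
```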
One problem with these rules is that they often fall silent in the trouble cases they are meant to address. In example (I), neither option is ratifiable. So the proposed decision rules can't tell you what to do. This looks bad (although it may be worth keeping in mind that all extant formulations of decision theory fall silent on some decision problems -- notably when the expected utility goes undefined).
A more serious problem with ratificationism was pointed out by Brian Skyrms (1984, pp. 84-86): there are decision problems in which it would clearly be irrational to go for a ratifiable option. The point can be illustrated with a variation of example (I).
(II) Everything is as in case (I), except that you have a further option of opening neither box. In that case you get 1 cent. If the predictor foresaw that you would open neither box, she left both boxes empty.
Now taking neither box is ratifiable, and it's the only ratifiable option. But it is clearly not the uniquely rational choice. If you open one of the boxes, you can be sure to get at least a dollar.
Skyrms himself offered an elegant alternative (most fully developed in Skyrms 1990). He supplements causal decision theory with a detailed account of how an agent's beliefs should change in the process of deliberation. Return to example (I), where you begin your deliberation in a state of complete uncertainty about what you will do. Relative to this state, opening the white box has greater expected utility. Having figured that out, you should increase your credence in the hypothesis that you'll open the white box. But then your beliefs have changed and you should re-compute the expected utilities, which may again require updating your beliefs, and so on. If this iterative process is modeled as continuous, it always leads to an equilibrium -- a state in which your beliefs no longer change. In easy decision problems, the equilibrium is a point where you've decided on one of your options. Not so in examples (I) and (II). Here almost all initial states lead to a mixed equilibrium in which you are unsure whether you'll open the white box or the black box.
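As a rough illustration, here is a discrete stand-in for this kind of dynamics, a simple adaptive rule in the spirit of the rules Skyrms considers (not his continuous model). At each step, credence flows towards the options whose expected utility exceeds the current status quo. With the payoffs of example (I) and a perfect predictor, the 50/50 starting point ends up at the kind of mixed equilibrium just described:

```python
# A crude discrete stand-in for Skyrms-style deliberation dynamics in example (I).
payoff = {
    ("white", "black"): 1_000_000, ("white", "white"): 1,
    ("black", "black"): 2,         ("black", "white"): 1_000,
}
acts = ("white", "black")

def expected_utilities(cr):
    # Perfect predictor: your credence in a prediction tracks your credence in the act.
    return {a: sum(cr[b] * payoff[(a, b)] for b in acts) for a in acts}

def deliberate(cr, k=1e-5, steps=300_000):
    for _ in range(steps):
        eu = expected_utilities(cr)
        status_quo = sum(cr[a] * eu[a] for a in acts)
        # Shift credence towards options that look better than the status quo.
        cov = {a: max(eu[a] - status_quo, 0.0) for a in acts}
        total = 1.0 + k * sum(cov.values())
        cr = {a: (cr[a] + k * cov[a]) / total for a in acts}
    return cr

final = deliberate({"white": 0.5, "black": 0.5})
print({a: round(p, 4) for a, p in final.items()})
# roughly {'white': 0.999, 'black': 0.001}: a stable state of indecision,
# heavily but not entirely inclined towards the white box
```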
I like this picture. It is well motivated -- at least in broad outline -- and it almost always yields satisfactory verdicts. But I do have a few reservations.
Some concern the postulated dynamics. The model requires that when you discover that opening the white box has greater expected utility, you become very slightly more confident that you'll choose the white box. Why, exactly, is this an epistemically rational response to your discovery? Moreover, how exactly does it work? Skyrms tentatively models the update as a kind of Jeffrey conditioning, but this can lead to implausible results (for example, in the deliberation outlined in section 6 of my paper on the absentminded driver). So how, in general, should the update go?
Another potential worry is that Skyrms's model can lead to suboptimal outcomes in decision problems with multiple equilibria that are not equally good. In example (II), there are two equilibria (ignoring equilibria with point-sized basins of attraction): opening neither box, and being undecided between the black box and the white box (but more inclined towards the white box). If you start off the deliberation very confident that you'll take neither box, Skyrms's dynamics takes you to a state in which you're certain that you'll take neither box. Admittedly, this is an implausible state in which to enter the deliberation, but it is easy to come up with cases where, say, starting with indifference between all options leads to a non-optimal equilibrium.
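To see why the starting point matters, here is a sketch of the expected utilities in example (II) at two starting credences (the specific numbers are merely illustrative), again assuming a perfect predictor. Opening neither box only maximizes expected utility at extremely confident starting points; from such a point the dynamics reinforces the inclination to take neither box, while from a state of indifference it heads for the mixed black/white equilibrium:

```python
# Example (II): payoff[(act, prediction)], in dollars. Opening neither box pays one cent;
# if "neither" was predicted, both boxes are empty.
payoff = {
    ("white",   "black"): 1_000_000, ("white",   "white"): 1,     ("white",   "neither"): 0,
    ("black",   "black"): 2,         ("black",   "white"): 1_000, ("black",   "neither"): 0,
    ("neither", "black"): 0.01,      ("neither", "white"): 0.01,  ("neither", "neither"): 0.01,
}
acts = ("white", "black", "neither")

def expected_utilities(cr):
    # Perfect predictor: credence in a prediction tracks credence in the act.
    return {a: sum(cr[b] * payoff[(a, b)] for b in acts) for a in acts}

# Starting out extremely confident that you'll open neither box:
print(expected_utilities({"white": 1e-6, "black": 1e-9, "neither": 1 - 1e-6 - 1e-9}))
# "neither" (0.01) beats both boxes (about 0.001 each), so deliberation drifts towards
# certainty in "neither".

# Starting out indifferent between the three options:
print(expected_utilities({"white": 1/3, "black": 1/3, "neither": 1/3}))
# "white" is far ahead, so deliberation heads for the mixed black/white equilibrium instead.
```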
To be honest, my intuitions are not very strong when I think about such cases. But it's certainly true that deliberators who employ a look-and-leap model that would directly take them to the optimal equilibrium generally do better than deliberators who employ Skyrms's continuous model. (On the other hand, the belief change in Skyrms's dynamics is driven by evidence, while the look-and-leap dynamics seems to involve an entirely different, non-evidential update.)
Another feature of Skyrms's model that some find objectionable is that it doesn't always select a particular option. In our examples, the deliberation leads to a state of indecision in which the agent remains unsure what to do.
I don't find this objectionable. On the contrary, I think indecision is exactly the right attitude for a decision-maker in examples (I) and (II).
To illustrate, consider what a decision-maker who follows Evidential Decision Theory will do in example (I). She will reason as follows: "if I open the white box, I almost certainly get one dollar; if I open the black box, I almost certainly get two; so I'll open the black box. Easy." But let's not ignore the fact that decisions can provide information. As the decision-maker prepares to reach for the black box, she is almost certain that the white box contains a million dollars, while the black box contains only two. Moreover, her belief is justified, and it is true. She knows that the white box contains a lot more than the black box. She also knows that her choice has no influence on what's in the boxes. She knows that if she were to open the white box she would get a million. She knows that a better-informed adviser who has inspected the content of the boxes would tell her to open the white box. Yet she goes ahead and opens the black box!
This certainly won't persuade evidentialists who are convinced that one-boxing is the right choice in Newcomb's problem. Example (I) is not a new counterexample to Evidential Decision Theory. All I'm saying is that deciding to open the black box in example (I) looks just as irrational as one-boxing in Newcomb's problem. If we think that Evidential Decision Theory gives the wrong recommendation in Newcomb's problem (as I think it does), we should also think that it gives the wrong recommendation in example (I).
Of course, deciding to open the white box would also be irrational. If the agent decides to open the white box, she knows that she would get a lot more by opening the black box instead.
So what should a rational agent do, if she can neither decide to open the white box nor to open the black box? Well, what she ought to do is remain undecided. An adequate decision theory should accept that rational deliberation does not always lead to states of decision, in which one or more acts are chosen as rational. Sometimes the only rational end point of deliberation is a state of indecision.
So far, I have assumed that the options whose expected utility a decision-maker considers are ordinary acts like opening the white box and opening the black box. However, it has been argued that what actually matters is not the expected utility of these acts, but the expected utility of our possible decisions.
A variety of arguments in support of this claim can be found in Weirich 1983, Sobel 1983 and Hedden 2012. Some of these arguments strike me as fairly convincing. I won't repeat them all, but here is one relevant consideration.
Suppose you face a decision between staying in Damascus and going to Aleppo. At the moment, you are inclined to stay in Damascus. If you decide to go to Aleppo, you'll have to hire a horse; you should also pack some clothes, and inform your friends. When you evaluate your options, these things must be taken into account. But there is no guarantee that they all take place in the epistemically or subjunctively closest worlds in which you go to Aleppo. Moreover, there's a chance that even if you decide to go, you won't actually reach Aleppo. This too should be taken into account, although the closest worlds in which you go to Aleppo are trivially worlds in which you reach Aleppo. So when you evaluate your options, you shouldn't consider the expected utility of going to Aleppo, but of deciding to go to Aleppo: you should look at the closest worlds in which you decide to go to Aleppo, not at the closest worlds in which you actually go there.
So let's assume that when we face a decision, our options are, strictly speaking, not ordinary acts but special mental states of intention or decision. These states, rather than acts, are the direct outcomes of deliberation, and the goal of deliberation is to select an optimal outcome.
But hold on. I just said that the optimal outcome of a deliberation isn't always a decision. Sometimes it is a state of indecision. In any case, if we take states of decision to be our options, it is natural to also include states of indecision or states of partial decision, in which some acts are ruled out and others are left open.
Precise states of indecision can be represented (at least approximately) as probabilistic mixtures of decisions. So if the option space contains all possible states of indecision, it is effectively closed under mixing.
Now return to example (I). When we identified your options with overt acts, decision theory couldn't tell you which option to choose. You had to remain undecided. But now being undecided is itself an option. The Skyrmsian deliberation equilibrium in the original setup now corresponds to a (mixed) option. This option is an equilibrium in the Skyrmsian dynamics on the new option space. So now decision theory does, after all, tell you which option to choose.
We still need a story about how decision-makers should take into account the information provided by their own choice. But in the new picture, the ratificationist story suddenly looks quite attractive -- more so, perhaps, than the Skyrmsian story.
Recall the two problems I mentioned for ratificationism. The first was that it often made decision theory go silent where we would like it to give advice. This problem is gone. In cases like example (I), there is a unique ratifiable option -- a state of indecision.
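Here is a sketch of the ratifiability check on the enlarged option space for example (I). It assumes a perfect predictor whose prediction matches the act that a (possibly mixed) option eventually issues in, it treats the resolution of the mixture as causally independent of the prediction when computing expected utilities, and it checks only three representative candidates; the equilibrium weights are those of the deliberational equilibrium computed earlier:

```python
# Options are now probability mixtures over the two acts.
payoff = {
    ("white", "black"): 1_000_000, ("white", "white"): 1,
    ("black", "black"): 2,         ("black", "white"): 1_000,
}
acts = ("white", "black")

def eu(option, prediction_cr):
    # Causal expected utility of a (mixed) option, given a credence over predictions.
    return sum(option[a] * prediction_cr[b] * payoff[(a, b)] for a in acts for b in acts)

def ratifiable(option, candidates):
    # On the supposition that `option` is chosen, the prediction matches the act it
    # issues in, so the credence over predictions equals the option's own weights.
    return all(eu(option, option) >= eu(other, option) - 1e-9 for other in candidates)

pure_white = {"white": 1.0, "black": 0.0}
pure_black = {"white": 0.0, "black": 1.0}
undecided  = {"white": 999998 / 1000997, "black": 999 / 1000997}  # the equilibrium mixture
candidates = [pure_white, pure_black, undecided]

for name, option in [("pure white", pure_white), ("pure black", pure_black), ("undecided", undecided)]:
    print(name, ratifiable(option, candidates))
# pure white False, pure black False, undecided True
```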
The other problem was that ratificationism recommends irrational choices in cases like example (II). That problem is also gone. There are two ratifiable options in example (II): opening neither box and being undecided between the white box and the black box. The second has both higher self-conditional expected utility and higher unconditional expected utility relative to any sensible credence at the start of deliberation.
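A sketch of the self-conditional comparison, under the same assumption that the predictor's prediction matches the act a state of indecision eventually issues in (the mixture weights are again the equilibrium values):

```python
# Self-conditional expected utility: the expected payoff of an option conditional on
# its being chosen. With a perfect predictor, act and prediction then coincide.
payoff_if_predicted = {"white": 1, "black": 2, "neither": 0.01}

def self_conditional_eu(option):
    return sum(p * payoff_if_predicted[act] for act, p in option.items())

open_neither = {"neither": 1.0}
undecided    = {"white": 999998 / 1000997, "black": 999 / 1000997}

print(self_conditional_eu(open_neither))  # 0.01: one cent
print(self_conditional_eu(undecided))     # about 1.00: roughly a dollar, far better than a cent
```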
Moreover, ratificationism provides a look-and-leap alternative to Skyrms's dynamics that seems to overcome the problems with Skyrms's account. In particular, the rule to choose a ratifiable option with highest self-conditional expected utility always selects an optimal Skyrmsian equilibrium.
The result looks so attractive that I'm inclined to see it as a further argument for identifying options with states of decision and indecision -- independent of the arguments by Weirich, Sobel, Hedden and others.
It is well known that decision theory and game theory get a lot easier if the space of options is closed under mixing. Like most philosophers, I used to think that this is hardly a reason to conclude that our options are in fact always closed under mixing. The options, I thought, are what they are. In some decision situations, the agent can choose a mixture (can "randomize"); in others she can't. The fact that the former situations are easier to theorize about doesn't make the latter go away. But now I'm tempted to think that the design specification for ideal decision-makers entails that mixed options are always available.
I'm not entirely convinced. A common move to restrict attention to pure options is to stipulate that randomization would be punished. Similarly, one might ask what the agent in example (I) should do if states of indecision are severely punished. More simply, what if the agent is "risk-averse" and doesn't like leaving things open?