Should you rescue the miners?
Many accounts of deontic modals that have been developed in response to the miners puzzle have a flaw that I think hasn't been pointed out yet: they falsely predict that you ought to rescue all the miners.
The miners puzzle goes as follows.
Ten miners are trapped in a shaft and threatened by rising water. You don't know whether the miners are in shaft A or in shaft B. You can block the water from entering one shaft, but you can't block both. If you block the correct shaft, all ten will survive. If you block the wrong shaft, all of them will die. If you do nothing, one miner will die.
Let's assume that the right choice in your state of uncertainty is to do nothing. In that sense, then, (1) is true.
(1) You ought to block neither shaft.
But arguably, there is also an "objective" sense in which (1) is false. Objectively, you ought to block whichever shaft the miners are in. Even in the more salient subjective sense, the judgement (1) is sensitive to further information, as the intuitive falsity of (2) illustrates.
(2) If the miners are in shaft A, then you ought to block neither shaft.
The lesson is that 'ought' (and 'should' and 'may') is information-sensitive. But how?
I won't review all proposals that have been made. For concreteness, here's a sketch of a fairly natural idea.
We start with two ingredients, reminiscent of Kratzer's theory of modals.
First, we assume that, relative to any world w and agent S, there's a range B of possible worlds that are in some sense accessible from w – intuitively, the worlds S could bring about at w. B is Kratzer's "modal base".
Second, we assume that, relative to w and S, there is an assignment V of objective normative value to the worlds in B: some are best, some worst, some are intermediate. I won't assume that the ranking is determined by an "ordering source" (à la Kratzer), since we want meaningful cardinal values, but that won't be important.
Now we want to say that relative to an information state I that leaves open where the miners are, you should do nothing. The challenge is that in some accessible worlds compatible with I, you block the shaft in which the miners are stuck. And these are plausibly the best worlds.
To capture the information-sensitivity of oughts, we might assume that B, V, and I together determine another ranking of the accessible worlds. For example, we might assume that relative to I, the I-relative value of a world w' equals the I-expected V-value of whatever act the subject performs in w'.
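In symbols (the notation is mine, and I'm assuming the information state I comes with a probability function P_I): if A_{w'} is the act the subject performs at w', the proposal is that

$$ V_I(w') \;=\; \mathrm{E}_I\big[V \mid A_{w'}\big] \;=\; \sum_{w''} P_I\big(w'' \mid A_{w'}\big)\, V(w''). $$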
In any case, we can now say that
'S ought to X' is true at w relative to I just in case S Xs at all accessible worlds that are best relative to I.
This seems to give the right verdicts in the miners puzzle. Relative to the objective facts, you ought to block shaft A (say). Relative to your information, you ought to block neither shaft. Relative to your information updated by the assumption that the miners are in shaft A (this is what the if-clause does), you ought to block shaft A.
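To see how the numbers might work out, here is a minimal computational sketch of the miners case. I'm assuming that V simply counts surviving miners, that your information divides its credence evenly between the two shafts, and that the relevant acts are blocking A, blocking B, and blocking neither; the names and figures are illustrative, not part of the account.

```python
# Toy model of the miners case (illustrative numbers only).
shafts = ["A", "B"]
acts = ["block A", "block B", "block neither"]
credence = {"A": 0.5, "B": 0.5}   # your probabilities for the miners' location

def V(shaft, act):
    """Objective value of a world: the number of miners who survive."""
    if act == "block neither":
        return 9
    return 10 if act == "block " + shaft else 0

def i_value(act):
    """I-relative value of a world where you perform `act`:
    the I-expected objective value of that act."""
    return sum(credence[s] * V(s, act) for s in shafts)

for act in acts:
    print(act, i_value(act))
# block A 5.0, block B 5.0, block neither 9.0
# The I-best accessible worlds are those where you block neither shaft.
```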
Note that the present account relies on an unexplained individuation of "the act" an agent performs at a given world. Consider a world where the miners are in shaft A and you block shaft A. You are then making true a lot of propositions. Among others, you are (a) blocking shaft A, and you are (b) saving all the miners. But (a) and (b) don't have the same expected value relative to your state of ignorance: conditional on your performing (b), all ten miners survive, so if value goes by the number of survivors, (b)'s I-expected value is 10, whereas blocking neither shaft only gets 9. More generally, if (b) counts as an act you perform at some accessible world, it will probably be the best of all available acts, even relative to your state of ignorance. And then the present account falsely implies that you (subjectively) ought to save all the miners.
OK, that problem is not too hard to fix. Let's assume that, relative to any w and S, there is a set of transparently choosable acts A. Transparent choosability is meant to rule out acts like saving all the miners. To a first approximation, an act is transparently choosable (at w for S) iff the agent (S at w) can be rationally certain that she is performing the act simply by deciding to perform it.
The most specific transparently choosable acts (i.e. the acts A such that A is transparently choosable and no logically stronger act A' is transparently choosable) plausibly form a partition of the accessible worlds. So S performs exactly one of these acts in every accessible world. This act, I suggest, is used to compute the information-relative ranking of the worlds.
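In symbols (again, the notation is mine): if we identify an act with the set of accessible worlds at which S performs it and write TC(w,S) for the set of transparently choosable acts, the most specific ones are

$$ \mathrm{TC}^{*}(w,S) \;=\; \{\, A \in \mathrm{TC}(w,S) : \text{there is no } A' \in \mathrm{TC}(w,S) \text{ with } A' \subsetneq A \,\}, $$

and the partition claim is that every accessible world lies in exactly one member of TC*(w,S).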
But a harder problem remains.
On the present account (as well as many others in the literature), (3) still comes out true on the objective reading of 'ought', and (4) on the more salient subjective reading.
(3) You ought to save all the miners.
(4) If the miners are in shaft A, then you ought to save all the miners.
But these aren't true, I think. What's true is that, if the miners are in shaft A, then you should block shaft A. You would thereby save all the miners. But this, I think, is not something you ought to do.
Take another example, due to Erik Carlson (in a different context). You're standing in front of a safe, and you could prevent some tragedy by opening the safe. The combination is 448-961-5237, but you don't know that. Now consider (5) and (6).
(5) You ought to open the safe.
(6) If the combination is 448-961-5237, then you ought to open the safe.
Carlson intuits that (5) is not true on the objective reading of 'ought'. I agree. And I don't think (6) is true on the information-sensitive "subjective" reading. (Perhaps it is true on a certain de re reading which I'll set aside.)
The problem, in case it isn't obvious, is that you don't know how to open the safe. You can't be obligated to do something if you have no idea how.
So 'S ought to X' implies or presupposes that X itself is transparently choosable. (Roughly speaking. I suspect it's a little more complicated.)
Transparent choosability depends on the subject's information. If you doubt that you can block shaft A, then blocking shaft A is not transparently choosable. One might have expected that when deontic modals are evaluated relative to an information state different from the subject's, then transparent choosability is evaluated relative to that other state. But it is not. It is always evaluated relative to the subject's own information.
So the above account, for example, needs to be patched roughly like so:
'S ought to X' is true at w relative to I just in case X is transparently choosable for S at w and S Xs at all accessible worlds that are best relative to I.
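In terms of the toy sketch above, the patch amounts to adding a choosability check; something like the following, where `transparently_choosable` is a stub that would have to be evaluated relative to S's own information:

```python
def ought(X, best_worlds, transparently_choosable):
    """Sketch of the patched truth condition: 'S ought to X' is true at w
    relative to I iff X is transparently choosable for S at w (relative to
    S's own information) and S Xs at every accessible world that is best
    relative to I."""
    return transparently_choosable(X) and all(X(w) for w in best_worlds)
```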