Sobel's strictly causal decision theory
In his papers on decision theory, Jordan Howard Sobel generally defines the (causal) expected utility of an act in terms of a special conditional that he calls "causal" or "practical". Concretely, he suggests that

(1) EU(A) = ∑w V(w)Cr(A □→ w),
where 'A □→ B' is the special conditional that is true iff either (i) B is the case and would remain the case if A were the case, or (ii) B is not the case but would be the case as a causal consequence of A if A were the case (see e.g. Sobel (1986), pp.152f., or Sobel (1989), pp.175f.).
(Sobel actually thinks this definition should be generalised to allow for cases in which it is a matter of chance what would happen if A were the case, but for what follows we can ignore this complication.)
In contrast to most other definitions of causal expected utility, Sobel's definition seems to explicitly restrict attention to causal consequences of the relevant act. So here we have the kind of "strictly causal" decision theory that is the target of Ahmed (2014).
I don't think such a theory is tenable, although it might appear to help with cases where a certain outcome is counterfactually, but not causally, associated with an act.
For example, suppose we follow Lewis (1979) and hold fixed the past but not the laws when evaluating counterfactuals in deterministic worlds. If A is a counterfactual act and L is the conjunction of the laws, then 'if A were the case then ¬L would be the case' is true on Lewis's account, but ¬L would not be true as a causal consequence of A (as emphasised in Lewis (1981)). So A □→ ¬L is false, because Sobel's condition (ii) is not met.
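For concreteness, here is a toy formalisation of Sobel's two clauses (a sketch of mine; the three boolean inputs are taken as primitive, nothing in the code analyses them):

    # Sobel's 'A □→ B': true iff (i) B is the case and would remain the case
    # if A were the case, or (ii) B is not the case but would be the case as
    # a causal consequence of A if A were the case.
    def sobel_conditional(b_actual, b_would_remain_if_a, b_causal_consequence_of_a):
        clause_i = b_actual and b_would_remain_if_a
        clause_ii = (not b_actual) and b_causal_consequence_of_a
        return clause_i or clause_ii

    # Lewis-style case with B = ¬L: clause (i) fails because ¬L is not actually
    # the case, and clause (ii) fails because ¬L would not be a causal
    # consequence of A. So A □→ ¬L comes out false.
    print(sobel_conditional(b_actual=False,
                            b_would_remain_if_a=False,
                            b_causal_consequence_of_a=False))  # False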
Let's briefly consider a variant of Sobel's proposal on which we sum not over worlds but over outcomes:

(2) EU(A) = ∑O V(O)Cr(A □→ O),

where the outcomes O form a partition.
Dorr (2016) considers a scenario in which a person called Frank has devoted his life to defending the claim that the deterministic system L is true (as indeed it is). Let M be the proposition that Frank has devoted his life to defending a false claim, and let A be some act that Frank is fairly sure he won't choose – say, donating $1,000 to charity. Assume Frank assigns very low utility to M. Let 'A > B' be the ordinary (non-backtracking) counterfactual 'if A were the case then B would be the case', without Sobel's added stipulation that B would be the case as a causal consequence of A (or because it is the case anyway). Standard counterfactual-based decision theory as in Gibbard and Harper (1978) defines

(3) EU(A) = ∑O V(O)Cr(A > O).
Assuming the Lewisian standards for evaluating counterfactuals, we now get the implausible result that Frank should not donate $1,000 to charity, even if he would like to do so, because if he did then he would have devoted his life to a mistake: Cr(A > M) is high.
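For illustration (the numbers here are my own stipulations, not Dorr's): if the outcomes are just M and ¬M with V(M) = −100 and V(¬M) = 0, and Cr(A > M) = 0.99, then (3) gives EU(A) = (−100)·0.99 + 0·0.01 = −99. The modest value of the donation itself can't compensate.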
The outcome-based variant (2) of Sobel's theory gets around the problem. While Cr(A > M) is high, Cr(A □→ M) is low. If Frank were to donate the money, he would have devoted his life to a mistake, but this would not be a causal consequence of the donation.
Dorr's own response is to stick with (3) but reject the Lewisian standards for evaluating the counterfactual 'A > O'. According to Dorr, what counterfactually depends on our actions in normal deterministic worlds is not the laws but the past. This leads to an analogous (although admittedly more far-fetched) problem if we imagine that Frank has devoted his life to defending a particular (highly specific) hypothesis P about the past, which he is convinced is true. On Dorr's account, Cr(A > M) is then high, and so EU(A) is low. Again, the Sobel-style theory avoids the problem: since Frank's present choice doesn't causally affect the past, Cr(A □→ M) is low.
Unfortunately, the Sobel-type theory doesn't help with other, closely related problems. Consider a variant of the "Betting on the Laws" case from Ahmed (2013). Frank, who knows that L is the true system of laws, is asked to either affirm or deny L. Tim will come to believe whatever Frank says. Frank's aim is that Tim has an accurate belief about whether or not L is true. He plans to affirm L. What is the expected utility of denying L? Well, if Frank chose to deny L then – by the Lewisian standards – L would be false, and Tim would come to have the true belief that L is false. Moreover, Tim would have this belief as a causal consequence of Frank's utterance. So (Deny L □→ True Belief) is true. On the Sobel-type theory (2), just like on the Gibbard-Harper theory (3), Frank should deny L. (If we use Dorr's standards instead of Lewis's, replace L with P.)
The fact that I had to introduce Tim highlights an arguably more serious problem with (2). The definition only considers outcomes that are causal consequences of the relevant act. Suppose Frank intrinsically cares about speaking the truth. Since whether or not he speaks the truth is not a causal consequence of any utterance, (2) wrongly predicts that Frank's desire is irrelevant to what he should say.
Anyway, let's return to Sobel's actual definition (1), which sums not over outcomes but over entire worlds. What does this say about the tricky scenarios involving Frank?
Consider again the case where Frank knows that the deterministic system L is true, and where he is about to affirm L. Let w be the world that would come about if Frank were to deny L. (We assume for simplicity that there is a unique such world, so that we can stick with the simple version of Sobel's definition.) Now (1) asks us to consider whether the entire world w is a causal consequence of Frank's denial (at w).
It's a strange question. Clearly, much of what happens at w is not a consequence of Frank's denial – for example, everything that happened before the denial. I would say that it's not true that the whole world w is a causal consequence of the denial.
If that is right, then A □→ w is true only if w is the actual world and A actually takes place. Conversely, if w is actual and A takes place, then A □→ w is true by clause (i) of Sobel's definition (assuming strong centring). It follows that Cr(A □→ w) = Cr(A ∧ w). And so (1) reduces to

(4) EU(A) = ∑w V(w)Cr(A ∧ w).
This happens to yield the intuitively correct verdict in our deterministic problems. When Frank is asked to affirm or deny L, for example, the credence-weighted sum of the utilities of the 'Affirm L' worlds is greater than that of the 'Deny L' worlds.
In general, however, (4) is surely crazy. It implies that every act that is certain not to be chosen has expected utility 0. Also, suppose you have a choice between $0.50 and $1.00, and your utility is measured by monetary payoff. If Cr($1.00) = 0.1 and hence Cr($0.50) = 0.9, then ∑w V(w)Cr($1.00 ∧ w) = 0.1 while ∑w V(w)Cr($0.50 ∧ w) = 0.45. So (4) says you should take the $0.50.
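A minimal numeric sketch of this computation (the payoffs and credences are the ones just stipulated):

    # EU under (4): EU(A) = ∑w V(w)Cr(A ∧ w). Each act is performed in exactly
    # one (coarse-grained) world here, so Cr(A ∧ w) collapses to the credence
    # of that world.
    acts = {
        "take $1.00": (1.00, 0.1),  # (payoff V(w), credence Cr(w))
        "take $0.50": (0.50, 0.9),
    }

    def eu4(act):
        value, credence = acts[act]
        return value * credence

    print(eu4("take $1.00"))  # 0.1
    print(eu4("take $0.50"))  # 0.45 – so (4) recommends the smaller payoff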
(4) is so crazy that it can't be what Sobel had in mind. He must have thought that entire counterfactual worlds are often causal consequences of ordinary counterfactual acts.
I don't have a clear grasp on the relevant notion of causal consequence. But never mind. If the causal restriction on □→ (that if B is actually false then A □→ B is true only if B would be a causal consequence of A) is not redundant, then Sobel's theory still faces a version of the problems created by (4).
Suppose the causal restriction is not redundant, so that there are cases where A > w is true while A □→ w is false. Now compare Sobel's definition (1) with the standard counterfactual-based definition (3), but with individual worlds in place of outcomes:

(5) EU(A) = ∑w V(w)Cr(A > w).
In the context of (3), we assume that A > B is a Stalnaker conditional, so that (3) and (5) are equivalent and ∑w Cr(A > w) = 1. Sobel's A □→ w evidently entails A > w. If the converse entailment fails because A > w is true at some world v while A □→ w is false, and if such a world v has positive credence, then ∑w Cr(A □→ w) is less than 1. (5) counts the credence of v towards the weight of V(w); (1) does not.
That's a problem (and not just because the so-called "expected utility" is not really an expectation in the technical sense). If the weights add up to less than 1 then the expected utility of the relevant act is wrongly skewed towards 0.
For example, consider an agent who only cares about doing A, so that all A worlds have value 1 while all non-A worlds have value 0. Then surely the expected utility of doing A should be 1. But if ∑w Cr(A □→ w) is less than 1 then EU(A) will be less than 1 as well.
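A minimal numeric sketch of the skew (the three-world credence distribution, and the assumption that the causal conditional fails at the third world, are illustrative stipulations):

    # An agent who only cares about doing A: every A-world has value 1.
    values = [1.0, 1.0, 1.0]

    # Under (5), the Stalnaker weights Cr(A > w) sum to 1.
    stalnaker_weights = [0.5, 0.25, 0.25]
    print(sum(v * p for v, p in zip(values, stalnaker_weights)))  # 1.0

    # Under (1), suppose that for the third candidate world, Cr(A > w) = 0.25
    # but Cr(A □→ w) = 0: that credence drops out, and the weights sum to 0.75.
    sobel_weights = [0.5, 0.25, 0.0]
    print(sum(v * p for v, p in zip(values, sobel_weights)))  # 0.75, skewed toward 0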
If we look at outcomes rather than worlds, as in (2), it is clear that A > O can be true while A □→ O is false (assuming we don't subscribe to localism). For example, if the agent assigns intrinsic value or disvalue to A itself, so that A is one of the outcomes O, then A > O is trivially true but A □→ O is false. I already complained above that (2) gets things wrong in this kind of case. In addition, (2) faces the problem of the weights not adding up to 1: ∑O Cr(A □→ O) may not equal 1 even if the outcomes form a partition.
In sum, Sobel's "strictly causal" account of expected utility faces a dilemma. Either there are worlds w for which A > w is true while A □→ w is false or there are none. If there are none, the causal restriction on the conditional is redundant and Sobel's theory reduces to the more familiar theory of Stalnaker and Gibbard and Harper, in which there is no special restriction on outcomes being causal consequences of the relevant acts. If, on the other hand, there are worlds w for which A > w is true while A □→ w is false, then ∑w Cr(A □→ w) is less than 1 and Sobel's definition makes the expected utility of A implausibly skewed towards 0.
(On an exegetical note, I suspect that Sobel did not realise that the causal condition in his analysis of 'A □→ B' amounts to a strengthening of the counterfactual conditional. Rather, he thought that there is a reading of the ordinary English conditional that builds in the causal condition. He mentions the condition in order to clarify that it is this sort of conditional that figures in (1). On reflection, it should be clear that English counterfactuals don't have such a reading: the reading would render 'A > A' false in worlds without causal loops, and it would invalidate weakening the consequent. Nonetheless, in the 1970s and 1980s many people seem to have thought that there is such a reading. Gibbard and Harper (1978), for example, suggest (on p.167) that their (3) is equivalent to (2), given that 'B is a causal consequence of A' can be analysed as '(A > B) and for some alternative A* to A, ¬(A* > B)'. They attribute this analysis to Sobel (1971), where I can't find it. The analysis wrongly predicts that every act is a causal consequence of itself, and that the conjunction of the past and the laws is a causal consequence of every act in a deterministic world.)