Here is an attempt at an argument against formulating causal decision theory in
terms of counterfactuals (loosely following up on the discussion in the previous
post). The point seems rather obvious, so it is probably old; does anyone know of a reference?
Suppose you would like to go for a walk, but only if it's not
raining. Unfortunately, it is raining heavily, so you have
almost decided to stay inside. Then you remember Gibbard and
Harper's paper "Counterfactuals and two kinds of expected
utility".
Let [] and <> express alethic necessity and alethic possibility, let @ stand for
'actually', and L for 'it is unalterable that'. We are going to prove that
if something happens, then it is unalterable that it happens.
We need the following principles:
- A <-> <>@A.
Something is the case iff it is possibly actually the case.
- <>A -> L<>A.
If something is alethically possible, one cannot make it
alethically impossible.
- L(A -> B) -> (LA -> LB).
If A -> B and A are both unalterable, then so is B.
- If A is provable then LA.
Logical truths are unalterable.
Here is the proof, with a sea battle for illustration.
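Spelled out, the derivation should run roughly as follows (a sketch in LaTeX, with S abbreviating 'there will be a sea battle tomorrow'; I treat the right-to-left half of the first principle as provable, so that the necessitation rule applies to it):

% needs amsmath (align*) and amssymb (\Diamond); L is rendered as \mathrm{L}, 'actually' as @
\begin{align*}
&1.\ S && \text{assumption: there will be a sea battle tomorrow}\\
&2.\ \Diamond @S && \text{from 1, by } S \leftrightarrow \Diamond @S\\
&3.\ \mathrm{L}\Diamond @S && \text{from 2, by } \Diamond A \to \mathrm{L}\Diamond A \text{ with } A := {@S}\\
&4.\ \mathrm{L}(\Diamond @S \to S) && \text{necessitation, applied to the provable } \Diamond @S \to S\\
&5.\ \mathrm{L}\Diamond @S \to \mathrm{L}S && \text{from 4, by } \mathrm{L}(A \to B) \to (\mathrm{L}A \to \mathrm{L}B)\\
&6.\ \mathrm{L}S && \text{from 3 and 5}
\end{align*}

Discharging the assumption gives S -> LS: if there will be a sea battle, it is unalterable that there will be one. Nothing in the derivation turns on the sea battle in particular, so A -> LA holds for any A.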
Alvin Goldman has just given this year's summer school here
in Cologne. When he put forward his view that what distinguishes good
ways of belief formation from other ways is their truth-conduciveness, I
found myself disagreeing and claiming that there is no general principle that
distinguishes the good ways from others. This is somewhat surprising
given that I've often claimed in recent times that the only epistemic
criterion for evaluating belief-formation is truth-conduciveness. Here
is how I think the two claims can go together.
In the old days, it was common to exclude individual constants from
quantified modal logic in favour of Russellian descriptions. I can see
how this works if we have either fixed domains (the same individuals
populating all worlds) or possibilist quantifiers. But in such systems
individual constants don't cause much trouble anyway. Can one also make
the description move in more liberal systems? I don't see how, but I guess
I'm just missing something obvious.
Consider a formula "possibly, a is F". We want to replace the name "a" by a description "the A".
Does the description get narrow scope ("possibly, the A is F") or wide scope ("the A is
possibly F")? Either way, we seem to get the wrong result.
There is a mistake on page 49 of Lewis's "Counterfactual dependence
and time's arrow" (1979). Since the mistake seems to be repeated all the
time, it might be worth pointing it out.
Page 49 is where Lewis lists similarity standards for his analysis
of counterfactuals. The analysis, recall, says that "if A were the
case, then C" is true iff the closest A-worlds are C-worlds (or, more
precisely, iff either there are no A-worlds or some A&C-worlds are
closer to the actual world than any A&~C world). Closeness is a matter
of similarity, and Lewis indicates what the relevant respects of
similarity might be for certain ordinary counterfactuals in section
3.3 of his 1973 book, and again in the 1979 article on counterfactual
dependence. Roughly, the closest A-worlds are those that perfectly
match the actual world across as much of spacetime as possible without
diverse and widespread violations of the actual laws. This won't do
for indeterministic worlds, where generally no laws need to be
violated at all in order to ensure perfect match of futures even after
earlier divergence. So Lewis restricts his standards to deterministic
worlds, returning to the indeterministic case in the 1986 postscript
to the 1979 paper.
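For reference, here is the truth condition stated in the paragraph above, put into symbols (a LaTeX sketch; $w <_{@} v$ abbreviates 'world $w$ is closer to the actual world than $v$'):

% needs amssymb (\Box)
\[
A \mathbin{\Box\!\!\to} C \ \text{is true at}\ @ \quad\text{iff}\quad
\neg\exists w\,(w \models A)\ \lor\ \exists w\,\bigl(w \models A \wedge C\ \wedge\ \forall v\,(v \models A \wedge \neg C \to w <_{@} v)\bigr)
\]

The similarity standards on page 49 are then meant to tell us what $<_{@}$ amounts to for ordinary counterfactuals.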
I'm back in Germany. Nice and rainy here. Blogging will also resume at some point or other.
I'm off to the Blue Mountains for a week. In lieu of philosophical
content, here is a rant on semantic contents and hyperintensions that
I wrote last year.
When philosophers talk about meanings (or contents, or semantic
values), they rarely explain what these things are meant to do -- what
constraints an adequate theory of meaning would have to meet. Trying
to figure out those constraints from what is implicitly used in
discussions and arguments, one gets a laundry list of miscellaneous
features with hardly any theoretical unity. Meanings are supposed to
determine (together with syntactic structure) the truth-value of
sentences; they are supposed to be known by competent speakers; they
are supposed to be conventionally associated with symbols and sounds;
they are supposed to track what a sentence is (intuitively) about, and
also in which possible worlds it is (intuitively?) true; they are
supposed to be part of a model of how our brain processes and
generates words; they are supposed to be possible objects of beliefs
and desires; they are supposed to play various roles in speech act
theory; they are supposed to be the referents of 'that' clauses; they are
supposed to be such that one can truly utter 'Fred said that P' if and
only if Fred uttered a sentence whose meaning is the same as the
meaning of 'P'. And so on and on.
The "something even bigger" that I mentioned when I made the online papers feed public has finally arrived: philpapers.org.
Sometime later this year I will move to Cologne (Germany) as part of a recently approved Emmy Noether project on apriority and understanding. The other parts of the project so far are Brendan Balcerak Jackson and Magdalena Balcerak Jackson, but we're looking for PhD students. If you might be interested, here are the details.
Unrelatedly, I made some changes to the blog. Let me know if anything's broken.
I forgot to mention that my book on Lewis was released a couple of weeks ago. It's a distant descendant of my PhD thesis, and in German.