Is 'can' information-sensitive in an interesting way, like 'ought'?
An example of uninteresting information-sensitivity is (1):
(1) If you can lift this backpack, then you can also lift that bag.
Informally speaking, the if-clause takes wide scope in (1). The
truth-value of the consequent 'you can lift that bag' varies from
world to world, and the if-clause directs us to evaluate the statement
at worlds where the antecedent is true.
Many accounts of deontic modals that have been developed in response
to the miners puzzle have a flaw that I think hasn't been pointed out
yet: they falsely predict that you ought to rescue all the miners.
The miners puzzle goes as follows.
Ten miners are trapped in a shaft and threatened by
rising water. You don't know whether the miners are in shaft A or
in shaft B. You can block the water from entering one shaft, but you
can't block both. If you block the correct shaft, all ten will
survive. If you block the wrong shaft, all of them will die. If you
do nothing, one miner will die.
Let's assume that the right choice in your state of uncertainty is to
do nothing. In that sense, then, it's true that you ought to do nothing.
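To see why this is a natural assumption, here is a quick expected-value calculation, a Python sketch of my own, using the usual 50/50 credence over the two shafts:

```python
# Expected number of surviving miners for each option,
# given a credence of 0.5 that the miners are in shaft A.
p_in_A = 0.5

expected_survivors = {
    "block A":    p_in_A * 10 + (1 - p_in_A) * 0,   # all survive if right, none if wrong
    "block B":    p_in_A * 0  + (1 - p_in_A) * 10,
    "do nothing": 9,                                # exactly one miner dies either way
}

for option, survivors in expected_survivors.items():
    print(f"{option}: {survivors} expected survivors")
# block A: 5.0, block B: 5.0, do nothing: 9 -- doing nothing maximizes expected survivors
```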
There's something odd about how people usually discuss iterated
prisoner's dilemmas (and other such games).
Let's say you and I each have two options: "cooperate" and
"defect". If we both cooperate, we get $10 each; if we both defect, we
get $5 each; if only one of us cooperates, the cooperator gets $0 and
the defector $15.
This game might be called a monetary prisoner's dilemma, because
it has the structure of a prisoner's dilemma if utility is measured by
monetary payoff. But that's not how utility is usually
understood.
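To make the monetary structure explicit, here is the payoff matrix together with a check that defecting dominates cooperating in dollar terms. This is a small Python sketch of my own; the dominance check is just an illustration, not part of the game's definition:

```python
# Monetary payoffs (mine, yours) for each combination of choices.
payoffs = {
    ("cooperate", "cooperate"): (10, 10),
    ("cooperate", "defect"):    (0, 15),
    ("defect",    "cooperate"): (15, 0),
    ("defect",    "defect"):    (5, 5),
}

# In dollar terms, defecting dominates cooperating: whatever you do,
# I get more money by defecting.
for your_choice in ("cooperate", "defect"):
    if_i_cooperate = payoffs[("cooperate", your_choice)][0]
    if_i_defect = payoffs[("defect", your_choice)][0]
    assert if_i_defect > if_i_cooperate
    print(f"If you {your_choice}: ${if_i_defect} by defecting vs ${if_i_cooperate} by cooperating")
```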
Suppose you prefer $100 today to $105 tomorrow. At the same time, you
prefer $105 in 11 days to $100 in 10 days. During the next 10
days, your basic preferences don't change, so that at the end of that
period (on day 10), you still prefer $100 now (on day 10) to $105 the
next day. Your future self then disagrees with your earlier self about
whether it's better to get $100 on day 10 or $105 on day 11.
In economics jargon, your preferences are called time
inconsistent. Time inconsistency is supposed to be a failure of
ideal rationality.
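One simple way to model this pattern is hyperbolic discounting. The following sketch uses a made-up discount rate purely for illustration; nothing in the argument depends on the specific numbers:

```python
# Hyperbolic discounting: a reward of `amount` dollars, delayed by `delay` days,
# is valued at amount / (1 + k * delay). The rate k = 0.08 is made up for illustration.
def value(amount, delay, k=0.08):
    return amount / (1 + k * delay)

# Judged from day 0:
print(value(100, 0), value(105, 1))    # 100.0 vs ~97.2: prefer $100 today to $105 tomorrow
print(value(100, 10), value(105, 11))  # ~55.6 vs ~55.9: prefer $105 on day 11 to $100 on day 10

# Judged from day 10, with the same discount function applied to the new "now":
print(value(100, 0), value(105, 1))    # the day-10 self prefers $100 (day 10) to $105 (day 11)
```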
In the last four months I wrote a draft of a possible textbook on
decision theory. Here it is.
I've used these notes as the basis for my honours/MSc course "Belief,
Desire, and Rational Choice". They're tailored to my own use, but they
might be useful to others as well.
The main difference from other textbooks is that I talk at length
about the structure and interpretation of subjective probabilities and
utilities. In part, this is because it makes a great difference to the
plausibility of the expected utility norm whether, say, utilities are
defined in terms of individual welfare, in terms of choice
dispositions, or taken as primitive. But I also think these are
independently interesting philosophical topics.
The decision-theoretic concept of preference is linked to the concepts
of subjective probability and utility by the expected utility
principle:
(EUP) A rational agent prefers X to Y iff the expected
utility of X exceeds the expected utility of Y.
Economists usually take preference to be the more basic concept and
interpret the EUP as an implicit definition of the agent's utilities
(and sometimes also her probabilities).
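Read in the other direction, from given probabilities and utilities to preference, the EUP is easy to spell out. Here is a minimal sketch, with made-up states, probabilities, and utilities:

```python
# Expected utility as a probability-weighted average of outcome utilities.
def expected_utility(probability, utility):
    return sum(probability[state] * utility[state] for state in probability)

probability = {"rain": 0.3, "sun": 0.7}
utility_X = {"rain": 0, "sun": 10}   # say, going to the beach
utility_Y = {"rain": 4, "sun": 5}    # say, going to the museum

eu_X = expected_utility(probability, utility_X)  # 7.0
eu_Y = expected_utility(probability, utility_Y)  # 4.7
# By the EUP, an agent with these probabilities and utilities prefers X to Y.
print(eu_X, eu_Y)
```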
According to a popular picture, some beliefs are justified by "seemings": under
certain conditions, if it seems to you that P, then you are justified
in believing that P, without the assistance of other beliefs. So
seemings provide a kind of foundation for belief, albeit a fallible
kind of foundation.
But most of our beliefs are not justified by seemings (or by
beliefs which are justified by seemings, etc.). I once learned that
Luanda is the capital of Angola and I've retained this belief for many
years, although I rarely think about Angola and thus rarely experience
any relevant seemings that could justify the belief.
Friends of primitive powers and dispositions often contrast their
view with an alternative view, usually attributed to Lewis, on which
modal facts about powers, dispositions, laws, counterfactuals etc. are
grounded in facts about other possible worlds. But Lewis never held
that alternative view – nor did anyone else, as far as I
know. The allegedly mainstream alternative is entirely made of
straw. The real alternative that should be addressed is the
reductionist view that powers and dispositions are reducible to
ultimately non-modal elements of the actual world.
In his "Dicing
with Death" (2014), Arif Ahmed presents the following scenario as
a counterexample to causal decision theory (CDT):
You are thinking about going to Aleppo or staying in
Damascus. Death has predicted where you will be and is waiting for
you there. For a small fee, you can delegate your choice to a coin
toss whose outcome Death can't predict.
Tossing the coin promises to reduce the chance of death from about
1 to 1/2. Nonetheless, CDT seems to suggest that you shouldn't toss
the coin. To illustrate, suppose you are currently completely
undecided and thus give equal credence to Death being in Aleppo and to
Death being in Damascus. Then you're 50 percent confident that if you
were to stay in Damascus, you would survive; similarly for
going to Aleppo. You're also 50 percent confident that you would
survive if you were to toss the coin, but in that case you'd have
to pay the small fee. So it's not worth paying.
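To put rough numbers on this reasoning (the utilities and the fee below are made-up illustrative figures):

```python
# CDT expected utilities, computed with the agent's current (undecided) credences.
p_death_in_aleppo = 0.5        # credence that Death is waiting in Aleppo
u_survive, u_die, fee = 10, 0, 1

eu_damascus = p_death_in_aleppo * u_survive + (1 - p_death_in_aleppo) * u_die  # 5.0
eu_aleppo   = (1 - p_death_in_aleppo) * u_survive + p_death_in_aleppo * u_die  # 5.0
eu_coin     = 0.5 * u_survive + 0.5 * u_die - fee                              # 4.0

# From the undecided standpoint, the coin buys the same 50% chance of survival
# at an extra cost, so CDT ranks it below simply picking a city.
print(eu_damascus, eu_aleppo, eu_coin)
```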
Bob's favourite piano piece is Beethoven's Moonlight Sonata. Alice
would like to play Bob's favourite piece, and she can play the
Moonlight Sonata, but she doesn't know that it is Bob's favourite piece,
nor can she find out that it is. Can Alice play Bob's favourite
piano piece?
In one sense yes, in another no. It's a kind of de re/de dicto
ambiguity. Alice can play what is in fact Bob's favourite piece, but
she can't play it "under that description", loosely speaking.