Why maximize expected utility? One supporting consideration that is occasionally mentioned (although rarely spelled out or properly discussed) is that maximizing expected utility tends to produce desirable results in the long run. More specifically, the claim is something like this:
(*) If you always maximize expected utility, then over time you're likely to maximize actual utility.
Since "utility" is (by definition) something you'd rather have more of than less, (*) does look like a decent consideration in favour of maximizing expected utility. But is (*) true?
According to realist structuralism, mathematics is the study of
structures. Structures are understood to be special kinds of complex
properties that can be instantiated by particulars together with
relations between these particulars. For example, the field of complex
numbers is assumed to be instantiated by any suitably large collection
of particulars in combination with four operations that satisfy certain
logical constraints. (The four operations correspond to addition,
subtraction, multiplication, and division.)
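To give a flavour of the constraints, here is a small sample of the standard field axioms (my selection, not a complete list):

```latex
\begin{gather*}
\forall x\,\forall y\;(x + y = y + x)\\
\forall x\,\forall y\,\forall z\;\bigl(x \cdot (y + z) = x \cdot y + x \cdot z\bigr)\\
\forall x\;\bigl(x \neq 0 \rightarrow \exists y\,(x \cdot y = 1)\bigr)
\end{gather*}
```

These say, respectively, that addition is commutative, that multiplication distributes over addition, and that every non-zero element has a multiplicative inverse.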
A might counterfactual is a statement of the form 'if so-and-so were
the case then such-and-such might be the case'. I used to think that
there are different kinds of might counterfactuals: that sometimes
the 'might' takes scope over the entire conditional, and other times
it does not.
For example, suppose we have an indeterministic coin that we don't
toss. In this context, I'd say (1) is true and (2) is false.
(1) If I had tossed the coin it might have landed heads.
(2) If I had tossed the coin it would have landed heads.
These intuitions are controversial. But if they are correct, then the
might counterfactual (1) can't express that the corresponding would
counterfactual is epistemically possible. For we know that the would
counterfactual is false. That is, the 'might' here doesn't scope over
the conditional. Rather, the might counterfactual (1) seems to express
the dual of the would counterfactual (2), as Lewis suggested in
Counterfactuals: 'if A then might B' seems to be equivalent to
'not: if A then would not-B'.
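In Lewis's own notation, with a boxed arrow for the would counterfactual and a diamond arrow for the might counterfactual, the duality is:

```latex
A \mathbin{\Diamond\!\!\rightarrow} B \;\equiv\; \neg\,(A \mathbin{\Box\!\!\rightarrow} \neg B)
```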
I stumbled across a few interesting free books in the last few days.
1. Tony Roy has a 1051-page introduction
to logic on his homepage, which slowly and evenly proceeds from
formalising ordinary-language arguments all the way to proving
Gödel's second incompleteness theorem. All entirely mainstream
and classical, but it looks nicely presented, with lots of exercises.
2. Ariel Rubinstein has made his six
books available online (in exchange for some personal
information): Bargaining and Markets, A Course in Game
Theory, Modeling Bounded Rationality, Lecture Notes in
Microeconomics, Economic Fables, and the intriguing
Economics and Language, which applies tools from economics to
the study of meaning.
Is 'can' information-sensitive in an interesting way, like 'ought'?
An example of uninteresting information-sensitivity is (1):
(1) If you can lift this backpack, then you can also lift that bag.
Informally speaking, the if-clause takes wide scope in (1). The
truth-value of the consequent 'you can lift that bag' varies from
world to world, and the if-clause directs us to evaluate the statement
at worlds where the antecedent is true.
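A rough way to put the point (just a sketch; 'E_w' is my label for the set of relevant worlds at w, and nothing hangs on the details): (1) is true at a world w just in case

```latex
\forall w' \in E_w:\ \text{you can lift this backpack at } w' \;\rightarrow\; \text{you can lift that bag at } w'
```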
Many accounts of deontic modals that have been developed in response
to the miners puzzle have a flaw that I think hasn't been pointed out
yet: they falsely predict that you ought to rescue all the miners.
The miners puzzle goes as follows.
Ten miners are trapped in a shaft and threatened by
rising water. You don't know whether the miners are in shaft A or
in shaft B. You can block the water from entering one shaft, but you
can't block both. If you block the correct shaft, all ten will
survive. If you block the wrong shaft, all of them will die. If you
do nothing, one miner will die.
Let's assume that the right choice in your state of uncertainty is to
do nothing. In that sense, then, (1) is true:
(1) You ought to block neither shaft.
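The usual rationale is a quick expected-value calculation (assuming, for illustration, that utility is simply the number of miners saved and that each hypothesis about their location has probability 1/2):

```latex
\begin{align*}
EU(\text{block A})    &= \tfrac{1}{2}\cdot 10 + \tfrac{1}{2}\cdot 0 = 5\\
EU(\text{block B})    &= \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 10 = 5\\
EU(\text{do nothing}) &= \tfrac{1}{2}\cdot 9 + \tfrac{1}{2}\cdot 9 = 9
\end{align*}
```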
There's something odd about how people usually discuss iterated
prisoner dilemmas (and other such games).
Let's say you and I each have two options: "cooperate" and
"defect". If we both cooperate, we get $10 each; if we both defect, we
get $5 each; if only one of us cooperates, the cooperator gets $0 and
the defector $15.
This game might be called a monetary prisoner dilemma, because
it has the structure of a prisoner dilemma if utility is measured by
monetary payoff. But that's not how utility is usually
understood.
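For reference, here is the monetary payoff structure just described, in code (a sketch; the labels are mine), making the dominance reasoning explicit:

```python
# Monetary payoffs (your payoff, my payoff) for the game described above.
payoffs = {
    ("cooperate", "cooperate"): (10, 10),
    ("cooperate", "defect"):    (0, 15),
    ("defect",    "cooperate"): (15, 0),
    ("defect",    "defect"):    (5, 5),
}

# Whatever I do, you get more money by defecting than by cooperating...
for my_move in ("cooperate", "defect"):
    assert payoffs[("defect", my_move)][0] > payoffs[("cooperate", my_move)][0]

# ...and yet mutual defection leaves each of us with less money than
# mutual cooperation would.
assert payoffs[("defect", "defect")][0] < payoffs[("cooperate", "cooperate")][0]
assert payoffs[("defect", "defect")][1] < payoffs[("cooperate", "cooperate")][1]
```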
Suppose you prefer $100 today to $105 tomorrow. You also prefer $105 in 11 days to $100 in 10 days. During the next 10
days, your basic preferences don't change, so that at the end of that
period (on day 10), you still prefer $100 now (on day 10) to $105 the
next day. Your future self then disagrees with your earlier self about
whether it's better to get $100 on day 10 or $105 on day 11.
In economics jargon, your preferences are called time
inconsistent. Time inconsistency is supposed to be a failure of
ideal rationality.
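One textbook model that generates exactly this pattern (purely for illustration, with made-up parameter values) is quasi-hyperbolic or "beta-delta" discounting, on which everything in the future is penalized by an extra factor beta < 1:

```python
# Quasi-hyperbolic ("beta-delta") discounting: rewards received right now
# count at face value; anything in the future is scaled down by BETA,
# and further by DELTA per day of delay. Parameter values are made up.

BETA, DELTA = 0.5, 1.0

def present_value(amount, delay_in_days):
    """Discounted value, from today's perspective, of getting `amount` after `delay_in_days`."""
    if delay_in_days == 0:
        return amount
    return BETA * (DELTA ** delay_in_days) * amount

# Seen from day 0: $100 now beats $105 tomorrow ...
assert present_value(100, 0) > present_value(105, 1)     # 100  > 52.5
# ... but $105 on day 11 beats $100 on day 10.
assert present_value(105, 11) > present_value(100, 10)   # 52.5 > 50

# Seen from day 10, those same two rewards are 0 and 1 days away, and the
# ranking flips: the day-10 self takes the $100.
assert present_value(100, 0) > present_value(105, 1)
```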
In the last four months I wrote a draft of a possible textbook on
decision theory. Here it is.
I've used these notes as the basis for my honours/MSc course "Belief,
Desire, and Rational Choice". They're tailored to my usage, but they
might be useful to others as well.
The main difference from other textbooks is that I talk at length
about the structure and interpretation of subjective probabilities and
utilities. In part, this is because it makes a great difference to the
plausibility of the expected utility norm whether, say, utilities are
defined in terms of individual welfare, in terms of choice
dispositions, or taken as primitive. But I also think these are
independently interesting philosophical topics.
The decision-theoretic concept of preference is linked to the concepts
of subjective probability and utility by the expected utility
principle:
(EUP) A rational agent prefers X to Y iff the expected
utility of X exceeds the expected utility of Y.
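Here the expected utility of an option can be spelled out in the familiar way (one standard, Savage-style formulation, relative to a partition of states S_1, ..., S_n, a credence function Cr, and a utility function U):

```latex
EU(X) \;=\; \sum_{i=1}^{n} Cr(S_i)\cdot U(\text{the outcome of } X \text{ in } S_i)
```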
Economists usually take preference to be the more basic concept and
interpret the EUP as an implicit definition of the agent's utilities
(and sometimes also her probabilities).