Imagine you and I are walking down a long path. You are ahead,
but we can communicate on the phone. If you say, "there are strawberries here" and I trust you, I should not come to believe that there
are strawberries where I am, but that there are strawberries wherever
you are. If I also know that you are 2 km ahead, I should come to
believe that there are strawberries 2 km down the path. But what's the
general rule for deferring to somebody with self-locating beliefs?
I used to agree with Lewis that classical mereology, including
mereological universalism, is "perfectly understood, unproblematic,
and certain". But then I fell into a dogmatic slumber in which it seemed
to me that the debate over mereology is
somehow non-substantive: that there is no fact of the
matter. I was recently awakened from this slumber by a footnote in
Ralf Busse's forthcoming article "The
Adequacy of Resemblance Nominalism" (you should read the whole
thing: it's terrific). So now I once again think that Lewis was right. Let
me describe the slumber and the awakening.
Causal models are a useful tool for reasoning about causal
relations. Meek
and Glymour 1994 suggested that they also provide new resources to
formulate causal decision theory. The suggestion has been endorsed by
Pearl
2009, Hitchcock
2016, and others. I will discuss three problems with this proposal
and suggest that fixing them leads us back to more or less the
decision theory of Lewis
1981 and Skyrms
1982.
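For the formal gist, stated very roughly in Pearl's do-notation (a sketch; the papers just cited differ in the details): the idea is to compute the expected utility of an act A from intervention probabilities rather than ordinary conditional probabilities,

\[ EU(A) = \sum_{w} P(w \mid do(A)) \, U(w), \]

where evidential decision theory would instead use \( P(w \mid A) \).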
What makes the Sleeping Beauty problem non-trivial is Beauty's
potential memory loss on Monday night. In my view, this means that
Sleeping Beauty should be modeled as a case of potential epistemic
fission: if the coin lands tails, any update Beauty makes to her
beliefs in the transition from Sunday to Monday will also fix her
beliefs on Tuesday, and so the Sunday state effectively has two
epistemic successors, one on Monday and one on Tuesday. All accounts of
epistemic fission that I'm aware of then entail halfing.
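To see why, here is the arithmetic on one natural way of spelling out what fission accounts require (a sketch, not any particular author's formulation): fission by itself should not change one's credence in uncentered propositions, however that credence gets divided among the new centres. So

\[ Cr_{Mon}(\mathit{Heads}) = Cr_{Sun}(\mathit{Heads}) = 1/2, \]

with the tails half split between the Monday centre and the Tuesday centre (say, 1/4 each) -- which is the halfer verdict.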
Decision theory comes in many flavours. One of the most important
but least discussed divisions concerns the individuation of
outcomes. There are basically two camps. One side -- dominant in
economics, psychology, and social science -- holds that in a
well-defined decision problem, the outcomes are exhausted by a
restricted list of features: in the most extreme version, by the
amount of money the agent receives as the result of the relevant
choice. In less extreme versions, we may also consider the agent's
social status or her overall well-being. But we are not allowed to
consider non-local features of an outcome such as the act that brought
it about, the state under which it was chosen, or the alternative acts
available at the time. This doctrine doesn't have a name. Let's call
it localism (or utility localism).
Necessitarian and dispositionalist accounts of laws of nature have
a well-known problem with "global" laws like the conservation of
energy, for these laws don't seem to arise from the dispositions of
individual objects, nor from necessary connections between fundamental
properties. It is less well-known that a similar, and arguably more
serious, problem arises for dynamical laws in general, including
Newton's second law, the Schrödinger equation, and any other law
that allows one to predict the future from the present.
Decision theory says that faced with a number of options, one
should choose an option that maximizes expected utility. It does not
say that before making one's choice, one should calculate and compare
the expected utility of each option. In fact, if calculations are
costly, decision theory seems to say that one should never calculate
expected utilities.
Informally, the argument goes as follows. Suppose an agent faces a
choice between a number of straight options (going left, going
right, taking an umbrella, etc.), as well as the option of calculating
the expected utility of all straight options and then executing
whichever straight option was found to have greatest expected
utility. Now this option (whichever it is) could also be taken
directly. And if calculations are costly, taking the option directly
has greater expected utility than taking it as a result of the
calculation.
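In symbols (a sketch, on the simplifying assumption that calculating subtracts a fixed cost c > 0 from whatever is achieved): let o_1, ..., o_n be the straight options and o* one with maximal expected utility. Then

\[ EU(\text{calculate and execute}) = EU(o^*) - c < EU(o^*) = \max_i EU(o_i), \]

so at least one straight option, namely o*, beats the option of calculating.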
I've moved all my websites to a new server. Let me know if you notice anything that stopped working.
(Philosophy blogging will resume shortly as well.)
I'm currently teaching a course on decision theory. Today we discussed
chapter 2 of Jim Joyce's Foundations of Causal Decision Theory,
which is excellent. But there's one part I don't really get.
Joyce mentions that Savage identifies acts with functions from states
to outcomes, and that Jeffrey once suggested representing such
functions as conjunctions of material conditionals: for example, if an
act maps S1 to O1 and S2 to O2, the corresponding proposition would be
(S1 → O1) & (S2 → O2). According to Joyce, this
conception of acts "cannot be correct" (p.62). That's the part I don't
really get.
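To fix ideas, here is a toy possible-worlds rendering of the Jeffrey-style representation in Python (the four worlds, the two states, and the outcomes are all invented for illustration; propositions are modelled as sets of worlds):

    # Toy model; worlds, states, and outcomes below are invented for illustration.
    worlds = {"w1", "w2", "w3", "w4"}
    S1, S2 = {"w1", "w2"}, {"w3", "w4"}  # two states, partitioning the worlds
    O1, O2 = {"w1"}, {"w3"}              # outcomes, also modelled as propositions

    def material_conditional(antecedent, consequent):
        # S -> O is true at a world iff S fails there or O holds there
        return (worlds - antecedent) | consequent

    # Jeffrey's suggestion: the act as the proposition (S1 -> O1) & (S2 -> O2)
    act = material_conditional(S1, O1) & material_conditional(S2, O2)
    print(sorted(act))  # ['w1', 'w3']

On this rendering the act-proposition is true exactly at the worlds where whichever state obtains is accompanied by its assigned outcome.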
A lot of what I do in philosophy is develop models: models of
rational choice, of belief update, of semantics, of communication,
etc. Such models are supposed to shed light on real-world phenomena,
but the connection between model and reality is not completely
straightforward.
For example, consider decision theory as a descriptive model of
real people's choices. It may seem straightforward what this model
predicts, and therefore how it can be tested: it predicts that people
always maximize expected utility. But what are the probabilities and
utilities that define expected utility? It is no part of standard
decision theory that an agent's probabilities and utilities conform in
a certain way to their publicly stated goals and opinions. Assuming
such a link is one way of connecting the decision-theoretic model with
real agents and their choices, but it is not the only (and in my view
not the most fruitful) way. A similar question arises for the agent's
options. Decision theory simply assumes that a range of "acts" are
available to the agent. But what should count as an act in a
real-world situation: a type of overt behaviour, or a type of
intention? And what makes an act available? Decision theory doesn't
answer these questions.
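To make the gap vivid, here is the bare formal model in Python (a minimal sketch; every number and label is stipulated, which is precisely the point: nothing in the formalism says where the probabilities and utilities come from, or what performing "umbrella" amounts to in the world):

    # Bare expected-utility model; all numbers and labels are stipulated.
    probabilities = {"rain": 0.3, "sun": 0.7}
    utilities = {
        ("umbrella", "rain"): 5, ("umbrella", "sun"): 3,
        ("no umbrella", "rain"): 0, ("no umbrella", "sun"): 6,
    }
    acts = ["umbrella", "no umbrella"]

    def expected_utility(act):
        return sum(p * utilities[(act, state)] for state, p in probabilities.items())

    best = max(acts, key=expected_utility)
    print(best, expected_utility(best))  # the maximizing act and its EU (here about 4.2)

The model delivers a verdict once probabilities, utilities, and acts are supplied; the hard part is the bridge from these inputs to a real agent in a real situation.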