Decision theory says that faced with a number of options, one
should choose an option that maximizes expected utility. It does not
say that before making one's choice, one should calculate and compare
the expected utility of each option. In fact, if calculations are
costly, decision theory seems to say that one should never calculate
expected utilities.
Informally, the argument goes as follows. Suppose an agent faces a
choice between a number of straight options (going left, going
right, taking an umbrella, etc.), as well as the option of calculating
the expected utility of all straight options and then executing
whichever straight option was found to have greatest expected
utility. Now this option (whichever it is) could also be taken
directly. And if calculations are costly, taking the option directly
has greater expected utility than taking it as a result of the
calculation.
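To make the argument concrete, here is a small Python sketch. All the specifics (the two states, the two straight options, the utilities and the calculation cost) are invented for illustration; the point is only that the calculate-first option inherits the best straight option's expected utility minus the cost.

```python
# A minimal numerical sketch of the argument; all numbers are made up.
# Two states of the world, two "straight" options, and a third option
# that first computes expected utilities and then executes whichever
# straight option comes out best, at a fixed calculation cost.

probs = {"rain": 0.3, "no_rain": 0.7}          # credences over states
utility = {                                     # utility of each option in each state
    "umbrella":    {"rain": 5, "no_rain": 3},
    "no_umbrella": {"rain": 0, "no_rain": 6},
}
calc_cost = 0.5                                 # assumed cost of doing the calculation

def eu(option):
    """Expected utility of a straight option."""
    return sum(probs[s] * utility[option][s] for s in probs)

best = max(utility, key=eu)

# The "calculate first" option yields the best straight option's outcome
# in every state, minus the calculation cost.
eu_calculate = eu(best) - calc_cost

print({o: eu(o) for o in utility})              # {'umbrella': 3.6, 'no_umbrella': 4.2}
print("best straight option:", best, eu(best))  # no_umbrella, 4.2
print("calculate-then-act EU:", eu_calculate)   # 3.7, strictly less than eu(best)
```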
I've moved all my websites to a new server. Let me know if you notice anything that stopped working.
(Philosophy blogging will resume shortly as well.)
I'm currently teaching a course on decision theory. Today we discussed
chapter 2 of Jim Joyce's Foundations of Causal Decision Theory,
which is excellent. But there's one part I don't really get.
Joyce mentions that Savage identifies acts with functions from states
to outcomes, and that Jeffrey once suggested representing such
functions as conjunctions of material conditionals: for example, if an
act maps S1 to O1 and S2 to O2, the corresponding proposition would be
(S1 → O1) & (S2 → O2). According to Joyce, this
conception of acts "cannot be correct" (p.62). That's the part I don't
really get.
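To fix ideas, here is a toy Python rendering of the two representations. The states, outcomes and worlds are of course made up, and nothing in it is meant to capture Joyce's objection.

```python
# Savage-style: an act is a function (here a dict) from states to outcomes.
# Jeffrey's suggestion: the corresponding proposition is the conjunction of
# material conditionals (S1 -> O1) & (S2 -> O2). States and outcomes are
# treated as atoms; a possible world is the set of atoms true at it.

act = {"S1": "O1", "S2": "O2"}

def material_conditional(antecedent, consequent, world):
    """S -> O as a material conditional: true unless S holds and O fails."""
    return (antecedent not in world) or (consequent in world)

def jeffrey_proposition(act, world):
    """The conjunction (S1 -> O1) & (S2 -> O2) & ... evaluated at a world."""
    return all(material_conditional(s, o, world) for s, o in act.items())

# Some sample worlds, given as sets of true atoms.
worlds = [
    {"S1", "O1"},   # S1 obtains and the act's outcome for S1 obtains
    {"S1", "O2"},   # S1 obtains but a different outcome obtains
    {"S2", "O2"},
    set(),          # neither state obtains: the conjunction is vacuously true
]

for w in worlds:
    print(sorted(w), jeffrey_proposition(act, w))
```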
A lot of what I do in philosophy is develop models: models of
rational choice, of belief update, of semantics, of communication,
etc. Such models are supposed to shed light on real-world phenomena,
but the connection between model and reality is not completely
straightforward.
For example, consider decision theory as a descriptive model of
real people's choices. It may seem straightforward what this model
predicts and therefore how it can be tested: it predicts that people
always maximize expected utility. But what are the probabilities and
utilities that define expected utility? It is no part of standard
decision theory that an agent's probabilities and utilities conform in
a certain way to their publicly stated goals and opinions. Assuming
such a link is one way of connecting the decision-theoretic model with
real agents and their choices, but it is not the only (and in my view
not the most fruitful) way. A similar question arises for the agent's
options. Decision theory simply assumes that a range of "acts" are
available to the agent. But what should count as an act in a
real-world situation: a type of overt behaviour, or a type of
intention? And what makes an act available? Decision theory doesn't
answer these questions.
There has been a lively debate in recent years about the
relationship between graded belief and ungraded belief. The debate
presupposes something we should regard with suspicion: that there is
such a thing as ungraded belief.
Compare earthquakes. I'm not an expert on earthquakes, but I know
that they vary in strength. How exactly to measure an earthquake's
strength is to some extent a matter of convention: we could have used
a non-logarithmic scale; we could have counted duration as an aspect
of strength, and so on. So when we say that an earthquake has
magnitude 6.4, we characterize a central aspect of an earthquake's
strength by locating it on a conventional scale.
Philosophers (and linguists) often appeal to judgments about the
validity of general principles or arguments. For example, they judge
that if C entails D, then 'if A then C' entails 'if A then D'; that
'it is not the case that it will be that P' is equivalent to 'it will
be the case that not P'; that the principles of S5 are valid for
metaphysical modality; that 'there could have been some person x such
that actually x sits and actually x doesn't sit' is unsatisfiable; and so on. In my view, such judgments
are almost worthless: they carry very little evidential weight.
The following principles have something in common.
Conditional Coordination Principle.
A rational person's credence in a conditional A->B should equal the
ratio of her credence in the corresponding propositions A&B and A;
that is, Cr(A->B) = Cr(B/A) = Cr(A&B)/Cr(A).
Normative Coordination Principle.
On the supposition that A is what should be done, a rational agent
should be motivated to do A; that is, very roughly, Des(A/Ought(A))
> 0.5.
Probability Coordination Principle.
On the supposition that the chance of A is x, a rational agent
should assign credence x to A; that is, roughly, Cr(A/Ch(A)=x) = x.
Nomic Coordination Principle.
On the supposition that it is a law of nature that A, a rational agent
should assign credence 1 to A; that is, Cr(A/L(A)) = 1.
All these principles claim that an agent's attitudes towards a certain
kind of proposition rationally constrain their attitudes towards other
propositions.
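All four principles trade on the notion of conditional credence, so it may help to have a minimal sketch of how that is computed. The credence function below, defined over four invented worlds, is just for illustration; the ratio formula is the one from the first principle.

```python
# Conditional credence as a ratio: Cr(B/A) = Cr(A & B) / Cr(A).
# A credence function: probabilities over four possible worlds, each world
# given as the (frozen) set of atomic propositions true at it.
credence = {
    frozenset({"A", "B"}): 0.3,
    frozenset({"A"}):      0.1,
    frozenset({"B"}):      0.2,
    frozenset():           0.4,
}

def cr(*atoms):
    """Credence that all the given atoms are true."""
    return sum(p for w, p in credence.items() if all(a in w for a in atoms))

def cr_given(b, a):
    """Conditional credence Cr(B/A) = Cr(A & B) / Cr(A)."""
    return cr(a, b) / cr(a)

print(cr_given("B", "A"))   # 0.3 / 0.4 = 0.75
```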
Humeans about laws of nature hold that the laws are nothing over
and above the history of occurrent events in the world. Many
anti-Humeans, by contrast, hold that the laws somehow "produce" or
"govern" the occurrent events and thus must be metaphysically prior to
those events. On this picture, the regularities we find in the world
are explained by underlying facts about laws. A common argument
against Humeanism is that Humeans can't account for the explanatory
role of laws: if laws are just regularities, then laws can't really
explain the regularities, so the charge goes, since nothing can
explain itself.
In discussions of the raven paradox,
it is generally assumed that the (relevant) information gathered from an
observation of a black raven can be regimented into a statement of the
form Ra & Ba ('a is a raven and a is
black'). This is in line with what a lot of "anti-individualist" or
"externalist" philosophers say about the information we acquire
through experience: when we see a black raven, they claim, what we
learn is not a descriptive or general proposition to the effect that
whatever object satisfies such-and-such conditions is a black raven,
but rather a "singular" proposition about a particular object --
we learn that this very object is black and a raven. It seems
to me that this singularist doctrine makes it hard to account for many
aspects of confirmation.
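For concreteness, here is a toy Bayesian rendering of the standard regimentation: two objects, each either a raven or not and either black or not, with a uniform prior over the sixteen resulting worlds (all invented for illustration). On this toy model, the singular evidence Ra & Ba confirms 'all ravens are black'.

```python
# H is 'all ravens are black'; E is the singular evidence Ra & Ba.
from itertools import product

objects = ["a", "b"]
# A world assigns each object a pair (is_raven, is_black).
worlds = list(product(product([True, False], repeat=2), repeat=len(objects)))
prior = 1 / len(worlds)

def all_ravens_black(world):
    return all(is_black for (is_raven, is_black) in world if is_raven)

def evidence(world):
    is_raven_a, is_black_a = world[0]      # object a is the first coordinate
    return is_raven_a and is_black_a       # Ra & Ba

p_h = sum(prior for w in worlds if all_ravens_black(w))
p_e = sum(prior for w in worlds if evidence(w))
p_h_and_e = sum(prior for w in worlds if all_ravens_black(w) and evidence(w))

print("P(H) =", p_h)                          # 9/16
print("P(H | Ra & Ba) =", p_h_and_e / p_e)    # 3/4, so the observation confirms H
```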
Take the usual language of first-order logic from introductory
textbooks, without identity and function symbols. The vast majority of
sentences in this language are satisfied in models with very few
individuals. You even have to make an effort to come up with a sentence
that requires three or four individuals. The task is harder if you
want to come up with a fairly short sentence. So I wonder: for any given
number n, what is the shortest sentence that requires n individuals?
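The question is at least easy to experiment with. Here is a rough brute-force Python sketch (my own encoding, nothing canonical): it searches all interpretations of a single binary predicate over domains of increasing size, and tests the sentence ∀x¬Rxx ∧ ∃x∃y∃z(Rxy ∧ Ryz ∧ Rxz), which can't be satisfied with fewer than three individuals.

```python
from itertools import product

def satisfiable_in_size(sentence, n):
    """Check whether `sentence` is true under some binary relation R on {0,...,n-1}."""
    domain = range(n)
    pairs = list(product(domain, repeat=2))
    # Try every possible extension of a single binary predicate R.
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if sentence(domain, R):
            return True
    return False

# Example: 'R is irreflexive, and some x, y, z form an R-triangle':
#   Ax ~Rxx  &  ExEyEz (Rxy & Ryz & Rxz)
# Without identity, this forces at least three individuals.
def triangle(domain, R):
    irreflexive = all((x, x) not in R for x in domain)
    chain = any((x, y) in R and (y, z) in R and (x, z) in R
                for x in domain for y in domain for z in domain)
    return irreflexive and chain

for n in range(1, 5):
    print(n, satisfiable_in_size(triangle, n))   # False, False, True, True
```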