In the last entry, I suggested that
(EEP) P_2(A) = P_1(+A | +E)
is a sensible rule for updating self-locating beliefs. Here, E is the
total evidence received at time 2 (the time of P_2), and '+' denotes a
function that shifts the evaluation index of propositions, much like
'in 5 minutes': '+A' is true at a centered world w iff A is true at
the next point from w where new information is received. (EEP) therefore
says that upon learning E, your new credence in any proposition A
should equal your previous conditional credence that A will obtain at the next
point when information comes in, given that this
information is E.
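To make the rule concrete, here is a minimal sketch in Python (the scenario, labels and numbers are my own invention, purely for illustration): credences are defined over centered worlds, the '+' operator shifts evaluation to the next point at which information arrives, and (EEP) then amounts to conditioning on the shifted propositions.

```python
# A toy model of the (EEP) rule P_2(A) = P_1(+A | +E). Centered worlds are
# (world, time) pairs, propositions are predicates on centered worlds, and
# '+' shifts evaluation to the next point at which new information arrives.

# Three uncentered worlds: whether it rains at t2 and what the weather
# report received at t2 says.
WORLDS = {
    'rain_report-rain': 0.3,   # it rains, report says rain
    'dry_report-dry':   0.6,   # dry, report says dry
    'dry_report-rain':  0.1,   # dry, but the report (wrongly) says rain
}

# At time 1 (the time of P_1) the agent's credence sits on the t1 centers.
P1 = {(w, 't1'): p for w, p in WORLDS.items()}

def next_info_point(cw):
    """The next centered point (from cw) at which information arrives:
    here, simply the same world at t2, when the report comes in."""
    world, _time = cw
    return (world, 't2')

def shift(prop):
    """'+prop': true at a center cw iff prop is true at next_info_point(cw)."""
    return lambda cw: prop(next_info_point(cw))

def P2(A, E, P1=P1):
    """(EEP): the new credence in A is the old credence in +A given +E."""
    plus_A, plus_E = shift(A), shift(E)
    p_E  = sum(p for cw, p in P1.items() if plus_E(cw))
    p_AE = sum(p for cw, p in P1.items() if plus_A(cw) and plus_E(cw))
    return p_AE / p_E

raining     = lambda cw: cw[0].startswith('rain') and cw[1] == 't2'
report_rain = lambda cw: cw[0].endswith('report-rain') and cw[1] == 't2'

print(round(P2(raining, report_rain), 3))   # 0.3 / (0.3 + 0.1) = 0.75
```

With these made-up numbers, learning that the report says rain moves the credence in rain from 0.3 to 0.75, exactly as P_1(+A|+E) prescribes.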
I've been participating in a couple of workshops here at ANU lately,
and I thought I'd share some notes. First, we had a little Sleeping Beauty workshop where Terry Horgan
and Mike Titelbaum defended thirding and I defended halfing. Unfortunately, I
think we didn't quite get to the heart of our disagreement. Each of us
said their own thing, without saying enough about what's wrong with
the reasoning of the other side. So I'll do that here. I start with
Terry's account.
We Bayesians are sometimes bugged about ultimate priors: what
probability function would suit a rational agent before the
incorporation of any evidence? The question matters not because anyone
cares about what someone should believe if they popped into existence
in a state of ideal rationality and complete empirical ignorance. It
matters because the answer also determines what conclusions rational
agents should draw from their evidence at any later point in their
lives. Take the total evidence you have had up to now. Given this
evidence, is it more likely that Obama won the 2008 election or that
McCain won it? There are prior probability functions on which your
evidence is a strong indicator that McCain won. Nevertheless, this
doesn't seem like a rational conclusion to draw. So there must be
something wrong with those priors.
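For a concrete (and entirely made-up) illustration of how the choice of priors fixes what the evidence supports, compare two priors conditionalized on the same total evidence: one gives sceptical 'massive deception' worlds negligible weight, the other gives them most of the weight.

```python
# Two ultimate priors over three coarse hypotheses, conditionalized on the
# same total evidence E (your memories, newspapers, etc.). All hypothesis
# labels and numbers are invented for illustration.

HYPOTHESES = ['obama_won', 'mccain_won_deceptive', 'mccain_won_plain']

# How likely your actual total evidence E is, according to each hypothesis.
LIKELIHOOD = {
    'obama_won':            0.9,    # things are roughly as they seem
    'mccain_won_deceptive': 0.9,    # McCain won, but media and memory deceive you
    'mccain_won_plain':     1e-9,   # McCain won and nothing is deceptive
}

REASONABLE_PRIOR = {'obama_won': 0.5, 'mccain_won_deceptive': 1e-6,
                    'mccain_won_plain': 0.5 - 1e-6}
PERVERSE_PRIOR   = {'obama_won': 0.05, 'mccain_won_deceptive': 0.9,
                    'mccain_won_plain': 0.05}

def posterior(prior):
    """Bayes: P(H | E) is proportional to P(H) * P(E | H)."""
    joint = {h: prior[h] * LIKELIHOOD[h] for h in HYPOTHESES}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

for name, prior in [('reasonable', REASONABLE_PRIOR), ('perverse', PERVERSE_PRIOR)]:
    post = posterior(prior)
    p_mccain = post['mccain_won_deceptive'] + post['mccain_won_plain']
    print(f'{name} prior: P(McCain won | E) = {p_mccain:.3f}')
```

Same evidence, same conditionalization rule, opposite verdicts; so if the perverse verdict is irrational, the fault must lie with the perverse prior.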
Here are some notes on Stalnaker's account of self-locating
beliefs, in chapter 3 of Our Knowledge of the Internal
World. I find the discussion there somewhat opaque, so I'll
start with a presentation of what I take to be Stalnaker's account,
but in my own words. This will lead to a few objections further
down.
We start with extreme haecceitism. Every material object and every
moment in time has, in addition to its normal, qualitative properties,
a further non-qualitative property, its 'haecceity', which distinguishes
it from everything else. My haecceity belongs to me with metaphysical
necessity, and could not belong to anyone else. Moreover, it is my
only (non-trivial) essential property. (This is the 'extreme' part of
extreme haecceitism.) In this world, I am a human being, but in other
worlds, I am a cockatoo, or a poached egg. My haecceity is freely
combinable with any qualitative property.
Stalnaker holds a combination of views that seem independent to me but
that are closely connected for him. One is a kind of reductive naturalism about
intentionality. On this view, the point of attributing beliefs and desires is to
give a high-level characterisation of the subject's behavioural
dispositions, their functional architecture and their causal relations
to the environment. Another of Stalnaker's views is externalism about mental
content. This says that intentional characterisations are relational:
even when two subjects are perfect intrinsic and functional
duplicates, they may still differ in their beliefs and desires,
depending on what objects and properties they are causally related to.
Continuing the topic of the last post, suppose I'm certain that no-one
else in the history of the universe ever had (or will have) exactly the experiences
that I have now. Then I can 'translate' any centered proposition into
an uncentered proposition in such a way that the translation is certain to preserve
truth-values. For instance, "it is raining" gets translated into
"it is raining at all times and places where someone has such-and-such
experiences". In this case, one might think, purely centered information can never
affect my uncentered beliefs. For purely centered information only distinguishes
between multiple centers within a single world; but if no world has multiple
possible centers, then there is nothing to learn from such information.
(This line of reasoning is related to what Mike Titelbaum says
in his forthcoming paper "The Relevance of Self-Locating Beliefs", though I
don't think Mike would endorse the argument I present here.)
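Here is a small sketch of the translation move on a made-up toy model (my own illustration, not Titelbaum's formulation): given the assumed certainty that no one else ever has exactly my current experiences, each doxastically possible world contains at most one center with that experience profile, and the uncentered translation agrees with the centered original at that center.

```python
# Toy model of the 'translation' of centered into uncentered propositions.
# A centered world is (world, center index); each center has an experience
# profile and a local weather fact. The model and labels are invented.

E_STAR = 'such-and-such experiences'   # my exact current experiences

# world -> list of centers, each an (experiences, raining_here_now) pair.
# By assumption, no world contains more than one E_STAR center.
WORLDS = {
    'w1': [(E_STAR, True),  ('other experiences', False)],
    'w2': [(E_STAR, False), ('other experiences', True)],
}

def raining(world, i):
    """The centered proposition 'it is raining', evaluated at center i of world."""
    return WORLDS[world][i][1]

def translated_raining(world):
    """The uncentered translation: 'it is raining at all times and places
    where someone has such-and-such experiences'."""
    return all(rain for exp, rain in WORLDS[world] if exp == E_STAR)

# The translation is guaranteed to match the original at every E_STAR center:
for world, centers in WORLDS.items():
    for i, (exp, _rain) in enumerate(centers):
        if exp == E_STAR:
            assert raining(world, i) == translated_raining(world)
print('translation and original agree at every center with my experiences')
```

And since no world here contains more than one E_STAR center, a piece of purely centered information, one that only distinguishes centers within a single world, rules nothing out: it cannot shift any probability between w1 and w2.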
Dark clouds are gathering. Soon it will be raining. When it does, I will
believe that it is raining. I do not yet believe that it
is raining even though I do believe that my well-informed future self
will believe that it is raining. I thereby violate the 'Principle of
Reflection'. Once we allow for centered propositions that change their truth-value between times
and places, Reflection, like its close cousin Conditioning, becomes a
very implausible norm of rationality.
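For reference, here is one standard way of stating the relevant instance of Reflection (formulations of the principle vary):

(Reflection) P_now(A | P_later(A) = 1) = 1.

With A = "it is raining", I am now certain that my later, better-informed self will fully believe A, so Reflection demands that I fully believe A already; but since A is a centered proposition evaluated at the present time and place, a low credence in A is clearly the rational attitude.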
The OPP feed has been a bit bumpy recently because I made a few changes to the code. Things should be running smoothly again from today. The biggest changes are a bit of OCR to parse scanned documents (about 25% of all PDFs) and an improved algorithm to detect authors and co-authors. If you have contacted me about anything else, that has also been fixed.
I've finally finished rewriting my paper on self-locating belief dynamics. Here is the new version. The presentation is quite different from before. In particular, I no longer use transition probabilities in modeling Cartesian agents, which allows me to be more specific about what these probabilities are. There's also a new section where I try to show that traditional arguments in support of conditioning turn into arguments for my revised rule when self-locating beliefs are considered.
Speaking of chapter six, Williamson here argues that the sentence
(1) If an animal escaped from the zoo, it would be a monkey
is not adequately formalized as
(1') ∀x((x is an animal and x escaped from the zoo) □→ x is a monkey)
on the grounds that according to (1'), even the elephants are such that they would be monkeys if they escaped from the zoo. Williamson suggests that an adequate formalization might rather go like this: