Belief update: shifting, pushing, and pulling
It is widely agreed that conditionalization is not an adequate norm for the dynamics of self-locating beliefs. There is no agreement on what the right norms should look like. Many hold that there are no dynamic norms on self-locating beliefs at all: on that view, an agent's self-locating beliefs at any time are determined by the agent's evidence at that time, irrespective of her earlier self-locating beliefs. I want to talk about an alternative approach that assumes a non-trivial dynamics for self-locating beliefs. The rough idea is that, as time goes by, a belief that it is Sunday should somehow turn into a belief that it is Monday.
Again, we can distinguish at least two approaches, which I will call "pulling" accounts and "pushing" accounts.
Let's start with "pulling" accounts. Suppose you know that your earlier credence function was Cr, and that a certain time interval i has passed since the time of your earlier credence. Pulling accounts say that your new credence should then equal Cr, shifted by i, and then conditionalized on your new evidence. Informally speaking, the shifting operation moves the centre of each possible world in your doxastic space forward by i. More precisely, if we model centred worlds as triples of an uncentred world, an individual, and a time, then the shifted probability of any centred world (w,j,t) equals the unshifted probability of the corresponding "earlier" world (w,j,t-i). For example, if Cr is shifted by 24 hours, then the new credence in "it is Monday" equals the old credence in "it is Sunday".
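To make the shifting operation concrete, here is a minimal sketch in Python. The dictionary representation of a credence function over (world, individual, time) triples, the function name shift, and the toy numbers are illustrative choices of mine, not part of any particular pulling account.

```python
def shift(cr, interval):
    """Shift a credence function forward by a fixed time interval:
    the new probability of (w, j, t) is the old probability of (w, j, t - interval)."""
    return {(w, j, t + interval): p for (w, j, t), p in cr.items()}

# Toy example: certain that it is Sunday (hour 0), unsure whether it is raining.
cr = {("rain", "me", 0): 0.3, ("dry", "me", 0): 0.7}

print(shift(cr, 24))
# {('rain', 'me', 24): 0.3, ('dry', 'me', 24): 0.7}
# The new credence in "it is Monday" (hour 24) equals the old credence in "it is Sunday".
```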
What if you're not sure how much time has passed? Some pulling accounts fall silent here. However, we can naturally extend the above idea to cases where you have some probability distribution P over possible time intervals. To shift Cr by P, the initial credence of any centred world (w,j,t) is divided over all worlds (w,j,t+i) in proportion to P(i). More precisely, the new probability of any world (w,j,t) equals the sum of Cr((w,j,t'))P(i) over all worlds (w,j,t') and intervals i such that t'+i=t. (The operation may be familiar under the name "generalized imaging".) For example, if Cr gives probability 1 to "it is Sunday" and P(24 hours) = 0.8 and P(48 hours) = 0.2, then the shifted credence of "it is Monday" is 0.8.
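The generalized operation fits the same sketch. Again the representation and the example numbers are mine; the code just implements the sum defined above.

```python
from collections import defaultdict

def shift_by_distribution(cr, p_interval):
    """Shift a credence function by a probability distribution over time intervals:
    the old probability of (w, j, t) is divided over the worlds (w, j, t + i)
    in proportion to p_interval[i]."""
    new_cr = defaultdict(float)
    for (w, j, t), prob in cr.items():
        for i, p_i in p_interval.items():
            new_cr[(w, j, t + i)] += prob * p_i
    return dict(new_cr)

# Certain that it is Sunday (hour 0); 80% that 24 hours pass, 20% that 48 hours pass.
cr = {("w", "me", 0): 1.0}
print(shift_by_distribution(cr, {24: 0.8, 48: 0.2}))
# {('w', 'me', 24): 0.8, ('w', 'me', 48): 0.2}  -- shifted credence 0.8 that it is Monday
```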
According to pulling accounts, the update process for self-locating beliefs can thus be divided into three steps. First, you need to figure out how much time might have passed since the time of the earlier credence. Perhaps an "inner sense of time" will help with that. Second, you use this information to shift the earlier credence function. Third, you conditionalize the resulting function on your new evidence.
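In code, the three steps might be composed as follows, reusing shift_by_distribution from above. The sketch takes the output of the first step, your estimate of how much time has passed, as a given distribution, and models conditionalization as restricting to the worlds compatible with the evidence (represented as a predicate on centred worlds) and renormalizing.

```python
def conditionalize(cr, evidence):
    """Restrict a credence function to the centred worlds compatible with the
    evidence and renormalize."""
    compatible = {cw: p for cw, p in cr.items() if evidence(cw)}
    total = sum(compatible.values())
    return {cw: p / total for cw, p in compatible.items()}

def pulling_update(cr, p_interval, evidence):
    """Pulling update: shift the earlier credence by the estimated elapsed time,
    then conditionalize on the new evidence."""
    return conditionalize(shift_by_distribution(cr, p_interval), evidence)
```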
On "pushing" accounts, the shifting step is determined not by the later stage but by the earlier stage. (Hence my labels.) To see how this works, we first need to clarify a point that pulling accounts tend to ignore.
We're interested in dynamic norms relating earlier belief states to later belief states. In this context, we can't assume that the two belief states are arbitrarily far apart: if Cr is an agent's credence at t1, Cr' is her credence at t3, and there's an intermediate time t2 at which the agent receives important evidence, then Cr' should obviously not result from Cr merely by conditionalizing on the evidence received at t3, for that would ignore the important intermediate evidence received at t2. Dynamic norms such as conditionalization therefore apply in the first place only to temporally adjacent stages, although we can of course skip stages at which no relevant information arrives.
So we can assume that the relevant centred worlds (w,j,t) can be ordered by a relation of epistemic successorhood: (w,j,t') is the epistemic successor of (w,j,t) iff t' is the next point after t in the epistemic history of agent j in world w.
The epistemic successor relation allows us to define a shifting operation on credence functions that does not require an externally given interval of time. To shift Cr by one step, simply move the probability of any world (w,j,t) to its epistemic successor (w,j,t').
On pushing accounts, the full update process thus has only two steps: first, your old credence function is shifted by one step; then the shifted credences are conditionalized on your new evidence.
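Here is a sketch of the pushing update in the same toy framework, using the conditionalize function from above. The epistemic successor relation is passed in as a function on centred worlds; in a realistic model it would be fixed by the worlds themselves rather than supplied by hand.

```python
from collections import defaultdict

def shift_one_step(cr, successor):
    """Shift a credence function by one step: move the probability of each
    centred world to its epistemic successor."""
    new_cr = defaultdict(float)
    for cw, p in cr.items():
        new_cr[successor(cw)] += p
    return dict(new_cr)

def pushing_update(cr, successor, evidence):
    """Pushing update: shift the earlier credence one step along the epistemic
    successor relation, then conditionalize on the new evidence."""
    return conditionalize(shift_one_step(cr, successor), evidence)
```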
A nice feature of pushing accounts is that all evidence is treated alike, and simply by conditionalization. There's no special rule for evidence about how much time has passed. You may well have an inner sense of time, but that is handled just like your other senses.
More seriously, pulling accounts go wrong in the following kind of case.
Suppose you're about to be put into a coma for emergency medical treatment. You are 90% confident that you'll be awakened after one week, and 10% confident that you'll already be awakened after a day. Your inner sense of time is not attuned to comas, so upon awakening (absent relevant further evidence) it strongly suggests to you that no more than one or two days have passed. You knew all along that you would have this sensation. So what you should do is ignore it and be 90% confident that you were asleep for a week. That is, your previous beliefs should be shifted in accordance with your previous opinions about how much time is going to pass, not by your later sense of how much time has passed.
Let's verify that pulling accounts go wrong here. Your earlier credence is divided 90/10 between "week worlds" where you're about to be in a coma for a week and "day worlds" where the coma lasts only a day. Let's say that upon awakening your sense of time gives 90% probability to the hypothesis that one day has passed, 5% to the hypothesis that 2 days have passed, and 1% to the hypothesis that a week has passed (the remaining 4% being spread over other intervals, which won't matter here). In the shifting step, we redistribute the earlier probability of each world (w,j,t) in such a way that 90% comes to lie on (w,j,t+1), 5% on (w,j,t+2), and 1% on (w,j,t+7). Note that if (w,j,t) is a week world, then (w,j,t+1) is a situation where you're still in a coma, as is (w,j,t+2). So once you conditionalize on the information that you're awake, the probability of (w,j,t+1) and (w,j,t+2) will go down to 0. The problem is that conditionalization not only moves probability around within uncentred worlds but also between such worlds. Moreover, the shifting step shifts much less probability from week worlds (w,j,t) to their 7-day successors (w,j,t+7) than it shifts from day worlds (w,j,t) to their 1-day successors (w,j,t+1). For example, if (w1,j,t) is a week world with initial probability 0.1, then (w1,j,t+7) gets 1% of that: 0.001; if (w2,j,t) is a day world with initial probability 0.01, then (w2,j,t+1) gets 90% of that: 0.009. Conditionalizing preserves the ratios among worlds compatible with the evidence, so after conditionalization, (w2,j,t+1) will be 9 times more probable than (w1,j,t+7). The net result is that you will become confident that you were only in a coma for a day.
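The numbers can be checked with the sketches from earlier. The encoding below is mine: days since the coma began serve as times, the residual 4% of the sense-of-time distribution is put on "3 days" (its placement doesn't matter, since those worlds are excluded by the evidence), and for the pushing comparison I assume that you have no epistemic stages while comatose, so that the successor of the pre-coma stage is the stage at which you wake up.

```python
# Earlier credence, at day 0, just before going under:
cr = {("day coma", "me", 0): 0.1, ("week coma", "me", 0): 0.9}

# Your sense of time upon awakening, as a distribution over elapsed days:
sense_of_time = {1: 0.90, 2: 0.05, 3: 0.04, 7: 0.01}

# Evidence on awakening: you have just woken up. In a day-coma world that
# happens at day 1, in a week-coma world at day 7.
def just_woke_up(cw):
    w, j, t = cw
    return (w == "day coma" and t == 1) or (w == "week coma" and t == 7)

print(pulling_update(cr, sense_of_time, just_woke_up))
# {('day coma', 'me', 1): 0.909..., ('week coma', 'me', 7): 0.0909...}
# The pulling update leaves you about 91% confident that the coma lasted only a day.

# For comparison, the pushing update, with the waking-up stage as the epistemic
# successor of the pre-coma stage:
def successor(cw):
    w, j, t = cw
    return (w, j, 1) if w == "day coma" else (w, j, 7)

print(pushing_update(cr, successor, just_woke_up))
# {('day coma', 'me', 1): 0.1, ('week coma', 'me', 7): 0.9}
# The pushing update preserves your 90% confidence that the coma lasted a week.
```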
The coma example is not an isolated special case. It only dramatizes something that happens all the time. When people lose track of time, they often have previous opinions about how much time will pass, and those opinions don't always agree perfectly with their later sensations of how much time might have passed.
So pulling accounts are not only more complicated. They are wrong. Nonetheless, they seem to be quite popular. Why is that?
Part of the reason, I suspect, is a tendency to look at belief update from the perspective of the later evidential state. "Here I am", one imagines, "at the later time. I remember my earlier credence function Cr, but I know it's out of date. I need to shift it, but by how much? Let's see what else is given to me. Ahh, I have this sensation that so-and-so much time has passed. That's the missing information."
From this perspective, the situation is analogous to one where you're given the credence function of an expert to whom you would like to defer in every respect. Of course you don't want to simply take over their self-locating beliefs: if they believe that they are hungry, you should come to believe that they are hungry, not that you are hungry. So you need to "shift" or "recentre" their credence function. In order to acquire any non-trivial self-locating opinions, you then need to know how they are related to yourself. For example, if they believe that it is raining, and you have no idea where they are relative to you, then you can't infer whether it is raining where you are, or to your North or East or West, and so on. So it's natural to postulate a pulling-type procedure: first figure out how the expert is related to yourself, then shift their credence function by that relation.
But what if the expert also has opinions about how you are related to them? If it seems to you that you are 10 meters North of the expert, but the expert is confident that nobody is 10 meters North of them, then the shifted expert credence will mostly lie on points where there's no person. You might rule out such points by conditionalizing on the information that you are a person ("cogito"), but the resulting credences will still not respect the expert's opinions about your relative location, for the same reason as above.
Some such distortion is probably unavoidable. The expert's credence function tells you something about the world from their perspective. If you want to endorse their opinions, you need to locate yourself off-centre in those worlds. To that end, you arguably have to appeal to independent self-locating information. How else could you know which of the many agents (and non-agents) inhabiting the worlds in the expert's doxastic space you might be? Even if the expert has strong beliefs about how you are related to them, this doesn't help unless you somehow recognize the person so related as yourself. So you must have independent self-locating information, unless you only want to take over the expert's uncentred beliefs. If you're lucky, the independent self-locating information may perfectly harmonize with the expert's beliefs, but it may well not. If it doesn't, there's no way to fully defer to the expert.
Fortunately, the diachronic situation is a lot easier. It is not analogous to the expert situation. In the diachronic case, the credence function you are given to update is your own previous credence function, not some credence function from an arbitrary earlier time, or from some arbitrary other person. Norms of diachronic rationality pertain in the first place only to successive stages of the same epistemic agent. You don't need any independent evidence that Cr is your previous credence function: it is so by definition. Or rather, it is misleading to say that Cr is somehow "given", as if belief update were an operation performed by the later self on whatever inputs it happens to find. Dynamic norms are norms on how belief states should evolve over time, from one stage of an agent to the next. A sensible update process should exploit that fact.