When centering matters
Dark clouds are gathering. Soon it will be raining. When it does, I will believe that it is raining. I do not yet believe that it is raining, even though I do believe that my well-informed future self will believe that it is raining. I thereby violate the 'Principle of Reflection', which, roughly, says that if I'm sure my better-informed future self will believe something, I should believe it now. Once we allow for centered propositions that change their truth-value between times and places, Reflection and its close cousin Conditioning become very implausible norms of rationality.
The problem isn't hard to see, nor is it new. So how come philosophers mostly ignore it? How come Conditioning and Reflection are so often discussed as if such cases didn't refute them?
Part of the explanation might be that philosophers in general don't believe in centered propositions. However, whether or not we call these things 'propositions' is beside the point. The question is whether rational degrees of belief are a matter of uncentered content alone, or whether they are also a matter of, say, centered 'modes of presentation' of the content -- if the latter, that is enough to break Conditioning and Reflection. And it certainly seems as if 'essentially indexical' beliefs come in degrees just like other beliefs, and that they can be supported by essentially indexical evidence. When I receive evidence that it is me who is making the mess, or that my pants are on fire, or that I'm lost in the Stanford library, my credence in something goes up, and this something is not a mere uncentered proposition. It is hard to believe that most philosophers would want to deny that. So there must be more to the explanation.
The other part of the explanation is presumably that the above problem only arises in specific cases which philosophers have mostly ignored. In particular, it seems to arise only when some relevant propositions are about to change their truth-value. When we consider the probability of a certain biological theory in the light of a new discovery about protein structures, neither the theory nor the evidence is likely to change its truth-value overnight. So maybe here the problem can be safely ignored.
It wouldn't help if the problem only affected centered propositions: almost anything we can say or think is, in the relevant sense, essentially centered. For instance, the theory that water expands when it freezes is probabilistically sensitive to centered information about the watery stuff in our surroundings, and less so to uncentered information about watery stuff somewhere in the universe. In addition, our theories are often restricted to nearby regions of the world; they are not meant to apply in far-away galaxies, in black holes, or right after the Big Bang. Fortunately, we don't need completely uncentered propositions to avoid the problems. It is enough if all relevant propositions are guaranteed not to change their truth-value in the near future. Call such propositions 'stable'.
Now what are the relevant propositions? For Reflection, it is whatever proposition the Principle is applied to. For Conditioning, we have two parameters: theory and evidence. And while many theories are stable, our total evidence is always unstable. Right now you're looking at the first half of this sentence; now you're looking at the second half. So we can't very well ignore all cases where we receive unstable evidence.
But perhaps the unstable aspects of our evidence somehow don't matter. Perhaps stable propositions are never sensitive to unstable parts of our evidence?
Let S(E) be the stable part of E: the strongest stable proposition entailed by E. If E says that it is raining, S(E) says that it is raining at some point in the near past or future. Our question is whether, for stable A, the probability of A given E always equals the probability of A given S(E).
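To make the question precise, here is a minimal toy model (the two-world, two-time setup is just for illustration, and 'near past or future' is simplified to the whole short timeline):

```python
# Toy model: a centered proposition is a set of (world, time) pairs.
# "Near past or future" is simplified to the whole (short) timeline, so a
# proposition is stable iff its truth-value never changes within a world.

WORLDS = ["w1", "w2"]
TIMES = [0, 1]

def is_stable(prop):
    # Stable: in each world, true at all times or at none.
    return all(len({(w, t) in prop for t in TIMES}) == 1 for w in WORLDS)

def S(prop):
    # The stable part of prop: the strongest stable proposition entailed
    # by prop -- true at (w, t) iff prop is true at some time in w.
    worlds_where_true = {w for (w, t) in prop}
    return {(w, t) for w in worlds_where_true for t in TIMES}

E = {("w1", 0)}            # E: "it is raining (now)", true in w1 at time 0 only
print(is_stable(E))        # False: E changes its truth-value in w1
print(S(E))                # {('w1', 0), ('w1', 1)}: "it rains at some point"
print(is_stable(S(E)))     # True
```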
We could ask a parallel question about centered and uncentered propositions: are uncentered propositions ever sensitive to the centered aspects of our evidence? The answer is yes. If theory A says that observation E is made only once in the history of the universe, and theory B says it is made all the time, then E supports B more strongly than A, even though the uncentered fragment of E is neutral between the two.
This also applies to stability: if A says that it thunders exactly once in the near past or future, while B says it thunders all the time, then evidence of present thunder supports B more strongly than A (note that P(thunder|B) = 1 and P(thunder|A) < 1, so by Bayes' Theorem, P(B|thunder) = P(B)/P(thunder) while P(A|thunder) < P(A)/P(thunder)); but the stable fragment of this evidence does not.
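A quick numerical check, with made-up numbers (the equal priors and the 0.2 likelihood of present thunder under A are arbitrary choices):

```python
# A: it thunders exactly once in the near past or future.
# B: it thunders all the time.
P_A, P_B = 0.5, 0.5              # assumed priors
P_thunder_given_A = 0.2          # assumed: one brief episode, so thunder
                                 # right now is unlikely under A
P_thunder_given_B = 1.0          # B guarantees present thunder

P_thunder = P_A * P_thunder_given_A + P_B * P_thunder_given_B  # 0.6
print(P_A * P_thunder_given_A / P_thunder)   # P(A|thunder) ~ 0.167
print(P_B * P_thunder_given_B / P_thunder)   # P(B|thunder) ~ 0.833

# The stable fragment S(E) -- "it thunders at some point in the near past
# or future" -- is entailed by both A and B, so it leaves the priors alone:
P_SE_given_A = P_SE_given_B = 1.0
P_SE = P_A * P_SE_given_A + P_B * P_SE_given_B   # 1.0
print(P_A * P_SE_given_A / P_SE)   # P(A|S(E)) = 0.5, unchanged
```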
A different type of case where unstable evidence affects stable propositions is this. Imagine that a fair coin toss decides whether you will fission into two persons tonight. If the coin lands tails, you will fission: one of your successors will wake up at home, the other in a lab. On heads, nothing happens and you'll wake up at home. When you wake up, you should -- before opening your eyes -- give some credence to being in the lab. Finding yourself at home rules out this possibility, which is a tails possibility, and therefore supports heads. Heads is stable and supported by your evidence E, but it is not supported by S(E), the proposition that at some point in the near past or future, one of your successors or predecessors wakes up at home.
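Here is the arithmetic, on the (contestable) assumption that you split your tails-credence equally between the two successors:

```python
# On waking (eyes closed), three centered possibilities remain.
cred = {
    ("heads", "home"): 0.5,   # no fission, you wake at home
    ("tails", "home"): 0.25,  # one successor wakes at home
    ("tails", "lab"):  0.25,  # the other wakes in the lab
}

def P(pred):
    return sum(p for k, p in cred.items() if pred(k))

P_home = P(lambda k: k[1] == "home")                          # 0.75
P_heads_given_home = P(lambda k: k == ("heads", "home")) / P_home
print(P_heads_given_home)   # 2/3 > 1/2: finding yourself at home supports heads

# S(E) -- "at some point, someone wakes up at home" -- is true on heads
# and on tails alike, so conditioning on it leaves P(heads) at 1/2.
```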
Here is a third type of case. Theory A says it rains on Sunday, but not on Monday. Theory B says it rains on Monday, but not on Sunday. Without having any relevant evidence, you're fairly confident that it is Sunday. You open the window blinds and see that it is raining. Your evidence supports theory A; the stable part of your evidence does not.
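With made-up numbers -- equal priors for A and B, and 0.9 credence that it is Sunday -- the effect is easy to compute:

```python
# Credence over (theory, day) pairs; the day is independent of the theory.
cred = {}
for theory, p_t in [("A", 0.5), ("B", 0.5)]:
    for day, p_d in [("Sun", 0.9), ("Mon", 0.1)]:
        cred[(theory, day)] = p_t * p_d

def rains_now(k):
    theory, day = k
    # A: rain on Sunday only; B: rain on Monday only.
    return (theory == "A") == (day == "Sun")

P_rain = sum(p for k, p in cred.items() if rains_now(k))   # 0.5
P_A_given_rain = sum(p for k, p in cred.items()
                     if rains_now(k) and k[0] == "A") / P_rain
print(P_A_given_rain)   # 0.9: seeing rain supports A

# The stable part of the evidence -- "it rains at some point on one of
# the two days" -- is entailed by A and by B alike, so it supports neither.
```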
So there are many ways in which unstable evidence can be relevant to stable propositions. It is not true that for stable A, the probability of A given E always, or even mostly, equals the probability of A given S(E). This means that if we as Bayesians ignore unstable propositions, we'll get the wrong results even when we're only interested in stable theories: we'll miss important parts of our evidence.
Nevertheless, if we're only interested in stable propositions, then there is often no particular problem for Conditioning, as long as we don't ignore unstable evidence. My future credence in it raining does not result from my present credence by conditioning on my future evidence. (Most obviously so if my present credence in rain is zero.) But my future credence in it raining at some point or other may well result from my present credence by conditioning on my future evidence.
But again, there are exceptions, and not only because my present credence in my future evidence may well be zero. Suppose I believe that a guy with a huge umbrella is about to walk past my window. Let A be the proposition that there is rain at some point in the near past or future. When, in the near future, I observe no raindrops outside my window, this will not affect my credence in A, even though my present credence in A conditional on there being no raindrops is low.
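A toy version of this case, with a 0.5 prior for rain and the simplifying assumption that, absent the umbrella, raindrops are visible exactly if it rains:

```python
prior = {"rain": 0.5, "dry": 0.5}   # assumed prior; "rain" stands in for A

def drops_visible(weather, time):
    if time == "t1":
        return False          # the umbrella man is passing: no drops visible
    return weather == "rain"  # at t0 (now), drops are visible iff it rains

# E = "no raindrops outside my window (now)" -- a centered proposition.
# My present (t0-centered) credence in rain conditional on E is low:
P_E_t0 = sum(p for w, p in prior.items() if not drops_visible(w, "t0"))
print(sum(p for w, p in prior.items()
          if w == "rain" and not drops_visible(w, "t0")) / P_E_t0)  # 0.0

# But E will be my evidence at t1, when it is true whatever the weather,
# so actually making the observation leaves my credence in rain untouched:
P_E_t1 = sum(p for w, p in prior.items() if not drops_visible(w, "t1"))
print(sum(p for w, p in prior.items()
          if w == "rain" and not drops_visible(w, "t1")) / P_E_t1)  # 0.5
```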
Conditioning is only safe if my present credence in A conditional on the relevant parts of my future evidence E equals my present credence in A conditional on E being my future evidence (in the jargon of my update paper, if P(A|E) = P(A<E)). This is guaranteed if E is stable, but it also holds in many other, more common situations.