I'm certain that I went by the Mountains
(This is more or less the talk I gave at the "Epistemology at the Beach" workshop last Sunday.)
"A wise man proportions his belief to the evidence", says Hume. But to what evidence? Should you proportion your belief to the evidence you have right now, or does it matter what evidence you had before? Frank Arntzenius ("Some problems for conditionalization and reflection", JoP, 2003) tells a story that illustrates the difference:
...there is an ancient law about entry into Shangri La: you are only allowed to enter, if, once you have entered, you no longer know by what path you entered. Together with the guardians you have devised a plan that satisfies this law. There are two paths to Shangri La, the Path by the Mountains, and the Path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if heads you go by the Mountains, if tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will for ever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But just as you enter Shangri La, your memory of this Beauteous Journey will be erased and replaced by a memory of the Journey by the Mountains.
Suppose that in fact you travel by the Mountains. How will your degrees of belief develop?
Since you know that the coin is fair, you will at first assign credence 1/2 to heads. Later, when you've seen the coin land heads and have started your trip by the Mountains, you will be certain that the coin has landed heads, and that you are going by the Mountains. What happens when you arrive at Shangri La? Arntzenius argues that your credence in heads should go back to 1/2 (and he seems to have convinced everybody in the literature). Here is his argument.
...once you have arrived, you will revert to having degree of belief 1/2 in heads. For you will know that you would have had the memories that you have either way, hence you know that the only relevant information that you have is that the coin was fair.
By 'information', I suppose Arntzenius means what I would call 'evidence': the total information that is available to you from experience, introspection and memory. Given your background knowledge about the setup, this information indeed doesn't help you to determine whether the coin landed heads or tails: the probability for ending up with your present evidence is the same either way, hence the evidence lends no support to heads or tails.
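(To make this point concrete, here is the Bayesian bookkeeping in a few lines of Python. The function name and the way the numbers are plugged in are my own; the only substantive input comes from the story, namely that you end up with the same apparent memories whether the coin landed heads or tails.)

```python
# Sketch of the point above: if the present evidence E (your apparent
# Mountain memories) is equally likely given heads and given tails,
# conditionalizing on E alone leaves the fair-coin prior of 1/2 untouched.

def posterior_heads(prior_heads, p_e_given_heads, p_e_given_tails):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_heads * prior_heads + p_e_given_tails * (1 - prior_heads)
    return p_e_given_heads * prior_heads / p_e

# By the setup, the memories you have at Shangri La arise either way:
print(posterior_heads(0.5, 1.0, 1.0))  # -> 0.5
```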
But it doesn't follow that your credence in heads should revert to 1/2. This follows only by the
Present Evidence Principle: what you should believe at a certain time is a matter of the evidence you have at that time.
I believe this principle is false, or at least not determinately true. What's true is this weaker principle:
Evidence Principle: what you should believe at a certain time is a matter of the evidence you have at that or earlier times.
The two principles come apart when evidence is lost, i.e. when one has evidence for a certain proposition at an earlier time, but no longer has evidence for it later. Such cases are rare in everyday life because we can to some extent figure out our previous attitudes by introspection. For instance, I've forgotten how I came to believe that carbon has atomic number 6; I've lost whatever evidence I originally had for this. But I have new evidence for it now: my inclination to judge that carbon has atomic number 6. This is evidence that I once learned that proposition, and thereby evidence for its truth.
To distinguish between the Present Evidence Principle and the weaker Evidence Principle, we have to look at somewhat unusual cases where this kind of introspective evidence is useless or unavailable. In the Shangri La case, it is useless. Perhaps even more telling are cases where it is unavailable.
Imagine you're shopping for a robot to pick up your tennis balls and put them into green baskets. I have two models on offer: the Cartesian model and the Conservative. Both have sensory devices by which they collect information about their environment, and a little register that stores the relative location of yellow balls and green baskets in their surroundings. This register determines the robots' movements. Neither model has an internal sensory system by which it could introspect its prior inclinations to make judgments or the like. The difference between the two models is how the register gets updated over time. The Conservative model leaves information in the register until it encounters evidence against what is stored there. The Cartesian erases its register at every instant and rewrites it with the information it gets from its sensory system at that moment. Thus when the Cartesian spots a basket in the corner, it will register this fact, but as soon as it turns around to pick up a ball, the information gets erased as it is no longer supported by the newly available evidence.
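(For concreteness, here is a toy sketch of the two update policies in Python. The class names, the dictionary representation of the register, and the 'contradicted' parameter are my own stand-ins, not part of the example as described.)

```python
# Toy sketch of the two update policies. The register is modelled as a
# dict from object labels to their last known relative locations.

class ConservativeRobot:
    def __init__(self):
        self.register = {}

    def update(self, observations, contradicted=()):
        # Keep old entries unless new evidence speaks against them.
        for label in contradicted:
            self.register.pop(label, None)
        self.register.update(observations)

class CartesianRobot:
    def __init__(self):
        self.register = {}

    def update(self, observations, contradicted=()):
        # Wipe the register and rewrite it from current evidence only.
        self.register = dict(observations)

# The robot spots a basket in the corner, then turns away to pick up a ball:
conservative, cartesian = ConservativeRobot(), CartesianRobot()
for robot in (conservative, cartesian):
    robot.update({"basket": "corner"})
    robot.update({"ball": "straight ahead"})

print(conservative.register)  # {'basket': 'corner', 'ball': 'straight ahead'}
print(cartesian.register)     # {'ball': 'straight ahead'}: the basket is forgotten
```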
The Conservative model will obviously do its task much better than the Cartesian. Is this because the setup favours irrational agents, as when a powerful predictor rewards people for making irrational choices? I don't think so. In just about any reasonable situation, the Conservative model will end up with a better representation of its environment, and be more successful. I don't want to say that the Cartesian way is definitely irrational. All I want to say is that the Conservative way is not irrational either. This is enough to show that the Present Evidence Principle is false.
Now return to Shangri La, where, unlike the robots, you have the relevant introspective evidence about your previous evidence, but don't trust it. The Present Evidence Principle then advocates resetting your credences to 1/2, as if you had never learned which way you traveled. The Evidence Principle would allow you to remain confident that you went by the Mountains. Is there something to be said for or against these options?
I think one can make a case that it is at least not irrational to remain confident in the Mountain possibility (and again, that's enough to undermine the Present Evidence Principle). To be sure, it would be irrational to trust your episodic Mountain memories once you arrive at Shangri La, knowing that you would have them either way. I don't say that you should infer from the evidence available to you at this point that you probably went by the Mountains. That would be to follow the Present Evidence Principle. The proposal at issue is that you may remain confident that you went by the Mountains despite the fact that your present evidence is neutral on this matter.
Why would this be rational? Because, intuitively, one shouldn't radically change one's mind about something unless one receives evidence that is relevant to it. As we saw, the evidence you receive at Shangri La is irrelevant to whether or not you went by the Mountains: the conditional probability for the evidence is the same no matter which way you traveled. In fact, we can assume that before arriving, you knew exactly what experiences you would have at Shangri La. At this point, you were still certain that the coin had landed heads and that you were going by the Mountains. Once you arrive, you have the experiences you expected anyway. You learn nothing about the world you didn't already know before. Therefore you shouldn't change your mind about what the world is like.
Of course, if you are convinced of the Present Evidence Principle, these considerations will not move you. The Principle entails that one sometimes ought to radically change one's mind even when one receives no relevant evidence and doesn't learn anything new. As always, my modus tollens will be your modus ponens. But principles of epistemic rationality don't fall from the sky. If the Present Evidence Principle is correct, there must be a good reason for it, and I doubt that there is. The ultimate epistemic goal is truth, and agents who follow the Present Evidence Principle are bad truth-trackers; they constantly lose valuable information.
hey wo,
it's not clear to me how your robot example is supposed to help show that the present evidence principle is false. why isn't the information currently stored in the conservative's register (but acquired earlier) present evidence?