I finally found the decision theory puzzle that I posted recently in a series of papers by Reed Richter from the mid 1980s. I'm not convinced by Richter's treatment though, and I'm still somewhat puzzled.
Here is Richter's version:
Button: You and another person, X, are put in separate rooms where each of you faces a button. If you both push the button within the next 10 minutes, you will (both) receive 10 Euros. If neither of you pushes the button, you (both) lose 1000 Euros. If one of you pushes and the other one doesn't, you (both) get 100 Euros.
What would you do? Most people, I guess, would push the button. After all, if you don't push it, there is a high risk of losing 1000 Euros: how can you be certain that X won't also refrain from pushing? On the other hand, if you push the button, the worst possible outcome is a gain of 10 Euros.
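For concreteness, here is a quick back-of-the-envelope sketch of the payoff reasoning (in Python; the probability that X pushes is a made-up parameter, not part of Richter's setup):

```python
# Your payoff in the Button case, depending on what X does: both push -> 10,
# neither pushes -> -1000, exactly one pushes -> 100 (each, in Euros).
def my_payoff(i_push, x_pushes):
    if i_push and x_pushes:
        return 10
    if not i_push and not x_pushes:
        return -1000
    return 100

def expected_payoff(i_push, p_x_pushes):
    """Expected payoff given a credence p_x_pushes that X pushes."""
    return (p_x_pushes * my_payoff(i_push, True)
            + (1 - p_x_pushes) * my_payoff(i_push, False))

for p in (0.1, 0.5, 0.9):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# Worst case if you push: a gain of 10; worst case if you don't: a loss of 1000.
```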
Suppose beliefs locate us in centered logical space: to believe something is to rule out not only ways a universe might be, but ways things might be for an individual at a time. Then there are two kinds of rational belief change: we can learn something new about our present situation, and we can change our situation and adjust our beliefs to this change. The rule for changes of the first kind is conditionalization. The rule for changes of the second kind doesn't have an official name yet, as far as I know. (In the AGM/KM framework, it is called "update", but we Bayesians often use "update" for conditioning.) In practice, the two rules almost always go hand in hand: you never learn something new without changing your situation, and you hardly ever change your situation without learning anything new.
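As a rough illustration of the first rule, here is a toy conditionalization over a handful of centered possibilities; the possibilities and their weights are invented for the example:

```python
# Conditionalization: on learning E, the new credence in a possibility w is
# the old credence in w divided by the old credence in E if w is in E, else 0.
def conditionalize(credences, evidence):
    """credences: dict from possibilities to probabilities summing to 1;
    evidence: the set of possibilities compatible with what was learned."""
    p_evidence = sum(p for w, p in credences.items() if w in evidence)
    return {w: (p / p_evidence if w in evidence else 0.0)
            for w, p in credences.items()}

# Toy centered possibilities: pairs of a world and a location within it.
old = {("rain", "here"): 0.3, ("rain", "elsewhere"): 0.2,
       ("sun", "here"): 0.3, ("sun", "elsewhere"): 0.2}
print(conditionalize(old, {("rain", "here"), ("sun", "here")}))
# All weight shifts to the 'here'-centered possibilities.
```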
In this paper, I try to spell out the two rules, and their combination: Believing in afterlife: conditionalization in a changing world (PDF).
I'm a bit unhappy with some parts of the story, and I should probably say more about alternative accounts in the literature, and why I don't like them. So hopefully there will be an update soon. In the meantime, comments are as always very welcome!
Here is a little script I wrote to create DVDs from avi movies with subtitles in vobsub format (.idx and .sub) on Linux: dvdiso. Apparently, DeVeDe can't do that.
Mostly, when we don't believe something, we don't know it either. But arguably not always. The timid student thinks she's merely guessing, while in fact she knows. She knows, but she lacks the confidence required for belief. It would be nice to have an analysis of knowledge that allowed for such cases, but also explained why they are rare.
Lewis's analysis tries to do that. On Lewis's account, you know p iff your evidence rules out every relevant situation where ~p. Among the rules for what counts as 'relevant', the 'rule of belief' tells us that any possibility with non-negligible subjective probability counts as relevant. Now suppose you don't believe p. Then you give non-negligible probability to ~p situations. So you know p only if your evidence rules out all those ~p situations. Moreover, your present evidence 'rules out' a situation iff your evidence in that situation differs from the evidence you actually have. So if you have knowledge without belief, you must assign positive probability to situations in which you have different evidence than you actually have. On a suitable understanding of evidence, those cases will be rare, because we are normally confident that we have the evidence that we have.
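Here is a toy model of that line of reasoning, using only the rule of belief with a made-up threshold for 'non-negligible', so nothing like Lewis's full battery of rules. The point is just that a subject can give substantial credence to ~p possibilities (and so fail to believe p) and still count as knowing p, provided her evidence differs in all of those possibilities:

```python
# Toy version of Lewis's analysis, restricted to the rule of belief.
# A possibility records whether p is true there, what evidence the subject
# has there, and how much credence she gives to it.
NEGLIGIBLE = 0.01   # made-up threshold for 'non-negligible' probability

def knows_p(possibilities, actual_evidence):
    """True iff every relevant (non-negligible) ~p possibility is ruled out,
    i.e. is one where the subject's evidence differs from her actual evidence."""
    for w in possibilities:
        relevant = w["credence"] > NEGLIGIBLE
        ruled_out = w["evidence"] != actual_evidence
        if relevant and not w["p"] and not ruled_out:
            return False
    return True

# The timid student: she gives 0.4 credence to a ~p possibility, so she does
# not believe p; but her evidence differs there, so that possibility is ruled
# out and she still counts as knowing p.
worlds = [
    {"p": True,  "evidence": "e_actual", "credence": 0.6},
    {"p": False, "evidence": "e_other",  "credence": 0.4},
]
print(knows_p(worlds, "e_actual"))   # True: knowledge without belief
```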
This is a follow-up to the previous post on Shangri La. As before, the story is that a fair coin decides which path you take to Shangri La: on heads, you travel by the Mountains, on tails, by the Sea. If you arrive at Shangri La via the Sea, the guardians will replace your Sea memories with Mountain memories.
In the other post, I said that if you actually traveled by the Mountains, you should remain confident that you traveled by the Mountains, even though you would have ended up with the same evidence had you traveled by the Sea.
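To see why this is not obvious, here is a quick sketch (the numbers merely restate the setup): if your new credence were simply your prior conditioned on the memories you have upon arrival, it would stay at 1/2, because both paths produce exactly those memories.

```python
# If the post-arrival credence were just the prior conditioned on the
# post-arrival memories, it could not favour Mountains: both paths lead to
# the very same Mountain-type memories.
prior = {"mountains": 0.5, "sea": 0.5}
likelihood = {"mountains": 1.0, "sea": 1.0}   # Sea memories get replaced

unnormalized = {path: prior[path] * likelihood[path] for path in prior}
total = sum(unnormalized.values())
posterior = {path: weight / total for path, weight in unnormalized.items()}
print(posterior)   # {'mountains': 0.5, 'sea': 0.5}
```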
(This is more or less the talk I gave at the "Epistemology at the Beach" workshop last Sunday.)
"A wise man proportions his belief to the evidence", says Hume. But to what evidence? Should you proportion your belief to the evidence you have right now, or does it matter what evidence you had before? Frank Arntzenius ("Some problems for conditionalization and reflection", JoP, 2003) tells a story that illustrates the difference:
...there is an ancient law about entry into Shangri La:
you are only allowed to enter, if, once you have entered, you no
longer know by what path you entered. Together with the guardians you
have devised a plan that satisfies this law. There are two paths to
Shangri La, the Path by the Mountains, and the Path by the Sea. A fair
coin will be tossed by the guardians to determine which path you
will take: if heads you go by the Mountains, if tails you go by the
Sea. If you go by the Mountains, nothing strange will happen: while
traveling you will see the glorious Mountains, and even after you
enter Shangri La you will for ever retain your memories of that
Magnificent Journey. If you go by the Sea, you will revel in the
Beauty of the Misty Ocean. But just as you enter Shangri La, your
memory of this Beauteous Journey will be erased and replaced by a
memory of the Journey by the Mountains.
This is probably old, so pointers to the literature are welcome. Consider this game between Column and Row:
   |  C1 |  C2 |
R1 | 0,0 | 2,2 |
R2 | 2,2 | 1,1 |
What should Column and Row do if they know that they are equally rational and can't communicate with one another? The game has no unique Nash equilibrium, nor a dominant strategy (thanks Marc!), so perhaps there is no determinate answer.
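A brute-force check of the pure strategy profiles (just a quick sketch using the payoffs from the table) finds two pure Nash equilibria, (R1,C2) and (R2,C1); there is also a mixed equilibrium, and neither player has a dominant strategy:

```python
from itertools import product

# Common payoff for both players, indexed by (row strategy, column strategy).
payoff = {("R1", "C1"): 0, ("R1", "C2"): 2,
          ("R2", "C1"): 2, ("R2", "C2"): 1}
rows, cols = ("R1", "R2"), ("C1", "C2")

def is_pure_nash(r, c):
    """Neither player can gain by unilaterally deviating from (r, c)."""
    row_ok = all(payoff[(r, c)] >= payoff[(r2, c)] for r2 in rows)
    col_ok = all(payoff[(r, c)] >= payoff[(r, c2)] for c2 in cols)
    return row_ok and col_ok

print([profile for profile in product(rows, cols) if is_pure_nash(*profile)])
# [('R1', 'C2'), ('R2', 'C1')]: two equally good equilibria, and without
# communication neither player can tell which one to aim for.
```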
A coin is to be tossed. Expert A tells you that it will land heads with probability 0.9; expert B says the probability is 0.1. What should you make of that?
Answer: if you trust expert A to degree a and expert B to degree b (with a and b summing to 1) and have no other relevant information, your new credence in heads should be a*0.9 + b*0.1. So if you trust both of them equally, your credence in heads should be 0.5: you should be neither confident that the coin will land heads nor confident that it will land tails. Obviously, you shouldn't take the objective chance of heads to be 0.5, contradicting both experts. A credence of 0.5 is compatible with being certain that the chance is either 0.1 or 0.9. Credences are not opinions about objective chances.
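Spelled out as a toy calculation, with the equal-trust weights from the example:

```python
# Linear averaging of the two expert opinions, with trust weights a and b.
a, b = 0.5, 0.5                       # equal trust in experts A and B
credence_heads = a * 0.9 + b * 0.1
print(credence_heads)                 # 0.5

# The 0.5 is an expectation over chance hypotheses, not the opinion that the
# chance is 0.5: all the weight sits on chance 0.9 and chance 0.1.
credence_in_chance = {0.9: a, 0.1: b}
print(sum(ch * w for ch, w in credence_in_chance.items()))   # also 0.5
```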
What about this much simpler argument for halfing:
As usual, Sleeping Beauty wakes up on Monday, knowing that she will have an indistinguishable waking experience on Tuesday iff a certain fair coin has landed tails. Thirders say her credence in the coin having landed heads should be 1/3; halfers say it should be 1/2.
Now suppose before falling asleep each day, Beauty manages to write down her present credence in heads on a small piece of paper. Since that credence was 1/2 on Sunday evening, she now (on Monday) finds a note saying "1/2".
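(Just to show where the two numbers come from, and not as an argument for either side, here is a small simulation: among awakenings, heads comes up in about a third of the cases; among runs of the experiment, in about half.)

```python
import random

# One awakening on heads, two (Monday and Tuesday) on tails. The simulation
# only shows where the two figures come from; it doesn't settle which way of
# counting is the right guide to Beauty's credence.
random.seed(0)
runs = 100_000
heads_runs = 0
awakenings = 0
heads_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5
    n = 1 if heads else 2
    awakenings += n
    if heads:
        heads_runs += 1
        heads_awakenings += 1

print(heads_awakenings / awakenings)   # about 1/3: heads among awakenings
print(heads_runs / runs)               # about 1/2: heads among runs
```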
I've written a short paper arguing that the Absentminded Driver paradox is based on the thirder solution to Sleeping Beauty, and can be neatly explained away by adopting the halfer solution: "The Absentminded Driver: no paradox for halfers". As always, comments are very welcome.