Albert is a One-Boxer

There's something odd about Albert's reasoning:

If that stranger's predictions are true, he probably is a time traveler. I want him to be a time traveler. Therefore I should try to make his predictions come true.

The problem is that by trying to make the predictions come true, Albert decreases the evidential support their truth lends to the claim that the stranger is a time traveler.

Consider a simplified case where you claim to know what I'll be doing at exactly this time next year: that I will raise my left hand. And indeed I will. In general, such knowledge is very surprising, and the fact that you are right supports the assumption that something physically weird is going on. But this effect is entirely canceled if I know your prediction and do my best to make it true. Then it's not at all surprising that I will raise my hand, and therefore this fact doesn't support any weird assumptions.

Albert's case is not quite like that. For if you try hard to make true a detailed description of your life in the next 20 years, it is extremely unlikely that you'll succeed. What's so unlikely is that you will always find yourself in the predicted situations where you can make the predicted decisions.

Nevertheless, even in Albert's case the evidence for the stranger being a time traveler would be stronger if his predictions came true despite the fact that Albert did not try to make them true. So by trying to make them true, Albert slightly increases the probability of their truth, but at the same time slightly decreases their surprisingness. The effects cancel each other, and the net effect of Albert's decision on the time traveler assumption is zero.
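The cancellation effect can be made vivid with a quick Bayesian sketch. All the numbers below are illustrative assumptions, not figures from the story: the point is that trying to fulfil the predictions raises the probability that they come true even if the stranger is no time traveler, which weakens their evidential force, while the marginal probability of the time traveler hypothesis stays fixed at the prior.

```python
# Illustrative Bayesian sketch of the cancellation effect.
# All numbers are assumptions chosen for illustration.

PRIOR = 0.001  # prior probability that the stranger is a time traveler

def posterior(p_true_if_tt, p_true_if_not):
    """P(time traveler | predictions come true), by Bayes' theorem."""
    num = p_true_if_tt * PRIOR
    return num / (num + p_true_if_not * (1 - PRIOR))

# If Albert does NOT try to fulfil the predictions, their coming true
# is very surprising unless the stranger is a time traveler:
post_no_try = posterior(p_true_if_tt=0.9, p_true_if_not=1e-6)  # close to 1

# If Albert DOES try, the predictions are likely to come true either
# way, so their truth is only weak evidence:
post_try = posterior(p_true_if_tt=0.99, p_true_if_not=0.5)  # barely above PRIOR

def marginal(p_true_if_tt, p_true_if_not):
    """Marginal P(time traveler), averaging over whether predictions come true."""
    p_true = p_true_if_tt * PRIOR + p_true_if_not * (1 - PRIOR)
    p_tt_if_false = (1 - p_true_if_tt) * PRIOR / (1 - p_true)
    return (posterior(p_true_if_tt, p_true_if_not) * p_true
            + p_tt_if_false * (1 - p_true))

# Either way, the marginal probability is just the prior: trying raises
# P(predictions true) exactly as much as it dilutes the evidence.
assert abs(marginal(0.99, 0.5) - PRIOR) < 1e-12
assert abs(marginal(0.9, 1e-6) - PRIOR) < 1e-12
```

The last two assertions are just the law of total probability: since Albert's trying has (by stipulation) no causal bearing on whether the stranger is a time traveler, it can only redistribute the evidence, not manufacture it.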


I believe there is something else that's odd about his reasoning. He thinks it's rational to do something that constitutes evidence for a desirable proposition even though there may not be any causal relation between what he does and the truth of the proposition. That sounds like one-boxer reasoning! (A one-boxer is someone who opens only one box in Newcomb's problem.)

However, Albert's story has to be modified significantly to turn it into a Newcomb problem. Thus:

First we have to make sure that whether or not Albert follows the predictions has no causal influence on whether the desired proposition is true. In the original story the desired proposition is that the stranger is Albert's future self. So if time travel involves backward causation, one might say that whatever Albert does after hearing the predictions will be causally relevant to what he does in the future and therefore also to whether he will travel back in time and emerge as that stranger.

Hence our first change is that Albert doesn't really care about whether the stranger is his future self. He only wants him to be a time traveler. Let's say this would be worth a million dollars to him. (Remember how useful it can be to have informants from the future, e.g. on the stock market.)

Second, we have to make sure that following the predictions really does constitute evidence for the desired proposition. As I just argued, this is not so in the original story. One way to make it so is to postulate a psychological (non-strict) law according to which only predictable people can be time travelers. Here is another way, closer to the Newcomb story:

Albert's stranger is one of many strangers who have recently emerged from the labs of the Foundation for Rewarding Predictable Behaviour. Each of these strangers meets a certain person and tells him that he will do certain things in the next hour (and, if you want, also that this and that will happen on the stock market). Then the stranger disappears. But not all strangers tell the truth. Some of them are wrong. They aren't really time travelers and just guess what will happen. Since the Foundation rewards predictability, it has assigned the real time travelers to predictable targets, that is, to persons who are likely to try to realize the stranger's predictions, and the fake time travelers to unpredictable targets.

Now Albert's stranger predicts that he will do various things in the lab. But Albert would rather stay in bed and watch TV. That would be worth 10 dollars to him. On the other hand, if he tries to do all those things in the lab, that will be strong evidence that the stranger is a real time traveler, which would be worth a million to Albert. Should he take the 10 dollars, i.e. watch TV, or work in the lab?

One-boxers reason: The probability that the stranger is a time traveler is much higher if I try to do what he says than if I don't. The Foundation's judgements are very reliable: 99% of those who have followed the predictions have then found that their stranger really is a time traveler. That risk is worth taking. So I should give up the 10 dollars and work in the lab.

Two-boxers reason: Either this guy is a time traveler or not. I can't change that by doing silly things in the lab. So I'd rather take the 10 dollars.
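The two lines of reasoning can be put as a quick expected-value calculation. The 99% reliability figure and the dollar amounts come from the story; the 1% chance of a real time traveler for someone who stays in bed is an assumption added for illustration. One-boxers condition the probabilities on the chosen action; two-boxers hold them fixed, since the action cannot cause anything.

```python
# Expected-value sketch of the one-boxer vs. two-boxer reasoning.
# From the story: 99% reliability, $1,000,000 for a real time
# traveler, $10 for staying in bed. The 1% figure for TV-watchers
# is an assumed illustration of the Foundation's matching.

P_TT_IF_LAB = 0.99   # P(time traveler | Albert works in the lab)
P_TT_IF_TV = 0.01    # assumed: unpredictable targets rarely get real ones
MILLION = 1_000_000
TV_VALUE = 10

# One-boxer (evidential) reasoning: condition on the chosen action.
ev_lab = P_TT_IF_LAB * MILLION           # 990000.0
ev_tv = P_TT_IF_TV * MILLION + TV_VALUE  # 10010.0

# Two-boxer (causal) reasoning: whatever P(time traveler) = p is,
# the action can't change it, so going to the lab only forfeits $10.
def causal_ev(p, go_to_lab):
    return p * MILLION + (0 if go_to_lab else TV_VALUE)
```

On the evidential calculation the lab wins by a wide margin; on the causal calculation, `causal_ev(p, False)` beats `causal_ev(p, True)` by exactly 10 dollars for every value of `p`, which is why the two-boxer takes the money.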

Well, that story was quite different from the original one. Nevertheless, the original Albert is also a one-boxer. For his reasoning shows that he wrongly believes himself to be in a situation that very much resembles the revised story: He believes that following the predictions is evidence for the desired time traveler assumption; and he ignores the causal relation between what he will do and whether the stranger really is his future self. On these assumptions, his conclusion is only justified by one-boxer reasoning.
