Truth-conduciveness and rational priors
We Bayesians are sometimes bugged about ultimate priors: what probability function would suit a rational agent before the incorporation of any evidence? The question matters not because anyone cares about what someone should believe if they popped into existence in a state of ideal rationality and complete empirical ignorance. It matters because the answer also determines what conclusions rational agents should draw from their evidence at any later point in their lives. Take the total evidence you have had up to now. Given this evidence, is it more likely that Obama won the 2008 election or that McCain won it? There are distributions of priors on which your evidence is a strong indicator that McCain won. Nevertheless, that doesn't seem like a rational conclusion to draw. So there must be something wrong with those priors.
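To see how such perverse priors work, here is a toy Bayesian calculation (the specific numbers are made up purely for illustration). Let $M$ be the proposition that McCain won, $O$ the proposition that Obama won, and $E$ your total evidence -- the news reports, the testimony, and so on -- and suppose for simplicity that $M$ and $O$ exhaust the possibilities. A prior that puts almost all its mass on `grand hoax' worlds, where McCain won but everything was staged to look as if Obama had, might set $P(M) = 0.99$, $P(E \mid M) = 0.9$, $P(O) = 0.01$ and $P(E \mid O) = 1$. Conditionalizing on $E$ then yields

\[
P(M \mid E) = \frac{P(E \mid M)\,P(M)}{P(E \mid M)\,P(M) + P(E \mid O)\,P(O)} = \frac{0.9 \times 0.99}{0.9 \times 0.99 + 1 \times 0.01} \approx 0.99.
\]

By the lights of these priors, your evidence strongly indicates that McCain won.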
It isn't hard to come up with a list of constraints for rational priors. Rational priors should privilege scenarios where our senses are reliable, where the unobserved resembles the observed, where simple theories tend to be true, etc. But where does this list come from? Is it just what we happen to value, or what we happen to mean by `rational'?
I'd like to think otherwise. There is a reason why you shouldn't infer not-p every time somebody tells you p, or why you shouldn't endorse theories with hundreds of unrelated epicycles: doing so will lead you away from the truth. The ultimate epistemic goal is truth; the other norms are merely derivative.
Now we have a simple answer to the question about rational priors. A set of priors is rational to the extent that it leads to true beliefs. Hence the ideal priors assign 1 to every truth and 0 to every falsehood.
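To make this vivid, here is the proposal stated formally, on the standard picture (added here just for exposition) on which propositions are sets of possible worlds and $@$ is the actual world: the ideal priors would be the function that concentrates all probability on $@$,

\[
Cr(p) =
\begin{cases}
1 & \text{if } @ \in p,\\
0 & \text{otherwise.}
\end{cases}
\]

An agent with these priors starts out certain of every actual truth, before any evidence comes in.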
But that doesn't seem right either. We want to distinguish between rational people and people with lots of knowledge. The fact that we have access to Wikipedia doesn't make us much more rational than people in ancient Greece, even though it may give us much more knowledge. Irrational people in fortunate situations can know more than rational people in unfortunate situations. On the other hand, we do expect rational people to draw certain inferences from their evidence, and from a Bayesian perspective this means nothing other than that we expect them to have certain beliefs (that the unobserved resembles the observed, etc.). So why are some beliefs a sign of rationality, and others a mere sign of epistemically fortunate circumstances? Why is it okay to be confident -- without relevant evidence -- that our senses are reliable, but not to believe -- without relevant evidence -- that spacetime is non-Euclidean? Both beliefs would bring us closer to the truth.
It seems that with respect to spacetime, things could easily have turned out otherwise. Rational priors aren't just ones that happen to lead to true beliefs in the particular circumstances of the agent; they would also lead to true beliefs if things were otherwise. The function that assigns 1 to every actual truth leads agents to wildly false beliefs in many counterfactual situations. Rational priors, one might say, are not just truth-conducive, they are robustly truth-conducive.
What does that mean? Perhaps rational agents should assign high prior probability to just those propositions that are true in the actual world and all nearby possibilities. This would explain why one should not give high prior probability to Obama winning the 2008 election: there are nearby worlds where he lost, or didn't even compete.
But that is not quite right either. For one, the proposal should deliver that it is irrational to believe, without any evidence, that spacetime is non-Euclidean, yet rational to be confident that our senses are reliable; but in what sense of `nearby' are there nearby worlds where spacetime is Euclidean and no nearby worlds where we are brains in a vat? After all, the vat scenarios are nomologically possible and the Euclidean scenarios are not.
Secondly, suppose it turns out that Obama actually didn't win the presidency, there never were any 9/11 attacks, and nobody ever landed on the moon. It is all a big hoax. Would it then have been rational, given our actual present evidence, to believe that these events never happened? No. Rather, we'd be in a situation where people with irrational conspiracy theories happen to hit on the truth. Likewise, if we are brains in vats, or live in a world where the unobserved doesn't resemble the observed, it would still be irrational, given our present evidence, to give high credence to these possibilities. So it can't be a conceptual constraint on rational priors that they assign high credence to propositions that are true in the actual world and all nearby possibilities.
So what are robustly truth-conducive priors? They are priors that assign high probability only to propositions that couldn't easily have turned out otherwise -- where `couldn't easily have turned out otherwise' is judged not given how the world actually is, but given nothing at all. Spacetime could easily have turned out to be Euclidean in the sense that, for all we can tell a priori, there is a significant chance that spacetime is Euclidean. In other words, we assign significant prior probability to situations with Euclidean spacetime.
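Schematically (writing $Cr_{us}$ for our own prior distribution -- a label I introduce here only for exposition), the suggestion is that a prior $Cr$ counts as robustly truth-conducive to the extent that

\[
\text{for all } p:\quad Cr(p) \text{ is high} \;\Rightarrow\; Cr_{us}(p) \text{ is high},
\]

i.e. to the extent that $Cr$ is confident only where our own priors are.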
So a distribution of priors is robustly truth-conducive to the extent that it coincides with our distribution of priors. Doesn't this make our list of rationality constraints arbitrary? If we happened to assign high prior probability to spacetime being non-Euclidean, we would judge that it could not easily have turned out that spacetime is Euclidean, and so we would judge that robustly truth-conducive priors assign high prior probability to spacetime being non-Euclidean! -- True, but if we happened to assign high prior probability to spacetime being non-Euclidean, then we would be irrational; for it could easily have turned out that spacetime is Euclidean. And why should we care about what we would judge if we were irrational?
(Why do I think that, within limits, many different distributions of priors are equally rational? Perhaps because I don't actually have a determinate set of priors myself?)
Very interesting! Just to clarify: I take it you're positing that there are primitive facts of *metaphysical probability* -- weightings attached to possible worlds, indicating how likely they are (or were) to be actualized "not given anything at all"?