Humean Everettian chances
Many of our best scientific theories make only probabilistic predictions. How can such theories be confirmed or disconfirmed by empirical tests?
The answer depends on how we interpret the probabilistic predictions. If a theory T says 'P(A)=x', and we interpret this as meaning that Heidi Klum is disposed to bet on A at odds x : 1-x, then the best way to test T is by offering bets to Heidi Klum.
Nobody thinks this is the right interpretation of probabilistic statements in physical theories. Some hold that these statements are rather statements about a fundamental physical quantity called chance. Unlike other quantities such as volume, mass or charge, chance pertains not to physical systems, but to pairs of a time and a proposition (or perhaps to pairs of two propositions, or to triples of a physical system and two propositions). The chance quantity is independent of other quantities. So if T says that in a certain type of experiment there's a 90 percent probability of finding a particle in such-and-such region, then T entails nothing at all about particle positions. Instead it says that whenever the experiment is carried out, then some entirely different quantity has value 0.9 for a certain proposition. In general, on this interpretation our best theories say nothing about the dynamics of physical systems. They only make speculative claims about a hidden magnitude independent of the observable physical world.
Apart from subjectivist accounts that treat all statements of probability as somehow concerning degrees of belief (although not necessarily the beliefs of Heidi Klum), the main alternatives to the "primitive quantity" view are Humean accounts. Roughly speaking, Humeans read probabilistic statements in physical theories as approximate statements about relative frequencies. If T assigns probability 0.9 to outcome O in experiment E, then T entails that if the setup E is sufficiently common, then outcome O occurs about nine times in every ten trials. There is no hidden quantity of chance; probabilistic physical theories make statements about ordinary physical quantities.
One advantage of the Humean interpretation is that it explains how one can confirm or disconfirm probabilistic theories. Again, if T says that 90 percent of E situations produce outcome O, then we can evaluate T by measuring the outcome in E situations. If we test 1000 instances of E and find roughly 900 cases with O, things are looking good for theory T.
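To make the underlying arithmetic vivid, here is a minimal sketch (in Python, with made-up numbers and an invented rival hypothesis, not anything drawn from the literature) of how an observed frequency of roughly 900 in 1000 bears on T:

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Probability of exactly k 'O' outcomes in n independent E-trials,
    assuming each trial produces O with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 1000, 900                            # 1000 E-trials, 900 observed Os
like_T     = binomial_likelihood(k, n, 0.9)  # T: 90 percent of Es produce O
like_rival = binomial_likelihood(k, n, 0.5)  # hypothetical rival: 50 percent

# The likelihood ratio measures how strongly the data favour T over the rival.
print(like_T / like_rival)   # astronomically large: the data strongly favour T
```

The particular rival (50 percent) is of course arbitrary; the point is only that the observed frequency fits T far better than it fits hypotheses that assign O a markedly different probability.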
It is worth thinking about why exactly this is true. Consider a single observation of O or not-O in a situation of type E. According to T, 90 percent of E situations produce O. Conditional on this assumption, to what degree should we expect to observe O in the case at hand? Neither logic nor probability theory dictates an answer. For example, you might be certain that none of the Es you observe are Os, even if the overall frequency of Os is 90 percent. Sometimes this may even be rational, if you have unusual evidence suggesting that the things you observe are different from the things you don't observe. But in the absence of such evidence, such an attitude would be irrational. You should expect the observed to resemble the unobserved. Metaphorically speaking, you should take the observed cases to be a random sample of all cases. It follows that you should assign degree of belief 0.9 to finding outcome O. In general,
(PPH) if Cr is a rational belief function and T says that the proportion of E situations with outcome O is x, then in the absence of unusual evidence (and setting aside other unusual circumstances), Cr(O / E & T) = x.
In this way, the Humean interpretation makes sense of the way scientists actually go about testing probabilistic theories. They assume that if T assigns probability x to outcome O under condition E, then conditional on T one should expect this outcome to degree x. This is enshrined in a rule widely known as "Bayes' Theorem", which is actually a combination of (a) the theorem philosophers call "Bayes's Theorem", (b) the rule of conditionalisation, and (c) a version of the Principal Principle.
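Spelled out in the notation of (PPH), the combination looks roughly like this; what follows is a schematic reconstruction, not a formula taken from any particular textbook:

```latex
% (a) Bayes's Theorem, (b) conditionalisation, (c) a version of the Principal Principle
\begin{align*}
\text{(a)}\quad & Cr(T / O \,\&\, E) \;=\; \frac{Cr(O / E \,\&\, T)\; Cr(T / E)}{Cr(O / E)}\\[4pt]
\text{(b)}\quad & Cr_{\text{new}}(T) \;=\; Cr(T / O \,\&\, E)
  \quad\text{upon observing O in a situation of type E}\\[4pt]
\text{(c)}\quad & Cr(O / E \,\&\, T) \;=\; x
  \quad\text{where T says that a fraction x of E situations produce O}
\end{align*}
```

Plugging (c) into (a) and updating by (b) is, in effect, what scientists do when they raise their confidence in T upon observing the frequencies T predicts.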
By contrast, our scientific practice makes no sense at all on the "primitive quantity" account: why should we expect outcome O in experiment E in proportion to the value some fundamental quantity assigns to this proposition, given that the quantity in no way constrains actual outcomes? Anti-Humeans agree that this cannot be explained. So they declare it a brute and basic norm of rationality.
So far, so familiar. Now let's turn to Everettian quantum mechanics (EQM, for short). On the surface, EQM is not a probabilistic theory. It describes the deterministic evolution of a wavefunction in a high-dimensional state space. Roughly speaking, every point in the space represents a multitude of classical physical states existing in parallel, each associated with a numerical value, its "branch weight". Over time, these systems "diverge" or "branch off" from one another, in the sense that interference effects between them become more and more unnoticeable. Probability enters the picture because the branch weights are supposed to be treated just like objective probabilities. In particular, if we set up an experiment which, according to some hypothesis T about the wavefunction, produces outcome O on a branch with weight 0.9 and not-O on a branch with weight 0.1, we should expect observation of O to degree 0.9. Without this connection between branch weights and rational credence, EQM could not make sense of the way physicists actually test hypotheses about the wavefunction.
But what explains this link between branch weight and rational credence? Following the lead of anti-Humeans about chance, some friends of EQM have suggested that this is yet another basic norm of rationality. I find this idea just as absurd as in the case of chance. Others have suggested that the link can be derived from decision-theoretic principles about rational behaviour in an Everett world. I don't think these arguments work either.
But maybe we can give an interpretation of branch weights as something like Humean chances, in which case the Humean derivation of the Principal Principle would carry over.
Imagine our universe consists of isolated parts or "islands", and suppose things conform to different regularities in different islands. We should then be interested in theories that capture the regularities in our island. A theory which says that all Fs are Gs might not be true in the universe as a whole, but it might be true for our island. And that might be a useful thing to know. Similarly, if in our island 90 percent of Fs are Gs, a theory that assigns probability 0.9 to G given F could be useful and informative, even if it does not fit the frequencies in the entire universe.
An Everett world can be understood as a large collection of isolated more ordinary worlds: the branches. These sub-worlds display very different regularities. In some, coins always land heads, in others tails. Some are very irregular, with frequencies fluctuating widely between different regions of space and time. One of these sub-worlds is our own. This is the kind of situation in which it is not very useful to know about general regularities in the universe -- there are few of those -- but more useful to look at local regularities concerning the island we inhabit. Fortunately, our island is quite regular. (Arguably, it would be irrational to think otherwise, unless pressured by strong evidence.) Some of the regularities in our island are absolute ("all Fs are Gs"), others are stochastic ("90 percent of Fs are Gs"). These regularities are captured by the branch weights in EQM.
Suppose a hypothesis T about branch weights says that among branches in state F, those developing into state G have relative weight 0.9, while those developing into not-G have weight 0.1. Interpret this along Humean lines as an approximate claim about relative frequencies in our branch. Then (PPH) applies and we get the required link between branch weights and credence. The only difference is that we now assume that you should take your observations to be "random samples" from your island, rather than from the entire universe. This makes sense, since nobody can witness events from different islands.
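In the same notation as (PPH), the suggestion comes to roughly this (again a schematic reconstruction, not a quotation):

```latex
% Branch-weight analogue of (PPH), read as an approximate claim about
% relative frequencies in our own branch rather than in the whole universe:
Cr(G / F \,\&\, T) \;=\; x
\quad\text{where T says that the G-branches among the F-branches have relative weight x.}
```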
It is tempting to think of an Everett world as consisting of many different branches associated with fixed weights and then ask on what grounds we should proportion our self-locating beliefs (concerning which branch is ours) to the branch weights. This makes the norm look very mysterious. If all branches are equally real, why should we assume we are twice as likely to be here rather than there just because that corresponds to some primitive physical measure on the branches? What about all the people in low-weight branches where coins keep coming up heads? Should they really give equal credence to heads and tails, as dictated by the branch weights, even though their expectations are constantly defeated by experience? On the present suggestion, the Everettian branch weights are not an objective feature of the universe as a whole. They are an indexical measure, tuned to the events in our branch.
Could this work?