It is natural to think of a possible world as something like an extremely specific story or theory. Unlike an ordinary story or theory, a possible world leaves no question open. If we identify a theory with a set of propositions, a possible world could be defined as a theory T which is
- maximally specific: T contains either P or ~P, for every proposition P;
- consistent: T does not contain P and ~P, for any proposition P;
- closed under conjunction and logical consequence: if T contains both P and Q, then it contains their conjunction P & Q, and if T contains P, and P entails Q, then T contains Q.
It is often useful to go in the other direction and identify propositions with sets of possible worlds. We can then analyse entailment as the subset relation, negation as complement and conjunction as intersection. Of course, we may not want to say that a world is a (non-empty) set of (consistent) propositions and also that a consistent proposition is a non-empty set of worlds, since these sets should eventually bottom out. But that doesn't seem very problematic, and it is easily fixed as long as there is a simple 1-1 correspondence between worlds and logically closed, consistent and maximally specific theories. In particular, one might suspect that on the present definitions, every logically closed, consistent and maximally specific theory uniquely corresponds to a possible world, namely the sole member of the intersection of the theory's members.
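To make the correspondence vivid, here is a minimal Python sketch with a toy space of four worlds; all the names (WORLDS, theory_of, world_of, and so on) are made up for illustration. It treats propositions as sets of worlds, entailment as subsethood, negation as complement, conjunction as intersection, and recovers a world from its theory as the sole member of the intersection of the theory's members.

```python
from itertools import chain, combinations

# A toy model: four "worlds", propositions as frozensets of worlds.
WORLDS = frozenset({'w1', 'w2', 'w3', 'w4'})

def neg(p):
    """Negation as complement relative to the set of all worlds."""
    return WORLDS - p

def conj(p, q):
    """Conjunction as intersection."""
    return p & q

def entails(p, q):
    """Entailment as the subset relation."""
    return p <= q

def theory_of(world):
    """The maximally specific, consistent, closed theory of a world:
    the set of all propositions (sets of worlds) true at that world."""
    all_props = [frozenset(s) for s in chain.from_iterable(
        combinations(WORLDS, n) for n in range(len(WORLDS) + 1))]
    return {p for p in all_props if world in p}

def world_of(theory):
    """Recover the world: the sole member of the intersection of the theory's members."""
    candidates = set(WORLDS)
    for p in theory:
        candidates &= p
    (w,) = candidates   # fails unless the theory is maximally specific and consistent
    return w

# The 1-1 correspondence: going from a world to its theory and back is the identity.
assert all(world_of(theory_of(w)) == w for w in WORLDS)
```

In this finite setting the round trip from a world to its theory and back is the identity, which is just the 1-1 correspondence between worlds and such theories.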
Imagine you're a hedonist: you don't care about other people, nor
about your past or your distant future. All you care about is how much
money you can spend today. Fortunately, you're on a pension that pays
either $100 or $1000 every day, plus an optional bonus. How much you
get is determined as follows. Every morning, a psychologist shows up
to study your brain. Then he puts two boxes in front of you, one
opaque, the other transparent. You can choose to take either both boxes or
only the opaque one. The transparent box contains a $10 bill. The
opaque box contains nothing if the psychologist has predicted that you
will take both boxes; if he has predicted that you will take one box,
it contains $100. The psychologist's predictions are about 99%
accurate. The content of the boxes you take is your bonus payment. In addition, you get your
ordinary payment, which is either $100 or $1000 depending on how many
boxes you took the previous day: if you took both, you now get $1000,
otherwise $100. The ordinary payment is given to you before the psychologist
studies your brain, so by the time you choose between the two boxes, you already
know how much you received. What do you do?
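For what it's worth, here is a quick simulation of the long-run payoffs, using only the numbers stated above (the policy names and the code structure are mine); it doesn't settle what you should do on any particular morning, which is the point of the question.

```python
import random

# Hypothetical simulation of the pension scheme described above.
# Parameters are taken from the story: $10 transparent box, $100 opaque box,
# 99% predictor accuracy, $1000/$100 ordinary payment depending on yesterday's choice.
random.seed(0)

def simulate(policy, days=100_000):
    """Average daily income for an agent who follows `policy` ('one-box' or
    'two-box') every single day."""
    total = 0
    yesterday_two_boxed = False
    for _ in range(days):
        # Ordinary payment, fixed by yesterday's choice.
        total += 1000 if yesterday_two_boxed else 100
        # The psychologist predicts today's choice with 99% accuracy.
        prediction = policy if random.random() < 0.99 else (
            'two-box' if policy == 'one-box' else 'one-box')
        opaque = 0 if prediction == 'two-box' else 100
        # Bonus payment: the contents of the box(es) taken.
        total += opaque if policy == 'one-box' else opaque + 10
        yesterday_two_boxed = (policy == 'two-box')
    return total / days

print('always one-boxing:', simulate('one-box'))   # roughly $199 per day
print('always two-boxing:', simulate('two-box'))   # roughly $1011 per day
```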
I will probably be in Germany from about mid May until the end of June this year.
If beliefs are modeled by a probability distribution over centered
worlds, belief update cannot work simply by conditionalisation. How
then does it work? The most popular answer in philosophy goes as
follows.
Let P be an agent's credence function at time t1, P' the credence function
at t2, and E the evidence received at t2. Since E is a centered
proposition, it can be true at multiple points within a world.
Suppose, however, that the agent assigns probability 0 to worlds at
which E is true more than once. Then to compute P', first
conditionalise P on the uncentered fragment of E -- i.e. the strongest
uncentered proposition entailed by E. This rules out all worlds at
which E is true nowhere. Second, move the center of each remaining
world to the (unique) point at which E is true.
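Here is a toy implementation of this two-step rule, representing centered worlds as (world, point) pairs; the data structures and names are mine, not part of the proposal itself.

```python
from collections import defaultdict

# A centered world is a (world, point) pair; `credence` maps them to probabilities.
# E is a set of (world, point) pairs: the centered proposition received as evidence.
# As in the text, we assume the agent gives probability 0 to worlds at which
# E is true more than once.

def update(credence, E):
    # Total prior probability of each (uncentered) world.
    world_prob = defaultdict(float)
    for (world, point), p in credence.items():
        world_prob[world] += p

    # The points at which E is true, per world.
    e_points = defaultdict(set)
    for (world, point) in E:
        e_points[world].add(point)

    # Step 1: conditionalise on the uncentered fragment of E,
    # ruling out all worlds at which E is true nowhere.
    surviving = {w: p for w, p in world_prob.items() if e_points[w] and p > 0}
    z = sum(surviving.values())

    # Step 2: move the center of each remaining world to the unique point
    # at which E is true there.
    new_credence = {}
    for world, p in surviving.items():
        (point,) = e_points[world]     # unique by assumption
        new_credence[(world, point)] = p / z
    return new_credence

# Example: two worlds, two times each; E = "it is now t2 and it's raining".
prior = {('rain', 't1'): 0.25, ('rain', 't2'): 0.25,
         ('dry', 't1'): 0.25, ('dry', 't2'): 0.25}
E = {('rain', 't2')}
print(update(prior, E))   # {('rain', 't2'): 1.0}
```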
Alice is randomly selected from her population to be tested for a
rare genetic disorder that affects about one in 10,000 people. The
test is accurate 99 percent of the time, both among subjects that have
the disorder and among subjects that don't. Alice's test comes back
positive.
Call the information in the previous paragraph E, and suppose it's
all you know about the situation. How confident are you that Alice has
the disorder?
Letting our subjective probabilities be guided by the stated
frequencies, we can use Bayes' Theorem to figure out that P(disorder |
positive) = P(positive | disorder) * P(disorder) / (P(positive |
disorder) * P(disorder) + P(positive | ~disorder) * P(~disorder)) =
0.99 * 0.0001 / (0.99 * 0.0001 + 0.01 * 0.9999) ≈ 0.0098. Assume then
that your degree of belief is about 0.01.
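The arithmetic can be checked in a few lines (the variable names are mine):

```python
# Checking the Bayes' Theorem calculation with the stated frequencies.
p_disorder = 1 / 10_000          # base rate of the disorder
p_pos_given_disorder = 0.99      # test accuracy among those who have it
p_pos_given_healthy = 0.01       # false positive rate among those who don't

p_positive = (p_pos_given_disorder * p_disorder
              + p_pos_given_healthy * (1 - p_disorder))
p_disorder_given_pos = p_pos_given_disorder * p_disorder / p_positive

print(round(p_disorder_given_pos, 4))   # 0.0098
```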
Expressions like 'P(A/B)', or 'the probability of A given B', seem
to be used in a variety of ways. On one usage, P(A/B) equals
P(AB)/P(B), at least when P(B) > 0. Call this the ratio
usage. Simple versions of the ratio usage define P(A/B) as
P(AB)/P(B), and so entail that P(A/B) is undefined whenever
P(B) = 0. But I would also like to admit views into the family on which
P(A/B) is taken as a primitive binary probability, governed by
something like the Popper-Rényi conditions.
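A small sketch of the contrast, with made-up worlds and numbers: the simple ratio usage returns no verdict when P(B) = 0, whereas a primitive binary probability would be supplied directly, constrained by axioms rather than computed from the unconditional measure.

```python
from fractions import Fraction

# The simple ratio usage in a finite setting: propositions are frozensets of
# worlds, P is an unconditional probability over worlds. All names are mine.
P_WORLD = {'w1': Fraction(1, 2), 'w2': Fraction(1, 2), 'w3': Fraction(0)}

def prob(a):
    """Unconditional probability of a proposition (a set of worlds)."""
    return sum(P_WORLD[w] for w in a)

def ratio_conditional(a, b):
    """P(A/B) = P(AB)/P(B); undefined (None) when P(B) = 0."""
    if prob(b) == 0:
        return None
    return prob(a & b) / prob(b)

A = frozenset({'w1', 'w3'})
B = frozenset({'w1', 'w2'})
Z = frozenset({'w3'})          # a non-empty proposition with probability 0

print(ratio_conditional(A, B))   # 1/2
print(ratio_conditional(A, Z))   # None: the simple ratio usage gives no verdict
# A primitive binary probability in the Popper-Renyi style would instead take
# values like P(A/Z) as given directly, so P(A/Z) could still be defined.
```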
One might suggest that for any English sentence S, 'S is true' has the
same meaning as S. Assuming compositionality, it would follow that the
two are intersubstitutable in every context. But they are not.
First of all, they are not intersubstitutable in attitude reports
and speech reports. I don't think this is very problematic because such
reports are partly quotational, and of course expressions with the
same meaning aren't always intersubstitutable inside quote marks. But
'S is true' and S are also not intersubstitutable in simple
intensional contexts, as witnessed by examples like
There are familiar semantic paradoxes for "truth" and "reference", such as the Liar paradox and Berry's paradox. I would have thought that there should be similar paradoxes for "expression", i.e. for the relation between a sentence S and the proposition expressed by S. A quick DuckDuckGo search didn't come up with anything. Pointers?
Here is a Liar-style one I came up with myself. Assume propositions are sets of worlds (which is the case I'm interested in). Consider the sentence
E: E expresses the empty set.
If E is true, then the proposition it expresses contains the actual world, in which case E doesn't express the empty set. So E can't be true. Since we've just proved not-E from no empirical assumptions, ~E expresses the set of all worlds. Hence E expresses the empty set. So E is true. Contradiction.
Yet another paper on counterpart-theoretic semantics: Generalising Kripke Semantics for Quantified Modal Logics. This one is a bit more technical than the others. I use a broadly counterpart-theoretic model theory to construct completeness proofs for very basic quantified modal logics, such as the combination of positive free logic and K. I also play around with adding an object-language substitution operator. There are some unfinished sections at the end, but as I haven't worked on it since January, I thought I might as well upload the current version. All the proofs are spelled out in detail, which makes the whole thing ridiculously long.
I'm not much of a logician, so I'd be very interested to hear if this looks like it is worth pursuing any further.