Alice is randomly selected from her population to be tested for a
rare genetic disorder that affects about one in 10,000 people. The
test is accurate 99 percent of the time, both among subjects that have
the disorder and among subjects that don't. Alice's test comes back
positive.
Call the information in the previous paragraph E, and suppose it's
all you know about the situation. How confident are you that Alice has
the disorder?
Letting our subjective probabilities be guided by the stated
frequencies, we can use Bayes' Theorem to figure out that P(disorder |
positive) = P(positive | disorder) * P(disorder) / (P(positive |
disorder) * P(disorder) + P(positive | ~disorder) * P(~disorder)) =
0.99 * 0.0001 / (0.99 * 0.0001 + 0.01 * 0.9999) = 0.0098. Assume then
that your degree of belief is about 0.01.
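In case you want to check the arithmetic, here is a minimal Python sketch of the calculation (the variable names are mine, not part of the example):

    # Posterior probability that Alice has the disorder, given a positive test.
    # The numbers are the ones stated above: base rate 1 in 10,000, accuracy 99%.
    prior = 0.0001              # P(disorder)
    sensitivity = 0.99          # P(positive | disorder)
    false_positive_rate = 0.01  # P(positive | ~disorder)

    # Bayes' Theorem
    numerator = sensitivity * prior
    denominator = numerator + false_positive_rate * (1 - prior)
    posterior = numerator / denominator

    print(round(posterior, 4))  # 0.0098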
Expressions like 'P(A/B)', or 'the probability of A given B', seem
to be used in various ways. On one usage, P(A/B) equals
P(AB)/P(B), at least if P(B) > 0. Call this the ratio
usage. Simple versions of the ratio usage define P(A/B) as
P(AB)/P(B), and so entail that P(A/B) is undefined whenever
P(B)=0. But I would like to admit views into the family on which
P(A/B) is taken as a primitive binary probability, governed by
something like the Popper-Rényi conditions.
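In symbols, the ratio usage says:

    \[
    P(A/B) \;=\; \frac{P(A \wedge B)}{P(B)} \qquad \text{whenever } P(B) > 0.
    \]

On the primitive reading, by contrast, P(A/B) is not defined by this ratio at all; it is only constrained by axioms, for example (this is roughly Popper's multiplication axiom) \( P(A \wedge B / C) = P(A / B \wedge C) \cdot P(B / C) \), so that P(A/B) can be well-defined even when P(B) = 0.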
One might suggest that for any English sentence S, 'S is true' has the
same meaning as S. Assuming compositionality, it would follow that the
two are intersubstitutable in every context. But they are not.
First of all, they are not intersubstitutable in attitude reports
and speech reports. I don't think this is very problematic because such
reports are partly quotational, and of course expressions with the
same meaning aren't always intersubstitutable inside quote marks. But
'S is true' and S are also not intersubstitutable in simple
intensional contexts, as witnessed by examples like
There are familiar semantic paradoxes for "truth" and "reference", such as the Liar paradox and Berry's paradox. I would have thought that there should be similar paradoxes for "expression", i.e. for the relation between a sentence S and the proposition expressed by S. A quick DuckDuckGo search didn't come up with anything. Pointers?
Here is a Liar-style one I came up with myself. Assume propositions are sets of worlds (which is the case I'm interested in). Consider the sentence
E: E expresses the empty set.
If E is true, then the proposition it expresses contains the actual world, in which case E doesn't express the empty set. So E can't be true. Since we've just proved not-E from no empirical assumptions, ~E expresses the set of all worlds. Hence E expresses the empty set. So E is true. Contradiction.
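To make the background assumptions explicit, write $[S]$ for the proposition (set of worlds) expressed by S, $W$ for the set of all worlds and $@$ for the actual world, and assume that S is true iff $@ \in [S]$ and that $[\lnot S] = W \setminus [S]$. Then the argument runs as follows (my reconstruction):

    1. Suppose E is true, i.e. $@ \in [E]$.
    2. Then $[E] \neq \emptyset$, so E does not express the empty set; but that is just what E says, so E is not true, contradicting 1.
    3. Hence E is not true. Since this was established without empirical assumptions, it holds at every world, so $[\lnot E] = W$.
    4. By complementation, $[E] = W \setminus [\lnot E] = \emptyset$.
    5. But then E expresses the empty set, which is what E says, so E is true, contradicting 3.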
Yet another paper on counterpart-theoretic semantics: Generalising Kripke Semantics for Quantified Modal Logics. This one is a bit more technical than the others. I use a broadly counterpart-theoretic model theory to construct completeness proofs for very basic quantified modal logics, such as the combination of positive free logic and K. I also play around with adding an object-language substitution operator. There are some unfinished sections at the end, but as I haven't been working on the paper since January, I thought I might as well upload the current version. All the proofs are spelled out in detail, which makes the whole thing ridiculously long.
I'm not much of a logician, so I'd be very interested to hear if this looks like it is worth pursuing any further.
I thought a bit about counterpart-theoretic semantics last year, both for natural language and for quantified modal logic. Here's a paper in which I present my preferred version of this framework as applied to natural language: Counterpart Theory and the Paradox of Occasional Identity. Apart from the semantics itself, my main claim is that the advantages of counterpart semantics do not require the metaphysics of "counterpart theory".
Here is another paper which covers related ground, but from a more logical point of view: How Things are Elsewhere: Adventures in Counterpart Semantics. Comments on either paper are very welcome.
I've just replaced the Online Papers in Philosophy Feed with a newer version. Let me know if you run into any problems with that. (You may also consider switching to a feed from PhilPapers.)
Have I mentioned that the source code for the scripts that generate the feed is on GitHub? Well, now I have.
(While I'm in the swing of mentioning, I might as well also mention (i) that my paper on updating self-locating beliefs is forthcoming in Phil Studies, (ii) that I won't be at the AAP this year, although I will be at various other events, like here, here and there, and (iii) that Holly and I are not "in a relationship" any more. In case you wondered about any of these.)
To some extent, one can account for semantic phenomena without
assigning meanings to words or sentences or thoughts. For instance, we
might say that beliefs and other attitudes are relations to
sentences, i.e. to strings of symbols. Roughly, to believe a
sentence S is to be disposed to utter (or assent to) S (or some
translation of S) under certain conditions. When people talk to each
other, such dispositions may be transferred: after hearing
me utter the sounds "it is raining", you acquire the disposition to
utter those sounds yourself. Apart from communication, we can also
account for things like synonymy and analyticity. Roughly, two sentences
are synonymous if necessarily, anyone who stands in the belief
relation to one of them also stands in the belief relation to the
other. There is no compositional semantics in this picture, because
there is no semantics at all. But there might be recursive rules for
translating from one language to another.
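As a toy illustration of what such recursive translation rules might look like, here is a little Python sketch; the mini-languages and the lexicon are invented for the example:

    # A toy recursive translation scheme between two invented mini-languages.
    # Atomic expressions are looked up in a lexicon; complex sentences are
    # translated by translating their parts. No meanings are assigned anywhere:
    # the rules simply map strings to strings.
    LEXICON = {"it-rains": "es-regnet", "it-snows": "es-schneit",
               "not": "nicht", "and": "und"}

    def translate(sentence):
        if isinstance(sentence, str):                        # atomic expression
            return LEXICON[sentence]
        return tuple(translate(part) for part in sentence)   # recurse on the parts

    print(translate(("not", ("it-rains", "and", "it-snows"))))
    # ('nicht', ('es-regnet', 'und', 'es-schneit'))

The point of the sketch is just that the rules operate on expressions directly, without a detour through meanings, which fits the picture described above.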
A lot has been written in the last 10 years or so on updating
self-locating beliefs, mostly in the context of the Sleeping Beauty
problem. One thing almost all of these papers have in common is that
they quote Lewis's remark in "Attitudes de dicto and de se" (1979,
p.534), where he says:
it is interesting to ask what happens to decision theory
if we take all attitudes as de se. Answer: very little. We replace the
space of worlds by the space of centered worlds, or by the space of
all inhabitants of worlds. All else is just as before.
This is supposed to imply that Lewis took standard
conditionalisation to be the correct update rule for self-locating
belief.
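For what it's worth, here is what standard conditionalisation comes to in this setting (my formulation, not Lewis's): if Cr is the old credence function and A and E are sets of centered worlds, then on learning E the new credence in A is

    \[
    Cr_{new}(A) \;=\; Cr(A \mid E) \;=\; \frac{Cr(A \cap E)}{Cr(E)} \qquad \text{provided } Cr(E) > 0.
    \]

The only change from the ordinary rule is that the points of the probability space are centered worlds rather than uncentered worlds.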
Professor Procrastinate has to make an important phone call. The
call is long overdue because Procrastinate has been playing Farmville
all week. The problem is that Procrastinate values current pleasure
more highly than future pleasure. So when he applies his decision theory,
he finds that it is better to play some more Farmville now and make
the phone call later than to make the call now: it doesn't matter much
whether the call is delayed by a few more hours, and this way the
immediate future will be much more pleasant.
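Here is a minimal sketch of the kind of calculation Procrastinate might be running, with made-up utilities and a simple discount on second-period pleasure (none of the numbers are from the story):

    # Toy two-period decision problem with temporal discounting.
    DISCOUNT = 0.5   # future pleasure counts only half as much as current pleasure

    def value(now_utility, later_utility):
        return now_utility + DISCOUNT * later_utility

    # Option 1: make the call now (utility 2.0), play Farmville later (5.0).
    call_now = value(2.0, 5.0)   # 2.0 + 0.5 * 5.0 = 4.5

    # Option 2: play Farmville now (5.0), make the slightly less timely call later (1.8).
    play_now = value(5.0, 1.8)   # 5.0 + 0.5 * 1.8 = 5.9

    print(call_now, play_now)    # 4.5 5.9

With DISCOUNT set to 1.0 the ordering flips (7.0 against 6.8), so it really is the extra weight on current pleasure that makes procrastinating come out ahead.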