If people disagree about whether a sentence S is true in a thought experiment, what could explain the disagreement?
1) They disagree about the meaning of S. Perhaps one party uses 'zombie' for revived corpses whereas the other uses it for people without phenomenal consciousness. The disagreement is 'merely verbal'.
That's not to say it isn't a serious disagreement, in particular if both parties think their usage corresponds to the folk conception, that is, if what they disagree about is whether S is true in the thought experiment according to the common, conventional usage of S in their community. In this case the disagreement can't be resolved by mere stipulation.
If you've followed this blog for a while, you'll have noticed that I'm occasionally worried about the status of shared truth-conditions in a linguistic community. Here's my current opinion.
First the problem. We can use language to communicate how things are. By saying "I have a headache" I can let you know that I have a headache, roughly because it is common knowledge between us that people typically utter the words "I have a headache" only when they have a headache. In general, a sentence S can be used to convey the information that certain conditions obtain only if both speaker and hearer know that the hearer will take an utterance of S as evidence that those conditions obtain. Let's call those conditions the 'truth-conditions of S'. (The name is a bit misleading because it is often used for the counterfactual conditions under which S would be true. In this sense, the truth-conditions of "water isn't H2O" are nowhere satisfied. But clearly that sentence could and can be used to convey information, so these counterfactual conditions aren't the truth-conditions I'm talking of. The truth-conditions I'm talking of are given by the sentence's A-intension.)
So Lewis says that a language L is used by a population P iff there prevails in P a convention of truthfulness and trust in L.
This requirement for language use seems far too strong, given Lewis's account of conventions.
The most obvious problem is the condition that for a regularity to be a convention, it must be common knowledge in the population that it is a convention. Lewis offers some weak readings of this condition, but even his weakest versions rule out that sufficiently many members of the population doubt or deny that the regularity is a convention. So if there were sufficiently many French speakers who believed that their language is completely innate, they would not partake in the convention of truthfulness and trust in French, and thus would not use French, on Lewis's account. It even suffices if sufficiently many French speakers merely believe that there are enough who believe this, or believe that there are enough who believe that there are enough who believe it.
The main difference between Lewis's account of language use in Convention and his account in "Languages and Language" (and later works) is that in the latter the convention required for a language L to be used is a convention of truthfulness and trust in L, whereas in the former it was only a convention of truthfulness. I wonder if there are any good reasons for this change.
Suppose in a certain community there exists a convention of truthfulness in L. On Lewis's analysis of conventions this means that within the community,
I'm always worried when a philosopher claims that it's a virtue of his theory that it rules out certain kinds of scepticism, or when a philosopher criticizes another philosopher (say, a contextualist) for not doing so.
I suppose it would be a good thing if newspapers always told the truth. But what would you say if I offered you a theory on which it is ruled out a priori that something false could be written in a newspaper? That wouldn't be a point in favour of my theory. For it seems intuitively obvious that something false could be written in a newspaper. A theory isn't good just because it entails something which, if true, would be good.
A zombie world is a world physically just like our world but in which there is no consciousness. Must a type-A materialist deny the conceivability of zombie worlds? No, not quite.
Compare the rather uncontroversial hypothesis that "the HI virus" denotes the (type of) virus responsible for most cases of AIDS. Is it conceivable that a world could be biologically just like ours but not contain the HI virus? Yes, for it might turn out that scientists have been wrong all along and no virus is involved in most cases of AIDS. If it turned out this way, our own world would be a world biologically just like ours but not containing the HI virus.
Let P be a proposition that you neither believe to be true nor believe to be false, say Goldbach's Conjecture. Since you know that you don't believe P (otherwise you couldn't have chosen it), your conditional subjective probability for [P and I don't believe P] given P should be close to 1. However, if you were to learn that P, your subjective probability for [P and I don't believe P] shouldn't be close to 1, but close to 0. So is this a case where you shouldn't conditionalize?
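To make the conflict explicit, here is a rough sketch: write Cr for your current credence function, Cr⁺ for your credence function after learning P, and B for the proposition that you believe P. Then

\[ \mathrm{Cr}(P \wedge \neg B \mid P) \;=\; \mathrm{Cr}(\neg B \mid P) \;\approx\; 1, \]

since you are confident that you don't believe P; but

\[ \mathrm{Cr}^{+}(P \wedge \neg B) \;\approx\; 0, \]

since once you have learned P you do believe it; and yet conditionalization requires \( \mathrm{Cr}^{+}(P \wedge \neg B) = \mathrm{Cr}(P \wedge \neg B \mid P) \).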
Merlin is bound to disappear at noon, taking with him all physical traces of his existence. Shortly before his magic disappearance, he casts a spell. As a result, at noon on the following day, the prince turns into a frog.
In virtue of what does the spell cause the metamorphosis? For instance, it is not at all clear that by Lewis's standards of similarity, some world containing neither spell nor metamorphosis is more similar to actuality than any world not containing the spell but containing the metamorphosis. The problem is that the only trace left by the spell, after Merlin's magic disappearance, is the metamorphosis itself:
Suppose you and I both face a choice between several different options. Say, we both have to pick a ball out of a bag of 100 balls. We win a prize if we make the same choice. But we have no means to communicate. Moreover, our only relevant interest is to win the prize; otherwise we are completely indifferent about the options.
If one of the options is somehow salient, say one ball is red and all the others white, most people will choose that one. And wisely so, as many people following this strategy win the prize, whereas hardly anyone picking a white ball does. However, is this a rational decision among perfectly rational agents who know of each other's rationality and preferences? (I also assume that the agents know that they make exactly the same judgements about salience.)
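For illustration, here is a minimal sketch of the set-up in Python (the numbers and strategies are invented, just to make the payoff asymmetry vivid): two agents independently pick one of 100 balls, with ball 0 as the salient red one, and they win iff they pick the same ball.

```python
import random

def play(strategy, trials=100_000):
    """Fraction of trials in which two independent agents using the same strategy coordinate."""
    wins = 0
    for _ in range(trials):
        a, b = strategy(), strategy()
        wins += (a == b)
    return wins / trials

def pick_salient():
    return 0                        # always take the red ball

def pick_at_random():
    return random.randrange(100)    # ignore salience

print(play(pick_salient))     # 1.0: salience-followers always coordinate
print(play(pick_at_random))   # about 0.01: random pickers almost never do
```

That, of course, is just the statistic already mentioned; the question is whether it gives perfectly rational agents, as opposed to ordinary people, any reason to go for the red ball.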
Last week the RSI got worse, and this week I've spent some more time on the tree prover. Here's the current version. It works in Mozilla and (slower) Opera on Linux, and doesn't work in Konqueror. I don't have any other browsers here, so feedback on how it behaves, especially in Safari and Internet Explorer, is welcome.
The prover is generally faster and more stable than the old one. But it still does badly on some formulas, like ((x(PxRx)y(QySy))(zPzzQz)xy((PxQy)(RxSy))). There are some improvements under the hood (e.g. merging) that would boost performance, but they are currently turned off because they make it very hard to translate the resulting free-variable tableau into a sentence tableau. My plan is to turn these features on automatically when a proof search takes too long, and not to display a tree in that case. I'm also thinking about trying to find simpler proofs after a first proof has been found: the tableau for the above formula doesn't look like the smallest possible one.
To improve the detection of invalid formulas, I've added a very simple countermodel finder. It simply checks all possible interpretations of the root formula over the domains { 0 }, { 0,1 }, etc. This works surprisingly well since many interesting invalid formulas have a countermodel with a very small domain. The countermodels are currently not displayed, but that will change soon.
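This is not the prover's actual code, but the basic idea can be sketched in a few lines of Python (assuming, for simplicity, that formulas contain only unary predicates and are represented as nested tuples):

```python
from itertools import product, combinations

def evaluate(f, domain, interp, env=None):
    """Truth value of formula f in the model (domain, interp) under variable assignment env."""
    env = env or {}
    op = f[0]
    if op == 'not':
        return not evaluate(f[1], domain, interp, env)
    if op == 'and':
        return evaluate(f[1], domain, interp, env) and evaluate(f[2], domain, interp, env)
    if op == 'or':
        return evaluate(f[1], domain, interp, env) or evaluate(f[2], domain, interp, env)
    if op == '->':
        return (not evaluate(f[1], domain, interp, env)) or evaluate(f[2], domain, interp, env)
    if op == 'all':
        return all(evaluate(f[2], domain, interp, {**env, f[1]: d}) for d in domain)
    if op == 'some':
        return any(evaluate(f[2], domain, interp, {**env, f[1]: d}) for d in domain)
    pred, var = f                       # atomic case, e.g. ('P', 'x')
    return env[var] in interp[pred]

def find_countermodel(formula, predicates, max_size=3):
    """Look for an interpretation over {0}, {0,1}, ... that makes the formula false."""
    for n in range(1, max_size + 1):
        domain = range(n)
        # all 2^n possible extensions for a unary predicate
        extensions = [set(c) for r in range(n + 1) for c in combinations(domain, r)]
        for ext in product(extensions, repeat=len(predicates)):
            interp = dict(zip(predicates, ext))
            if not evaluate(formula, domain, interp):
                return list(domain), interp
    return None

# Example: "∀x(Px → Qx) → (∃xPx → ∀xQx)" is invalid and has a two-element countermodel.
f = ('->', ('all', 'x', ('->', ('P', 'x'), ('Q', 'x'))),
           ('->', ('some', 'x', ('P', 'x')), ('all', 'x', ('Q', 'x'))))
print(find_countermodel(f, ['P', 'Q']))
```

For a formula like the example above, a two-element countermodel turns up almost immediately, which is why the brute-force search does so well in practice despite its exponential blow-up on larger domains.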