A few more comments on why I think the setup of Weinberg, Nichols and Stich's experiments on intuitions is unfortunate. The problem seems particularly obvious in the experiments on semantic intuitions reported by Machery, Mallon, Nichols and Stich, but I think it carries over to many (though perhaps not all) of the experiments of Weinberg, Nichols and Stich. Here is one of the questions Machery, Mallon, Nichols and Stich asked:
I don't understand what's so bad about admitting that people may use and understand the same words in slightly different ways.
Suppose there is a community of Martians who have a word for true
justified belief, but no word for knowledge. When these Martians learn
English, they might at first take "knowledge" to be synonymous with
their word: the difference hardly shows up in ordinary contexts. So when they use "knowledge", they mean true justified belief.
Eliezer Yudkowsky, in his Intuitive Explanation of Bayesian Reasoning, argues that it is irrational to justify the belief that, if a biological war breaks out, it won't wipe out humanity by pointing out that one is an optimist:
p(you are currently an optimist | biological war occurs within ten years and wipes out humanity) =
p(you are currently an optimist | biological war occurs within ten years and does not wipe out humanity)
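To make the point vivid, here is a minimal sketch in Python with made-up numbers (the priors and the 0.6 likelihood are purely illustrative): if being an optimist is exactly as likely whether or not the war wipes out humanity, then conditioning on your optimism leaves the probabilities of the two outcomes untouched.

```python
import math

# Hypothetical numbers, for illustration only.
# Both hypotheses are understood as conditional on a biological war occurring.
prior_wipeout = 0.3        # P(war wipes out humanity)
prior_survival = 0.7       # P(war does not wipe out humanity)

# The crucial equality from above: optimism is equally likely either way.
p_optimist_given_wipeout = 0.6
p_optimist_given_survival = 0.6

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_optimist = (p_optimist_given_wipeout * prior_wipeout
              + p_optimist_given_survival * prior_survival)
post_wipeout = p_optimist_given_wipeout * prior_wipeout / p_optimist
post_survival = p_optimist_given_survival * prior_survival / p_optimist

print(post_wipeout, post_survival)  # approximately 0.3 0.7 -- the priors, unchanged
print(math.isclose(post_wipeout, prior_wipeout))    # True
print(math.isclose(post_survival, prior_survival))  # True
```

If the two likelihoods differed, that is, if one's optimism were somehow correlated with the outcome, the posteriors would shift; as long as they are equal, optimism is no evidence either way.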
I'm preparing an introductory course on Wissenschaftstheorie that I'm supposed to teach next semester at the institute of library science. Unfortunately, the textbooks currently available in German are not nearly as good as many English ones.
Another (related) problem is that I'm not sure what Wissenschaftstheorie actually is. Well, I believe it is roughly the same as philosophy of science. But looking through German textbooks and the course guide of my predecessor, apparently some people think it also includes some or all of history and sociology of science, general epistemology, methodology, logic, philosophy of language, and stuff like hermeneutics and dialectics (whatever that is). I guess I'll stick to philosophy of science, even if that means using old textbooks by Carnap and Hempel.
The new tree prover has now replaced the old one at /logik/trees. I'm still grateful for all kinds of feedback, even if it's just letting me know that it works on your machine. Unlike the old version, which has retired to /logik/oldtrees/, the new one definitely no longer works with Internet Explorer on the Mac or with Netscape 4. If you're still using one of these browsers, you should really consider switching to Mozilla (or, on MacOS Classic, WaMCom).
If people disagree about whether a sentence S is true in a thought experiment, what could explain the disagreement?
1) They disagree about the meaning of S. Perhaps one party uses 'zombie' for revived corpses whereas the other uses it for people without phenomenal consciousness. The disagreement is 'merely verbal'.
That's not to say it isn't a serious disagreement, in particular if both parties think their usage corresponds to the folk conception, that is, if what they disagree about is whether S is true in the thought experiment according to the common, conventional usage of S in their community. In this case the disagreement can't be resolved by mere stipulation.
If you've followed this blog for a while, you'll have noticed that I'm occasionally worried about the status of shared truth-conditions in a linguistic community. Here's my current opinion.
First, the problem. We can use language to communicate how things are. By saying "I have a headache" I can let you know that I have a headache roughly because it is common knowledge between us that people typically utter the words "I have a headache" only when they have a headache. In general, a sentence S can be used to convey the information that certain conditions obtain only if both speaker and hearer know that the hearer will take an utterance of S as evidence that the conditions obtain. Let's call those conditions the 'truth-conditions of S'. (The name is a bit misleading because it is often used for the counterfactual conditions under which S would be true. In this sense, the truth-conditions of "water isn't H2O" are nowhere satisfied. But clearly that sentence could and can be used to convey information, so these counterfactual conditions aren't the truth-conditions I'm talking of. The truth-conditions I'm talking of are the sentence's A-intensions.)
So Lewis says that a language L is used by a population P iff there prevails in P a convention of truthfulness and trust in L.
This requirement for language use seems far too strong, given Lewis's account of conventions.
The most obvious problem is the condition that for a regularity to be a convention, it must be common knowledge in the population that it is a convention. Lewis offers some weak readings of this condition, but even his weakest versions rule out the possibility that sufficiently many members of the population doubt or deny that the regularity is a convention. So if there were sufficiently many French speakers who believed that their language is completely innate, they would not partake in the convention of truthfulness and trust in French, and thus would not use French, on Lewis's account. It even suffices if sufficiently many French speakers merely believe that enough others believe this, or believe that enough believe that enough believe it.
The main difference between Lewis's account of language use in
Convention and his account in "Languages and Language" (and later works) is that in the latter the convention required for a language L to be used is a convention of truthfulness and trust in L, whereas in the former it was only a convention of truthfulness. I wonder if there are any good reasons for this change.
Suppose in a certain community there exists a convention of truthfulness in L. On Lewis's analysis of conventions this means, roughly, that within the community (almost) everyone is truthful in L, (almost) everyone expects (almost) everyone else to be truthful in L, and (almost) everyone prefers to be truthful in L on condition that the others are, where all of this is common knowledge in the community.
I'm always worried when a philosopher claims that it's a virtue of his theory that it rules out certain kinds of scepticism, or when a philosopher criticizes another philosopher (say, a contextualist) for not doing so.
I suppose it would be a good thing if newspapers always told the truth. But what would you say if I offered you a theory on which it is ruled out a priori that something false could be written in a newspaper? That wouldn't be a point in favour of my theory. For it seems intuitively obvious that something false could be written in a newspaper. A theory isn't good just because it entails something which, if true, would be good.