Mistaken Intuitions
Some people intuit that
- the subject in a Gettier case has knowledge;
- Saul Kripke has his parents essentially;
- "Necessarily, P and Q" entails "Necessarily, P";
- whenever all Fs are Gs and all Gs are Fs, the set of Fs equals the set of Gs;
- the liar sentence is both true and not true;
- the conditional probability P(A|B) is the probability of the conditional "if B then A";
- it is rational to open only one box in Newcomb's problem;
- switching the door makes no difference in the Monty Hall problem;
- propositions are not classes;
- people are not swarms of little particles;
- a closed box containing a duck weighs less when the duck inside the box flies;
- spacetime is Euclidean;
- there is a God constantly interfering with our world.
They are wrong. All that is false.
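The Monty Hall item, at least, can be checked by brute force. Here is a minimal simulation (the door numbering and the host's deterministic choice are incidental; they don't affect the result):

```python
import random

def monty_hall(switch, trials=100_000):
    """Fraction of wins for a fixed stay/switch strategy over many games."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```

Staying wins the car in about a third of the runs, switching in about two thirds, so switching very much makes a difference.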
If a certain cognitive faculty produces lots of false judgements in lots of people, we should not trust our own application of that faculty. So if there were a faculty of intuition that produced all the false judgements just mentioned (and comparatively few true ones), we should be cautious about our own intuition faculty.
Fortunately, there is no such intuition faculty. Those judgements are normally the result of a great variety of cognitive processes.
In "Philosophical 'Intuitions' and Scepticism about Judgement", Timothy Williamson argues that all intuitions are beliefs and thus can be defended against skepticism by some principle of intentionality (the 'knowledge-maximization principle') which guarantees that beliefs are mostly true. It seems to me (and to Christian, with whom I discussed all this last weekend) that even if the knowledge-maximization principle is correct (which it probably isn't, though it's hard to refute a principle that vague), it does not guarantee that most beliefs are true: it might be that even when knowledge is maximized, most of most people's beliefs remain false, because people are opinionated and generally not in a position to know very much. Anyway, even if we had a guarantee that most beliefs are true, this wouldn't do much to reassure us about all our intuitions.
For instance, suppose, as seems true, that humans are very bad at complex probability judgements: people regularly make wildly inaccurate probability estimates, but are nevertheless often very confident about them. This shows, I believe, that we should not trust our intuitions about probabilities in complex cases. What does it help if people have generally true beliefs about, say, the current weather and the relative location of their body parts? As I mentioned above, our judgements (and beliefs) are produced by different cognitive faculties, so we must look at each of these faculties to find out whether the corresponding judgements are to be trusted.
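The conditional-probability item on my list is a case in point. To see how far apart the two readings can be, take a single fair die roll, with B = "the roll is even" and A = "the roll is six" (the example is just an illustration, and "if B then A" is read as the material conditional):

```python
from fractions import Fraction

# Sample space: one fair die roll.
outcomes = set(range(1, 7))
B = {2, 4, 6}   # "the roll is even"
A = {6}         # "the roll is six"

def p(event):
    """Probability of an event under the uniform distribution."""
    return Fraction(len(event), len(outcomes))

cond_prob = p(A & B) / p(B)          # P(A | B)
material = p((outcomes - B) | A)     # P(not-B or A): "if B then A", read materially

print(cond_prob)  # 1/3
print(material)   # 2/3
```

The two quantities come apart (1/3 versus 2/3), even in this very simple case; and by Lewis's triviality results, no other conditional fares systematically better.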
I think one should also distinguish between contingent and non-contingent judgements. (Williamson mixes them together as well.) Intuitions about the duck in the box or the geometry of spacetime concern contingent matters: things could be either way. It is a task for physics to find out which way they are. Here reliability means that things mostly are the way our intuitions say they are, and not any other way.
Not so for judgements about, say, logic or set theory. No faculty of any kind is needed to find out which way things are in logic and set theory, because here there is nothing to find out. There are no alternatives to exclude; there is no way for Frege's Axiom V (roughly, item 4 in my list) to be true.
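Spelled out, the standard route to this conclusion is Russell's paradox, assuming (as item 4 seems to) that every predicate has a set of its instances:

```latex
% Naive comprehension: every predicate F determines a set of the Fs.
\exists y\, \forall x\, (x \in y \leftrightarrow Fx)
% Let F be "x \notin x", and call the resulting set R:
\forall x\, (x \in R \leftrightarrow x \notin x)
% Instantiating x with R itself:
R \in R \leftrightarrow R \notin R
```

The last line is a contradiction, so no interpretation of "set" and "element" makes the principle true.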
When someone judges a subtle contradiction to be true, she is usually mistaken about the meaning of a certain sentence. If you judge item 4 to be true, you don't take yourself to live in one of the worlds where it is true. (There are none.) Rather, you fail to realize what the sentence says. Perhaps you mistake the biconditional for a simple (right-to-left) conditional.
The same holds for analytical truths. If GC is a sufficiently rich description of a Gettier case, and if it is true that in that case the subject doesn't know, then "if GC, the subject doesn't know" is analytically true. (For if its truth depends on empirical facts E, the description just wasn't rich enough: we should have taken GC+E as GC.) So someone who misclassifies the case either misunderstands the description or her own classification. Perhaps she takes "knowledge" to mean "true justified belief". (If she lives in a community where "knowledge" actually means "true justified belief", of course she doesn't misclassify the case at all.)
Just as we have several cognitive faculties for finding out contingent matters of fact, we have many cognitive faculties that help us find the truth conditions of sentences. For Gettier-like cases and questions of de re modality, we usually imagine the described situation. Within limits, our faculty of applying simple predicates and names to actual situations also works quite well for merely imagined situations. (But that's contingent: there could be aliens who are disposed to classify all and only the F-things as "F", thereby making "F" express F-ness, but who in imagination always classify F-things as "not F".) Estimating probabilities, judging mereological principles and proving theorems in set theory do not work that way. And even though our untrained faculty of applying concepts of probability to complex cases (imagined or real) does not work very well, we have developed techniques for arriving at more reliable judgements. That's even clearer for set theory, where the official way of forming judgements is to prove them from simple axioms.
The axioms of a theory often define its theoretical terms, so we can be very confident that they -- or at least their Carnap sentences -- express a necessary proposition. Frege's axioms for arithmetic illustrate, though, that even here error is possible. The Principal Principle perhaps presents a similar case: just as we might be inclined to say that nothing deserves to be called a "set" unless it satisfies item 4 in my list, we might be inclined to say that nothing deserves to be called "objective chance" unless it satisfies the PP. It turns out that in both cases the principles are impossible to satisfy, though there are things that satisfy them in all ordinary cases. Then we might take those things as our sets and chances. Others may conclude that there are no sets or chances after all. Our linguistic conventions might leave open what the correct response is here.
They leave open a lot, I believe. Among other things, they also leave open whether XYZ on Twin Earth is water and whether propositions are sets. Since these are non-contingent matters, if they are not settled by linguistic convention, they are not settled at all. When, for the sake of a nice overall theory, we endorse one of the open alternatives -- identifying propositions with sets, say --, that's not a substantial metaphysical hypothesis, based on a special cognitive faculty, but a terminological decision.
Okay, I'll bite. What's wrong with "Necessarily, P and Q" entailing "Necessarily, P"? I agree that the rest are false, except for the one about propositions, and I can see why someone would disagree.