Mistaken Intuitions

Some people intuit that

  • the subject in a Gettier case has knowledge;
  • Saul Kripke has his parents essentially;
  • "Necessarily, P and Q" entails "Necessarily, P";
  • whenever all Fs are Gs and all Gs are Fs, the set of Fs equals the set of Gs;
  • the liar sentence is both true and not true;
  • the conditional probability P(A|B) is the probability of the conditional "if B then A";
  • it is rational to open only one box in Newcomb's problem;
  • switching the door makes no difference in the Monty Hall problem;
  • propositions are not classes;
  • people are not swarms of little particles;
  • a closed box containing a duck weighs less when the duck inside the box flies;
  • spacetime is Euclidean;
  • there is a God constantly interfering with our world.

They are wrong. All that is false.

If a certain cognitive faculty produces lots of false judgements in lots of people, we should not trust our own application of that faculty. So if there were a faculty of intuition that produced all the false judgements just mentioned (and comparatively few true ones), we should be cautious about our own intuition faculty.

Fortunately, there is no such intuition faculty. Those judgements are normally the result of a great variety of cognitive processes.

In "Philosophical 'Intuitions' and Scepticism about Judgement", Timothy Williamson argues that all intuitions are beliefs and thus can be defended against skepticism by some principle of intentionality (the 'knowledge-maximization principle') which guarantees that beliefs are mostly true. It seems to me (and to Christian, with whom I've discussed all this last weekend) that even if the knowledge-maximization principle is correct (which it probably isn't, though it's hard to refute a principle that vague), it does not guarantee that most beliefs are true: might be that even when knowledge is maximized, most of most people's beliefs remain false, because people are opinionated and generelly not in the position to know very much. Anyway, even if we had a guarantee that most beliefs are true, this doesn't help much to assure us of all our intuitions.

For instance, suppose, as seems true, that humans are very bad at complex probability judgements: people regularly make wildly false estimates about probabilities, but are nevertheless often very confident about these estimates. This shows, I believe, that we should not trust our intuitions about probabilities in complex cases. What does it help if people have generally true beliefs about, say, the current weather and the relative location of their body parts? As I mentioned above, our judgements (and beliefs) are produced by different cognitive faculties, so we must look at each of these faculties to find out if the corresponding judgements are to be trusted.
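
To illustrate how far untrained probability intuitions can go wrong, here is a minimal simulation of the Monty Hall case from the list above. This is just my own sketch of the standard setup (the door numbers, trial count and the host's tie-breaking rule are arbitrary choices):

    import random

    def play(switch, trials=100000):
        """Play the Monty Hall game many times; return the frequency of winning the car."""
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)        # the car is behind one of three doors
            pick = random.randrange(3)       # the contestant picks a door at random
            # the host opens some door that is neither the contestant's pick nor the car
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(play(switch=False))   # comes out near 1/3
    print(play(switch=True))    # comes out near 2/3

Switching wins exactly when the first pick was wrong, which happens two thirds of the time; that is what the intuition that switching 'makes no difference' overlooks.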

I think one should also distinguish between contingent and non-contingent judgements. (Williamson mixes them together as well.) Intuitions about the duck in the box or the geometry of spacetime concern contingent matters: things could be either way. It is a task for physics to find out which way they are. Here reliability means that things mostly are the way our intuitions say they are, and not any other way.

Not so for judgements about, say, logic or set theory. No faculty of any kind is needed to find out which way things are in logic and set theory, because here there is nothing to find out. There are no alternatives to exclude; there is no way for Frege's Axiom V (roughly, item 4 in my list) to be true.
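
(For the record, here is the familiar Russell-style reason why there is no such way, in my own rough notation. Read item 4 so that every predicate F determines a set \{x : Fx\} of exactly the Fs, and

    \{x : Fx\} = \{x : Gx\} \leftrightarrow \forall x\,(Fx \leftrightarrow Gx).

Now let R = \{x : x \notin x\}. Then R \in R \leftrightarrow R \notin R, a contradiction.)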

When someone judges a subtle contradiction to be true, she is usually mistaken about the meaning of a certain sentence. If you judge item 4 to be true, you don't take yourself to live in one of the worlds where it is true. (There are none.) Rather, you fail to realize what the sentence says. Perhaps you mistake the biconditional for a simple (right-to-left) conditional.

The same holds for analytical truths. If GC is a sufficiently rich description of a Gettier case, and if it is true that in that case the subject doesn't know, then "if GC, the subject doesn't know" is analytically true. (For if its truth depends on empirical facts E, the description just wasn't rich enough: we should have taken GC+E as GC.) So someone who misclassifies the case either misunderstands the description or her own classification. Perhaps she takes "knowledge" to mean "true justified belief". (If she lives in a community where "knowledge" actually means "true justified belief", of course she doesn't misclassify the case at all.)

Just as we have several cognitive faculties to find out contingent matters of fact, we have many cognitive faculties to help us find the truth conditions of sentences. For Gettier-like cases and questions of de re modality, we usually imagine the described situation. Within limits, our faculty of applying simple predicates and names to actual situations also works quite well for merely imagined situations. (But that's contingent: there could be aliens who are disposed to classify all and only the F-things as "F", thereby making "F" express F-ness, but in imagination always classify F-things as "not F".) Estimating probabilities, judging mereological principles and proving theorems in set theory do not work that way. And even though our untrained faculty of applying concepts of probability to complex cases (imagined or real) does not work very well, we have created techniques to arrive at more reliable judgements. That's even clearer for set theory, where the official way of forming judgements is to prove them from simple axioms.

The axioms of a theory often define its theoretical terms, so we can be very confident that they -- or at least their Carnap sentences -- express the necessary proposition. Though Frege's axioms for arithmetic illustrate that even here, error is possible. The Principal Principle perhaps presents a similar case: just as we might be inclined to say that nothing deserves to be called "set" unless it satisfies item 4 in my list, we might be inclined to say that nothing deserves to be called "objective chance" unless it satisfies the PP. It turns out that in both cases, the principles are impossible to satisfy, but there are things that satisfy them in all ordinary cases. Then we might take those things as our sets and chances. Others may conclude that there are no sets or chances after all. Our linguistic conventions might leave open what the correct response is here.
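
(For reference, the Principal Principle in roughly Lewis's formulation, paraphrased by me: for any reasonable initial credence function C, any proposition A, any time t and number x, and any evidence E admissible at t,

    C(A \mid \langle \mathrm{ch}_t(A) = x \rangle \wedge E) = x.

That is, credence should defer to known chance, whatever one's admissible evidence.)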

They leave open a lot, I believe. Among other things, they also leave open whether XYZ on Twin Earth is water and whether propositions are sets. Since these are non-contingent matters, if they are not settled by linguistic convention, they are not settled at all. When, for the sake of a nice overall theory, we endorse one of the open alternatives -- identifying propositions with sets, say --, that's not a substantial metaphysical hypothesis, based on a special cognitive faculty, but a terminological decision.

Comments

# on 04 April 2005, 21:34

Okay, I'll bite. What's wrong with "Necessarily, P and Q" entailing "Necessarily, P"? I agree that the rest are false, except for the one about propositions, and I can see why someone would disagree.

# on 04 April 2005, 23:26

The trouble cases all involve sentences containing names, and none are really decisive. But there are a lot of them, and it seems to me that giving up the principle is probably the best idea. The principle certainly fails on Lewis's Counterpart Theory. (Cresswell mentions that in the recent AJP volume on Lewis; he cites Woollaston as his source.)

Here is an example. Suppose John has his ancestors essentially. So if John is the son of Mary, 1) "Necessarily, John is the son of Mary" is true, as is presumably 2) "Necessarily, John is the son of Mary and Mary has a child". But 3) "Necessarily, Mary has a child" may well be false: we haven't assumed that Mary has her offspring essentially.

One could reject (2), but since 4) "Necessarily, if John is the son of Mary, then Mary has a child" is true, this would (also) mean rejecting that "Necessarily P" and "Necessarily, if P then Q" entail "Necessarily Q".

One could reject (1), and indeed the assumption behind (1) is what all the trouble cases have in common: that "Necessarily, Phi(A,B)" is true iff at all worlds *where A and B exist*, they satisfy Phi. One could instead say that "Necessarily, Phi(A,B)" is true iff A and B exist at all worlds and everywhere satisfy Phi. But this would imply that every contingent entity is possibly non-self-identical, which doesn't seem better.
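
To make the existence-restricted reading concrete, here is a toy two-world model written up as a small script. The worlds, domains and predicate names are of course made up for illustration; nothing hangs on the details:

    # Toy model: "Necessarily, Phi(A, B)" is read as "Phi holds at every world
    # where A and B both exist" (the existence-restricted reading above).

    worlds = {
        "w1": {"exists": {"John", "Mary"}, "son_of": {("John", "Mary")}},
        "w2": {"exists": {"Mary"},         "son_of": set()},   # Mary exists childless here
    }

    def nec(phi, *names):
        """True iff phi holds at every world where all the named individuals exist."""
        return all(phi(w) for w in worlds.values()
                   if all(n in w["exists"] for n in names))

    son_jm = lambda w: ("John", "Mary") in w["son_of"]
    mary_has_child = lambda w: any(m == "Mary" for (_, m) in w["son_of"])

    print(nec(son_jm, "John", "Mary"))                                     # (1) True
    print(nec(lambda w: son_jm(w) and mary_has_child(w), "John", "Mary"))  # (2) True
    print(nec(mary_has_child, "Mary"))                                     # (3) False

(1) and (2) only quantify over worlds where both John and Mary exist, while (3) also looks at the world where Mary exists without a child; so (1) and (2) come out true and (3) false.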

But on second thought, I guess I should withdraw the two examples you mention, given what I say at the end of the entry: Our linguistic conventions probably leave it open exactly how "necessarily" behaves in the trouble cases. So it is partly a matter of stipulation to reject (or accept) the principle.

# on 06 April 2005, 05:07

I disagree about the Newcomb case... It's at least debatable which choice is rational. There are a lot of really good decision theorists (Teddy Seidenfeld, for instance) who argue for choosing one box.

Benny

# on 06 April 2005, 10:49

Yes. I wanted to add a few really hard cases, where it's not at all clear that either party a) makes a factual error or b) is mistaken about the meaning of relevant terms or c) uses these terms differently from the other party. Newcomb's problem is one of the best examples of this (no matter which side you're on).

# on 07 April 2005, 17:17

Maybe I'm being a bit thick, but I don't quite follow the response to Dan's question. Aren't we mixing up our de dictos with our de re's?

I'm always troubled by the role that intuitions are meant to play in philosophy. I guess my own view is that it's okay to postulate mass error on behalf of the folk, or to go against intuitions, only if you have a good story to tell about why we've fallen into error.

Lastly, do people really have intuitions about whether propositions are classes? Philosophers might say it's an intuition, fair enough, but I think this is a case of hyperbole more than anything else.

# on 07 April 2005, 21:16

I don't see that I've mixed de dicto and de re in any illegitimate way. Do you want to elaborate?

Re propositions, Plantinga vehemently insists that propositions aren't classes in "Two Concepts of Modality", on the ground that classes can't be true or false, nor believed, etc. He doesn't offer any arguments for these claims. So I call them intuitions. Maybe I'm using the term loosely.

# on 12 April 2005, 06:24

I think W may agree with your points about knowledge-maximization, as he says only that it "grounds a mild tendency" for beliefs to be true. Maximizing knowledge is not the same as maximizing true belief.

I was curious why you say the principle is probably false. (Agreed, it's vague. How about as stated on p. 140: a causal connection to an object is a channel for reference to it only if it is a channel for the acquisition of knowledge about it?)

# on 12 April 2005, 11:16

Hi, ok, but I don't see how a 'mild tendency' for beliefs to be true helps in Williamson's situation, if that tendency is compatible with the overwhelming majority of all beliefs being false. Doesn't he want to assure us (a priori) that our beliefs are mostly true?

The main reason why I suspect the principle to be false is that people sometimes do not know even when they are in a position to know: the philosopher who denies that there are mountains doesn't know that there are mountains; the student who sees the goldfinch doesn't know that it is a goldfinch (having forgotten the lessons), etc. The general point is that there are not just input constraints (causal links) but also output constraints (appropriate behaviour) on content attributions. Perhaps Williamson accepts these further constraints as well. If so, I wonder what work is left for the knowledge-maximisation principle.

# on 12 April 2005, 15:39

Okay, let me try again with the de re/de dicto thing.

We assume that John has his parents essentially. So
"L(John is the son of Mary)" is true. Then you move from this to
"L(John is the son of Mary and Mary has a child)" is true, which you then say doesn't imply "L(Mary has a child)". I guess I don't get the move from the first claim to the second. The necessity operator is initially de re with respect to John. So the claim we can move to is surely "(L(John is the son of Mary)) and Mary has a son". But here the "Mary has a son" conjunct has no modal prefix. Maybe I'm confused here. Secondly, the contested principle is fine if we are only concerned with de dicto modal claims. It's only with de re claims that the problems arise, right?

On the Plantinga point, I was never convinced by his claim that it is obvious that sets are never true. I think that is met with the classic response of "well, that sets aren't true just isn't the kind of thing which is obvious". Plus, ontological identifications are rarely obvious. It's not obvious that water is (partially) composed of hydrogen, but it's true.

# on 12 April 2005, 16:13

Yes, I guess the principle is fine for de dicto sentences. Though I don't really understand the de re/de dicto distinction if it's not meant to involve quantifier scopes.

I wonder which step you reject:

1. The logical form of "Necessarily, John is the son of Mary" is "L(Sjm)".

2. The logical form of "Necessarily, John is the son of Mary and Mary has a child" is "L(Sjm & Ex(Cxm))".

3. Necessarily, if John is the son of Mary then Mary has a child.

4. The logical form of (3) is "L(Sjm -> Ex(Cxm))".

5. Whenever L(A->B) and L(A), then L(B).

I guess you either reject step 1 or step 5. Rejecting step 1 would probably mean that names can never occur in modal contexts, which indeed seems to block all counterexamples. Rejecting 5 seems not much better than rejecting L(A & B) -> L(A). Though 5 is strictly stronger. Maybe I should have used 5 as my example.

# on 12 April 2005, 17:19

Hmm... I guess I do reject step 1. Indeed, thinking about it, I take it that Lewis rejects it too, since I've always taken it that the language of CT is bereft of names: there is always some contextually salient description available to be plugged in.

# on 12 April 2005, 17:49

Ok. So what I claim is that there are *syntactical instances* of "Necessarily, P and Q" and "Necessarily P" in English of which the former does not entail the latter.

If we follow Lewis in rejecting names in CT, the relevant counterexamples will not have that same logical form. Indeed, if we follow Lewis, no sentence whatsoever will have the logical form "Necessarily P", because just as names are to be replaced by descriptions, "Necessarily" is to be replaced by a quantification over worlds.
