Luminosity and Infallibility

Tim Williamson argues that no interesting conditions are such that if they obtain, then one is in a position to know that they obtain. I'll try to show that his argument fails for all conditions for which one can only non-inferentially believe that they obtain if they really do obtain. It seems to me that many interesting conditions -- probably including feeling cold and knowing that one feels cold -- are of this kind. I haven't checked the secondary literature, so what I'm going to say is probably old. Anyway, here goes.

Let's assume this plausible-sounding principle:

Rel) If one truly believes p but would still believe p under slightly different conditions where p is not the case, then one does not know p.

Now let a_1 be a situation in which one feels very cold, and suppose it slowly turns, via a_2, a_3, etc., into a situation a_n in which one doesn't feel cold at all. Williamson argues that because of (Rel),

1) if at some a_i one knows that one feels cold, then at a_i+1 one still feels cold.

For suppose one knows that one feels cold at a_i. Then one believes that one feels cold at a_i; and since a_i and a_i+1 are virtually indistinguishable, one still believes (to virtually the same degree) that one feels cold at a_i+1. By (Rel), this belief must be true, for otherwise one would not know at a_i. Hence at a_i+1, one feels cold.

This, Williamson argues, shows that feeling cold is not luminous, that is, it is not the case that

2) Whenever one feels cold, one knows that one feels cold.

For since one clearly knows that one feels cold at a_1, (1) and (2) together entail that one feels cold at a_n, which by hypothesis is false.
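To make the entailment explicit, here is the argument in schematic form (the abbreviations C_i and K_i are mine, not Williamson's):

% Schematic reconstruction of the reductio (needs amsmath/amssymb).
%   C_i : one feels cold at a_i
%   K_i : at a_i one knows that one feels cold
\begin{align*}
\text{(1)}\quad & K_i \rightarrow C_{i+1} && \text{for each } i < n\\
\text{(2)}\quad & C_i \rightarrow K_i && \text{for each } i \le n\\
& K_1 && \text{one clearly knows at } a_1\\
\therefore\quad & C_2,\ K_2,\ C_3,\ K_3,\ \dots,\ C_n && \text{alternating (1) and (2)}\\
& \neg C_n && \text{by hypothesis: contradiction}
\end{align*}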

At first sight, Williamson's argument resembles two Sorites arguments for the same conclusion, based, respectively, on the principles

Sor1) If at some a_i one feels cold, then at a_i+1 one still feels cold.
Sor2) If at some a_i one knows that one feels cold, then at a_i+1 one still knows that one feels cold.

But as Williamson shows, his argument is not of this kind. Since I'll need this later on, let me quickly explain how the Sorites arguments might be blocked. One way is to assume that there is a sharp boundary between feeling cold and not feeling cold, and between knowing that one feels cold and not knowing that one feels cold, so that (Sor1) and (Sor2) are not true for all a_i. Another (better) answer is that even though there are no sharp boundaries, (Sor1) and (Sor2) are false because statements involving vague terms are true only if they are true on all sharpenings of their vagueness. Yet another (even better) reply is that (Sor1) and (Sor2) are true because they are true on almost all sharpenings, but that Modus Ponens is not truth-preserving for vague sentences since it doesn't preserve truth-on-almost-all-sharpenings.
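Here is a toy illustration of the third reply; the numbers are made up purely for illustration (again writing C_i for "one feels cold at a_i"):

% Toy model: suppose the series runs from a_1 to a_1001 and there are
% 1000 admissible sharpenings of "feeling cold", each drawing the
% cut-off after a different a_i (i = 1, ..., 1000). Each conditional
% C_i -> C_{i+1} is then false on exactly one sharpening, i.e. true on
% 999 of the 1000 ("almost all"); yet chaining them by Modus Ponens
% from C_1 (true on every sharpening) yields C_1001, which is true on
% no sharpening.
\[
\underbrace{C_1,\quad C_1 \to C_2,\quad \dots,\quad C_{1000} \to C_{1001}}_{\text{each true on at least 999 of the 1000 sharpenings}}
\qquad\text{but}\qquad
\underbrace{C_{1001}}_{\text{true on none}}
\]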

Unlike the Sorites arguments, Williamson's argument still goes through on any sharpening of the relevant terms: suppose on some sharpening of "feeling cold", one feels cold at a_i, but not at a_i+1. Then it seems that one cannot know that one feels cold at a_i, as otherwise one would have the same belief in a very similar situation a_i+1 where that belief is false, contradicting (Rel).

However, Williamson's argument presupposes that at a_i+1, one still believes that one feels cold. But suppose the concepts of feeling cold and (non-inferentially) believing that one feels cold are linked in such a way that all their sharpenings exactly coincide. That is, suppose feeling cold beliefs satisfy this weak form of infallibility:

Inf) If one non-inferentially believes that p, then p.

Then (1) no longer follows from (Rel): on any sharpening of the relevant terms, there might well be a situation a_i such that at a_i one knows that one feels cold even though at a_i+1 one does not feel cold and thus, by (Inf), does not believe that one feels cold. So (Rel) holds, but (1) fails.
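Spelled out for a single sharpening s whose cut-off falls between a_i and a_i+1 (the subscripted abbreviations are mine):

% Cold_s(a) : one feels cold at a, as evaluated on sharpening s
% Bel_s(a)  : at a one non-inferentially believes that one feels cold, on s
\begin{align*}
& \mathit{Cold}_s(a_i) \wedge \neg \mathit{Cold}_s(a_{i+1}) && \text{where } s \text{ draws the line}\\
\Rightarrow\quad & \neg \mathit{Bel}_s(a_{i+1}) && \text{by (Inf)}\\
\Rightarrow\quad & \text{(Rel)'s antecedent fails for } a_i && \text{no nearby false belief that one feels cold}
\end{align*}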

Williamson might try to block this way out by claiming that believing that one feels cold, unlike feeling cold, is not vague, but rather sharp and gradual: there is no sharpening on which one believes that one feels cold at a_i and does not believe it at a_i+1; instead, on any sharpening, at a_i one believes to degree d that one feels cold, and at a_i+1 to degree d-e.

But that doesn't work. First, if "feeling cold" is vague, then "believing that one feels cold" must also be vague, as it non-trivially operates on the vague condition. (Compare ambiguity: since "Fred went to the bank" is ambiguous, "Jones believes that Fred went to the bank" is also ambiguous.) The degree to which one believes that one feels cold at a_i depends on the sharpening of "feeling cold". We can still block Williamson's argument by replacing (Inf) with

Inf') If on a certain sharpening of "feeling cold", one feels cold at a_i but not at a_i+1, then on this sharpening, one's degree of belief at a_i that one feels cold is considerably higher than one's degree of belief at a_i+1.

To motivate (Inf'), consider a somewhat sharper version of feeling cold, say feeling cold or at least a little chilly. I assume that some borderline case of feeling cold is not a borderline case of feeling cold or at least a little chilly, whereas every borderline case of the latter is also one of the former. (In this sense, the second concept is sharper.) Now suppose at a_m, it's just beginning to be warm enough that your degree of belief that you feel cold is no longer close to 1. Then arguably, your degree of belief that you feel cold or at least a little chilly is still close to 1. At the other end, when at a_n you've completely stopped believing that you feel cold, you've also stopped believing that you feel cold or at least a little chilly. So the more we sharpen feeling cold, the steeper the transition from high degree of belief to low degree of belief. Hence in the limit of a perfectly sharp concept, the degree of belief drops sharply, just as (Inf') says.
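Going along with the degrees-of-belief picture for the moment, the claimed pattern might look something like this; the credences (and the intermediate stages a_m+5 etc.) are invented purely for illustration:

% Invented credences illustrating the claim: the sharper the concept,
% the narrower and steeper the region over which the degree of belief drops.
\[
\begin{array}{l|ccccc}
 & a_m & a_{m+5} & a_{m+10} & a_{m+15} & a_n \\ \hline
\text{cr(I feel cold)} & 0.9 & 0.7 & 0.5 & 0.3 & 0.05 \\
\text{cr(I feel cold or at least a little chilly)} & 1 & 0.95 & 0.8 & 0.3 & 0.05 \\
\text{perfectly sharp limit} & 1 & 1 & 1 & 0 & 0
\end{array}
\]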

Second, I think all this talk about decreasing degrees of belief is in fact wrong: when I'm asked whether I'm feeling cold and it's only a little chilly, I hesitate, but not because I'm uncertain about my own feelings. It's not that there is this one proposition -- that I'm feeling cold -- of which I'm not quite sure whether it is true, in the way I'm not quite sure whether Cambodia borders Laos. My hesitation is more like my hesitation when asked, out of the blue, whether New York is bigger than London, where the answer depends on what is meant by "bigger": New York has more inhabitants, but London covers a larger area (I think). It would be at least very misleading to say that I believe to degree 0.5 that New York is bigger than London. Rather, on one precisification, I am quite sure that London is bigger; on another precisification, I am quite sure that New York is bigger. Likewise, if I'm feeling only slightly chilly, I don't really believe to degree 0.5 that I'm feeling cold. Rather, on one precisification, I'm sure that I feel cold, whereas on another I'm sure that I do not. Hence the resort to degrees of belief doesn't work.

Comments

# on 26 July 2006, 11:40

Some of this looks like the kind of response Steup offers to Williamson in 'Are Mental States Luminous?', so you might want to check that out. The volume won't be out for a while, but there's a draft up on Steup's site: http://web.stcloudstate.edu/msteup/Luminosity.pdf

# on 26 July 2006, 17:28

I like the degrees of belief point. I tried to argue for something like Inf) in "Luminous Margins" using a somewhat different approach. (My thought was that it might be that the brain state that constituted the feeling cold might just be the brain state that constituted the believing one feels cold.) But what you say seems to require less speculative neuroscience than my approach!

# on 26 July 2006, 22:09

Thanks both of you! I only had a quick look at the papers for now, but it seems they do go in the same direction. If I had bothered to justify (Inf) at all, I might have resorted to speculative neuroscience, too. Though what I had in mind was more something like a conceptual (rather than empirical) connection between certain mental states and beliefs about them.

# trackback from on 17 August 2006, 23:08

In July, I tried to show that Williamson's argument against luminosity fails for states that satisfy a certain infallibility condition. I now think that (for basical...
