Luminosity Everywhere

In July, I tried to show that Williamson's argument against luminosity fails for states that satisfy a certain infallibility condition. I now think that (for basically the same reason) Williamson's argument fails for any state whatsoever, including knowing something and being such that it's raining outside. (The latter of course isn't luminous, but this is not established by Williamson's argument.)


<Update> (2006-08-18) I shouldn't post that late at night. I've updated what follows so that it makes at least superficial sense. The old version is appended below. </Update>

Williamson assumes the following safety principle (that's Williamson's "(9)" on p.128 of Knowledge and Its Limits):

Safety: For all cases A and B, if B is close to A and in A one knows that C obtains, then in B one does not falsely believe that C obtains.

The Safety principle demands that a case of knowledge must not be surrounded by cases of false belief. Williamson concludes that knowledge must be surrounded by true belief, and this destroys luminosity. But there is another possibility: knowledge might be surrounded by disbelief; where knowledge breaks off in a Williamsonian sequence of cases, belief breaks off, too. What Williamson actually needs is something like this:

Padding: For all cases A and B, if B is close to A and in A one knows that C obtains, then in B one truly believes that C obtains.

But this doesn't sound very convincing. Admittedly, it isn't obviously false, but if one has to choose between luminosity and Padding, I think it is often sensible to reject Padding.
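To make the gap between the two principles explicit, they might be formalized like this (my notation, not Williamson's), writing K_A C for "in A one knows that C obtains", Bel_A C for "in A one believes that C obtains", and C_A for "C obtains in A":

\[
\begin{aligned}
\textit{Safety:} \quad & \mathit{close}(B,A) \wedge K_A\,C \;\rightarrow\; \neg(\mathit{Bel}_B\,C \wedge \neg C_B)\\
\textit{Padding:} \quad & \mathit{close}(B,A) \wedge K_A\,C \;\rightarrow\; \mathit{Bel}_B\,C \wedge C_B
\end{aligned}
\]

Safety's consequent is equivalent to the disjunction \(\neg \mathit{Bel}_B\,C \vee C_B\): a close case may contain true belief, but disbelief satisfies it just as well. Padding strengthens the consequent to a conjunction and thereby rules the disbelief option out.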

I'll illustrate with Williamson's argument against KK. To simplify the discussion, I will assume (or pretend), first, that all our words have sharp boundaries, and second, that belief and knowledge are all-or-nothing properties and not quantities. I've argued in the other post that both assumptions are inessential to the argument. (Williamson would agree about the first, but not about the second.)

Suppose you're looking at a tree that is 666 inches tall, but you don't know that. You know that the tree is not 6 inches tall, of course, but you don't know that it is not 665 inches tall. Somewhere between these cases it is unclear whether you should count as knowing that the tree is not so-and-so tall. Since our language is perfectly sharp, there is some determinate number n such that you know that the tree is not n inches tall, but you do not know that it is not n+1 inches tall. Let's assume n = 357.

That is situation 1. For situation 2, assume you're looking at a very similar tree in very similar surroundings; this tree is only 665 inches tall. Again, there is some number n such that you know that the tree is not n inches tall, but you do not know that it is not n+1 inches tall. This time, let's assume, n = 356. (If the tree gets smaller, n should clearly get smaller, too; it doesn't matter by how much.)

By assumption, you know in situation 1 that the tree in front of you is not 357 inches tall. Situation 2 is a very similar situation, and your belief that the tree is not 357 inches tall is still true there, so this first-order knowledge is not threatened by Safety. But by the KK principle, you also know that you know that the tree is not 357 inches tall. And situation 2 is a very similar situation where that second-order belief is false: since n = 356 there, you do not know that the tree is not 357 inches tall. Hence, Williamson concludes, you don't know that you know that the tree is not 357 inches tall in situation 1; the KK principle is refuted.

The crucial premise is that you still have the relevant belief in situation 2: that you still believe that you know that the tree is not 357 inches tall. This would mean that your alleged second-order knowledge is surrounded by false belief. But it could also be surrounded by disbelief.
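Schematically, in my notation rather than Williamson's (subscripts for situations, K for knowledge, Bel for belief, and T for the proposition that the tree is not 357 inches tall):

\[
\begin{array}{lll}
1. & K_1(K\,T) & \text{assumption, by KK}\\
2. & \mathit{Bel}_2(K\,T) & \text{the crucial premise}\\
3. & \neg K_2\,T & \text{since } n = 356 \text{ in situation 2}\\
4. & \neg K_1(K\,T) & \text{from 2 and 3 by Safety, as situation 2 is close; contradicts 1}
\end{array}
\]

Padding would deliver premise 2 directly. Without Padding, one can instead say that in situation 2 you no longer believe that you know T, and the reductio never gets started.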

Let's assume you not only know that the tree is not 357 inches tall in situation 1, but also believe that you know that. You certainly do believe this in situation -1000, where the tree is 1667 inches tall, and you don't believe it in situation 300, where the tree is 367 inches tall. Somewhere in between, you switch from belief to disbelief. Williamson assumes that this switch cannot take place at the same point at which you switch from knowing that you know that the tree is not 357 inches tall to not knowing this: if you have both the knowledge and the belief in situation 1, you are supposed to still have the belief in situation 2, where you lack the knowledge. This is the Padding assumption. If it is false, the two switch points may well coincide.
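To illustrate that the two switch points can coincide without violating Safety, here is a toy model. It is entirely my own construction (heights, cutoffs and all), not anything in Williamson or in the text above:

```python
# Toy model of the series of situations: situation i contains a tree that
# is 667 - i inches tall, and the knowledge cutoff n of the text is
# 358 - i, so n = 357 in situation 1 and n = 356 in situation 2.

def knows_tree_is_not(i, h):
    """In situation i, one knows that the tree is not h inches tall."""
    return h <= 358 - i

def believes_one_knows(i, h):
    """Second-order belief; by stipulation (the denial of Padding) it
    breaks off at exactly the point where the knowledge breaks off."""
    return knows_tree_is_not(i, h)

# Safety check for the second-order state in situation 1: is there a close
# case where one *falsely* believes that one knows the tree is not 357
# inches tall?
h = 357
for close_case in (0, 2):  # the neighbours of situation 1
    false_belief = (believes_one_knows(close_case, h)
                    and not knows_tree_is_not(close_case, h))
    print(f"situation {close_case}: false second-order belief? {false_belief}")
```

Both lines print False: when the two switch points coincide, no close case contains a false second-order belief, so Safety alone gives Williamson no way to deny that you know that you know.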

(Williamson's argument against KK looks rather different from how I've presented it. His argument starts out from the innocent-sounding (1): for any i, you know that if the tree is i+1 inches tall, then you don't know that the tree is not i inches tall. Take i = 357. The claim is that (in situation 1) you know that if the tree is 358 inches tall, then you don't know that the tree is not 357 inches tall. "if ... then ..." is the material conditional here. So what you're supposed to know is that either the tree is not 358 inches tall or you don't know that the tree is not 357 inches tall. But, the argument continues, you know that the tree is not 357 inches tall; so if you know what you know (the KK principle), you can conclude that the tree is not 358 inches tall, which, by assumption, you do not know.
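Spelled out as a derivation (my reconstruction; the two assumptions restate n = 357 for situation 1):

\[
\begin{array}{lll}
1. & K(\mathit{tree} \neq 357) & \text{assumption}\\
2. & \neg K(\mathit{tree} \neq 358) & \text{assumption}\\
3. & K\big(\mathit{tree} = 358 \rightarrow \neg K(\mathit{tree} \neq 357)\big) & \text{instance of (1)}\\
4. & K\,K(\mathit{tree} \neq 357) & \text{from 1 by KK}\\
5. & K(\mathit{tree} \neq 358) & \text{from 3 and 4 by closure; contradicts 2}
\end{array}
\]

So given (1), KK leads from the knowledge you were assumed to have to the knowledge you were assumed to lack.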

A friend of KK should reject (1): your knowledge doesn't rule out that you know that the tree is not 357 inches tall (after all, you do know that); it doesn't rule out that the tree is 358 inches tall; why should it rule out the conjunction of these two claims? Because otherwise, Williamson argues, your knowledge wouldn't be safe. But why? Surely, if the tree really were only 358 inches tall, your belief that it is not 357 inches tall would be just a lucky guess. But that's irrelevant, as the relevant conditional here is material, not counterfactual. Your knowledge also isn't threatened by close possibilities where you falsely believe that the tree is not 357 inches tall: as the tree is in fact 666 inches tall, all such possibilities are quite remote. No, your knowledge is supposed to be unsafe because there are nearby possibilities where you falsely believe that you know that the tree is not 357 inches tall. But if Padding is false, there may well be no such possibilities.)


Here is the old version that made even less sense.

Comments

# on 18 August 2006, 17:53

Isn't it just a stipulated feature of the series of cases that one is almost as confident in case n+1 that the relevant condition obtains as one was in case n (for example, the sixth sentence of the first full paragraph on p.97)?

In that case does Williamson really have to worry about a possible sudden change from belief to disbelief? He just needs to describe the Sorites series of cases so that his stipulation that one's confidence won't sharply diminish is plausible.

Steup has some interesting things to say on this issue in the paper I linked to before.

# on 20 August 2006, 20:10

hi, yes, I'm presupposing here that there is a sudden change from belief to disbelief, relying on the reasoning of the previous entry to show that this makes no difference. But I really should write up all this again somewhat more transparently.

I don't think Steup's argument that there are plausibly no unrecognizable changes in one's mental states suffices to undermine Williamson's argument. I suppose Williamson could agree that whenever one feels a little less cold, one is aware of this, and hence one's confidence in the claim that one feels cold decreases by a small amount. The argument for (I_i) (Steup's (R)) would then go like this:

Suppose one knows that one feels cold in A_i. Then one believes to a certain high degree d that one feels cold in A_i. Then one believes to a very similar degree d-e that one feels cold in A_{i+1}. But safety requires that if one knows that p in A_i, then p is true in every similar case in which one believes that p to a similar degree as in A_i. Hence one feels cold in A_{i+1}.
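The degree-sensitive version of safety that this reconstruction relies on might be put as follows (my formalization, with deg_X(p) for one's degree of belief in p in case X and ≈ for "differs at most slightly from"):

\[
K_{A_i}\,p \;\wedge\; \mathit{close}(A_{i+1},A_i) \;\wedge\; \deg_{A_{i+1}}(p) \approx \deg_{A_i}(p) \;\rightarrow\; p \text{ is true in } A_{i+1}
\]

With p the proposition that one feels cold, the degrees are d in A_i and d-e in A_{i+1}; since d-e ≈ d, the antecedent is satisfied, and it follows that one feels cold in A_{i+1}.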

Some of his remarks in ch.5 support this reading, see esp. p.124: "one avoids false belief reliably in A iff one avoids false belief in every case similar to A. When the danger [of false belief] is a matter of degree, reliability involves a trade-off between the degree to which the danger is realized and the closeness of the case in which it is realized. [...] The argument of section 4.3 involved such a trade-off, the closeness of the case A_{i+1} compensating for the slightly lower degree of belief in A_{i+1}."
