Luminosity Everywhere
In July, I tried to show that Williamson's argument against luminosity fails
for states that satisfy a certain infallibility condition. I now think that (for basically the same reason) Williamson's argument fails for any state whatsoever, including knowing something and being such that it's raining outside. (The latter of course isn't luminous, but this is not established by
Williamson's argument.)
<Update> (2006-08-18) I shouldn't post that late at night. I've updated what follows so that it makes at least superficial sense. The old version is appended below. </Update>
Williamson assumes the following safety principle (that's Williamson's "(9)" on p.128 of Knowledge and Its Limits):
Safety: For all cases A and B, if B is close to A and in A one
knows that C obtains, then in B one does not falsely believe that C
obtains.
The Safety principle demands that a case of knowledge must not be surrounded by cases of false belief. Williamson concludes that knowledge must be surrounded by true belief. This destroys luminosity. But there's another possibility: knowledge might be surrounded by disbelief; where knowledge breaks off in a Williamsonian sequence of cases, belief breaks off. What Williamson actually needs is something like this:
Padding: For all cases A and B, if B is close to A and in A one
knows that C obtains, then in B one truly believes that C obtains.
But this doesn't sound very convincing. Admittedly, it isn't obviously false, but if one has to choose between luminosity and Padding, I think it is often sensible to reject Padding.
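The difference between the two principles can be made explicit. Writing K_A(C) for "in A one knows that C obtains", B_A(C) for "in A one believes that C obtains", and T_A(C) for "C obtains in A" (this notation is my own, just for illustration):

```latex
% Safety: no close case of false belief
\forall A, B:\ \mathrm{close}(A,B) \wedge K_A(C) \rightarrow
  \neg\bigl(B_B(C) \wedge \neg T_B(C)\bigr)

% Padding: every close case is a case of true belief
\forall A, B:\ \mathrm{close}(A,B) \wedge K_A(C) \rightarrow
  B_B(C) \wedge T_B(C)
```

Safety can be satisfied in a close case B in two ways: by truly believing C there, or by not believing C there at all. Padding rules out the second option, and only with Padding does the anti-luminosity argument go through.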
I'll illustrate with Williamson's argument against KK. To simplify the discussion, I will assume (or pretend), first, that all our words have sharp
boundaries, and second, that belief and knowledge are all-or-nothing properties and not quantities. I've argued in the other post that both assumptions
are inessential to the argument. (Williamson would
agree about the first, but not about the second.)
Suppose you're looking at a tree that is 666 inches tall,
but you don't know that. You know that the tree is not 6 inches tall,
of course, but you don't know that it is not 665 inches
tall. Somewhere between these cases it is unclear whether you should
count as knowing that the tree is not so-and-so tall. Since our language is
perfectly sharp, there is some determinate number n such that
you know that the tree is not n inches tall, but you do not know that
it is not n+1 inches tall. Let's assume n = 357.
That is situation 1. For situation 2, assume you're looking at a
very similar tree, in very similar surroundings, that is only 665
inches tall. Again, there is some number n such that you know that the
tree is not n inches tall, but you do not know that it is not n+1
inches tall. This time, let's assume, n = 356. (If the tree gets smaller,
clearly n should get smaller, too; it doesn't matter to what extent.)
By assumption, you know in situation 1 that the tree in front of
you is not 357 inches tall. Situation 2 is a very similar situation,
but your belief is still true there, so your knowledge is not
threatened by Safety. But by the KK principle, you know that
you know that the tree is not 357 inches tall. And situation 2 is a
very similar situation where that belief is false. Hence,
Williamson concludes, you don't know that you know that the tree is
not 357 inches tall in situation 1; the KK principle is refuted.
The crucial premise is that you still have the relevant
belief in situation 2: that you still believe that you know
that the tree is not 357 inches tall. This would mean that your alleged
second-order knowledge is surrounded by false belief. But it could
also be surrounded by disbelief.
Let's assume you not only know that the tree is not 357 inches tall in situation 1, but also believe that you know that. You certainly do believe this in situation -1000, where the tree is 1666 inches tall, and you don't believe it in situation 300, where the tree is 366 inches tall. Somewhere in between, you switch from belief to disbelief. Williamson assumes that this switch cannot take place at the same point at which you switch from knowing that you know that the tree is not 357 inches tall to not knowing that you know that the tree is not 357 inches tall, so that if you have both the knowledge and the belief in situation 1, you still have the belief in situation 2, where you lack the knowledge. This is the Padding assumption. If it is false, the two points may well coincide.
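In terms of the sequence of situations, the point can be put like this. Let j_K be the last situation in which you know that you know that the tree is not 357 inches tall, and j_B the last situation in which you believe that you know this (the indices are mine, just for illustration):

```latex
% Padding: the second-order belief outlasts the second-order
% knowledge, so the last case of knowledge borders on a case
% of false belief:
j_B > j_K

% Without Padding the two cut-off points may coincide, and the
% second-order knowledge borders on disbelief, not false belief:
j_B = j_K
```

Only if j_B > j_K does Safety threaten your second-order knowledge in situation 1; if the cut-off points coincide, situation 2 is a case of disbelief and Safety is satisfied.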
(Williamson's argument against KK looks rather different from how I've
presented it. His argument starts out from the innocent-sounding (1):
for any i, you know that if the tree is i+1 inches tall, then you
don't know that the tree is not i inches tall. Take i = 356. The claim
is that (in situation 1) you know that if the tree is 357 inches tall,
then you don't know that the tree is not 356 inches tall. "if ... then
..." is the material conditional here. So what you're supposed to know
is that either the tree is not 357 inches tall or you don't know that
the tree is not 356 inches tall. But, the argument continues,
you know that the tree is not 356
inches tall; so if you know what you know (the KK principle), you can
conclude that the tree is not 357 inches tall, which, by assumption,
you do not know.
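Spelled out, with K for "you know that" and T_i for "the tree is i inches tall", the derivation runs as follows (assuming closure of knowledge under known material implication):

```latex
\begin{array}{lll}
1. & K(T_{357} \rightarrow \neg K\neg T_{356}) & \text{premise (1), with } i = 356\\
2. & K\neg T_{356}                             & \text{assumption about situation 1}\\
3. & KK\neg T_{356}                            & \text{from 2, by KK}\\
4. & K\neg T_{357}                             & \text{from 1 and 3, by closure}
\end{array}
```

But by assumption you do not know that the tree is not 357 inches tall, contradicting 4; so Williamson rejects KK.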
A friend of KK should reject (1): your knowledge rules out that
you know that the tree is not 356 inches tall; it doesn't rule out
that the tree is 357 inches tall; why should it rule out the
conjunction of these two claims? Because otherwise, Williamson argues,
your knowledge wouldn't be safe. But why? Surely, if the tree
really were only 357 inches tall, your belief that it is not
356 inches tall would be just a lucky guess. But that's irrelevant, as
the relevant conditional here is material, not counterfactual. Your
knowledge also isn't threatened by close possibilities
where you falsely believe that the tree is not 356 inches
tall. As the tree is 666 inches tall, all such possibilities are quite remote.
No, your knowledge is supposed to be unsafe
because there are nearby possibilities where you falsely believe that
you know that the tree is not 356 inches tall. But if Padding is false,
there may well be no such possibilities.)
Here is the old version that made even less sense.
In July, I tried to show that Williamson's argument against luminosity fails
for states that satisfy a certain infallibility condition. I now think that (for basically the same reason) Williamson's argument fails for any state whatsoever, including knowing something and being such that it's raining outside. (The latter of course isn't luminous, but this is not established by
Williamson's argument.)
The reason is this:
Suppose you acquire a belief that p by some reliable method or
process M, thereby obtaining knowledge that p. Then
there is no very similar situation where p is false and
where you nevertheless acquire the belief that
p via M. (Otherwise M isn't that reliable after all
and your belief in the original situation is just true by luck.)
Let's call this principle "Frederic Chopin". It probably isn't quite correct
as it stands, and one should say more about the standards that make situations "very
similar" to one another here. But anyway, it seems to me that something in
the vicinity of Frederic Chopin may well be true.
If that is so, then Williamson's argument fails.
I'll illustrate with two cases, the feeling cold case and the KK case. To simplify the discussion, I will make two bold assumptions. First,
I'll assume (or rather pretend) that all our words have sharp
boundaries. For instance, I'll assume that when you're feeling cold and then it gets
warmer very slowly, there is some determinate point at which you
switch from feeling cold to not feeling cold. Second, I'll
ignore degrees of belief and knowledge: I'll use "belief" and
"knowledge" as all-or-nothing terms (with sharp boundaries, by the
first assumption). I've argued in the other post that both assumptions
are inessential to the argument I'm going to make. (Williamson would
agree about the first, but not about the second.)
So here's how Frederic Chopin blocks Williamson's argument about
feeling cold.
Suppose you're in a situation (situation 1) very close to the
border of feeling cold: You're still feeling cold, but there is a very similar situation (situation 2) where it's just a little warmer and where you're no
longer feeling cold. Do you know that you're feeling cold in situation
1?
No, says Williamson, because knowledge requires safety from error:
Safety: For all cases A and B, if B is close to A and in A one
knows that C obtains, then in B one does not falsely believe that C
obtains.
(That's Williamson's "(9)" on p.128 of Knowledge and Its Limits.)
Since you also believe that you're feeling cold in situation 2,
Williamson claims, Safety entails that you do not know you're
feeling cold in situation 1. Hence feeling cold is not
luminous: it is not the case that whenever one feels cold, then one
knows (or is in a position to know) that one feels cold.
Let's grant Safety. The crucial premise is that you still
believe that you're feeling cold in situation 2. That might be false.
Remember that all our words have
perfectly sharp boundaries. So when it slowly gets warmer, there is
some precise point at which you switch from believing that you're
feeling cold to not believing that you're feeling cold, just as there
is some precise point at which you switch from feeling cold to not feeling cold. If these two points coincide, Williamson's argument fails.
In the other post I've argued that the two points coincide if
feeling cold satisfies a certain infallibility condition: if one
cannot (non-inferentially) believe that one feels cold unless one really feels
cold. Then it follows that you cannot believe that you're feeling cold in situation 2.
I've added "non-inferentially" because even if there
are possible cases where people believe that they feel cold even though they
don't feel cold -- say, because they believe everything Reverend Moon
says and Reverend Moon falsely told them that they feel cold -- these cases are irrelevant, because situation 2 is
not one of them: situation 2 is supposed to be just like situation 1,
only a little warmer. And in situation 1, you don't believe that
you're feeling cold because of Reverend Moon, otherwise your belief
wouldn't have been knowledge back then.
The weakened infallibility condition is still unnecessarily strong. To block Williamson's argument, all we need is a reason to doubt that in situations like situation 2, you can falsely believe that you're feeling cold.
Frederic Chopin provides such a reason: it says,
recall, that if in some situation you acquire some belief via some
method, thereby obtaining knowledge, then there is no very similar
situation where you acquire the same belief via the same method but
where that belief is false. Situation 1 is a situation of the first
kind: you somehow come to believe that you're feeling cold and this
belief counts as knowledge. And situation 2, Williamson assumes, is of
the second, impossible kind: a very similar situation where you still
come to believe that you're feeling cold in much the same way (not via
Reverend Moon, say), but where that belief is false.
Next, the KK principle:
Suppose you're looking at a tree that is 666 inches tall,
but you don't know that. You know that the tree is not 6 inches tall,
of course, but you don't know that it is not 665 inches
tall. Somewhere between these cases it is unclear whether you should
count as knowing that the tree is not so-and-so tall. But our language is
still perfectly sharp. So there is some determinate number n such that
you know that the tree is not n inches tall, but you do not know that
it is not n+1 inches tall. Let's assume n = 357.
That is situation 1. For situation 2, assume you're looking at a
very similar tree, in very similar surroundings, that is only 665
inches tall. Again, there is some number n such that you know that the
tree is not n inches tall, but you do not know that it is not n+1
inches tall. This time, let's assume, n = 356. (If the tree gets smaller,
clearly n should get smaller, too; it doesn't matter to what extent.)
By assumption, you know in situation 1 that the tree in front of
you is not 357 inches tall. Situation 2 is a very similar situation,
but your belief is still true there, so your knowledge is not
threatened by Safety. But by the KK principle, you know that
you know that the tree is not 357 inches tall. And situation 2 is a
very similar situation where that belief is false. Hence,
Williamson concludes, you don't know that you know that the tree is
not 357 inches tall in situation 1; the KK principle
fails.
Again, the crucial premise is that you still have the relevant
belief in situation 2: that you still believe that you know
that the tree is not 357 inches tall. Frederic Chopin refutes that
assumption: your belief that you know that the tree is not 357 inches
tall, however acquired, could not be knowledge (which by assumption it
is) if there were a very similar situation where you acquire the same
belief in the same way (as opposed to via Rev. Moon) but where that
belief is false.
(Williamson's argument against KK looks rather different from how I've
presented it. His argument starts out from the innocent-sounding (1):
for any i, you know that if the tree is i+1 inches tall, then you
don't know that the tree is not i inches tall. Take i = 356. The claim
is that (in situation 1) you know that if the tree is 357 inches tall,
then you don't know that the tree is not 356 inches tall. "if ... then
..." is the material conditional here. So what you're supposed to know
is that either the tree is not 357 inches tall or you don't know that
the tree is not 356 inches tall. But, the argument continues,
you know that the tree is not 356
inches tall; so if you know what you know (the KK principle), you can
conclude that the tree is not 357 inches tall, which, by assumption,
you do not know.
A friend of KK should reject (1): your knowledge rules out that
you know that the tree is not 356 inches tall; it doesn't rule out
that the tree is 357 inches tall; why should it rule out the
conjunction of these two claims? Because otherwise, Williamson argues,
your knowledge wouldn't be safe. But why? Surely, if the tree
really were only 357 inches tall, your belief that it is not
356 inches tall would be just a lucky guess. But that's irrelevant, as
the relevant conditional here is material, not counterfactual. Your
knowledge also isn't threatened by close possibilities
where you falsely believe that the tree is not 356 inches
tall. As the tree is 666 inches tall, all such possibilities are quite remote.
No, your knowledge is supposed to be unsafe
because there are nearby possibilities where you falsely believe that
you know that the tree is not 356 inches tall. But by Frederic Chopin,
there are no such possibilities. Hence there is no reason
to uphold (1) for i = 356.)
Isn't it just a stipulated feature of the series of cases that in case n+1 one is almost as confident that the relevant condition obtains as one was in case n (for example, the sixth sentence of the first full paragraph on p.97)?
In that case does Williamson really have to worry about a possible sudden change from belief to disbelief? He just needs to describe the Sorites series of cases so that his stipulation that one's confidence won't sharply diminish is plausible.
Steup has some interesting things to say on this issue in the paper I linked to before.