I had another look at Lewis's trust condition on linguistic conventions. It says that the members of a linguistic community generally take utterances of a sentence as evidence that the sentence is true. My opinion up to now has been that insofar as this condition is correct, it is redundant, and insofar as it is not redundant, it is incorrect.
The condition seems mostly redundant because the convention of truthfulness already requires everyone to impute truthfulness to others. To be truthful is to try to utter sentences only when they are true. So by partaking in the convention of truthfulness in English, I already expect you to utter "it's raining" only when you believe that it's raining. So unless I believe your opinions about the weather are unreliable, I will take your utterance as evidence of rain. No need for an additional convention of trust.
I'm away in the mountains (without internet access) for about a week. Happy Christmas, Hanukkah, Kwanzaa or whatever else you celebrate, and Happy Ordinary Days if you don't celebrate anything.
The fundamental properties provide a minimal basis for all intrinsic qualities of things. That is, whenever two things are not perfect qualitative duplicates, they differ in the distribution of fundamental properties over their parts; whenever two things do not differ in that distribution, they are perfect qualitative duplicates. It follows that all fundamental properties are intrinsic. But not all intrinsic properties are fundamental: the fundamental properties provide a minimal basis for all qualities. Hence there is no fundamental property of having a mass of either 1g or 2g, because instantiation of that property is already determined by the distribution of mass 1g and mass 2g. For the same reason, there is no fundamental property of being the fusion of a round thing and a distinct rectangular thing. By and large, fundamental properties are never logically complex (like A or B) and never structural (determined by the distribution of properties over the parts of their instances).
In August, I posted an argument purportedly showing that if it is
common knowledge within a linguistic community that everyone refers to
the same thing by some name N, then the descriptions individuals
associate with that name can only differ for very remote
possibilities. The argument went like this:
If we know something, it holds in all possible situations that
might, for all we know, be actual. So if we know that our terms
corefer, they do corefer in all situations that might, for all we
know, be actual. And if I know that you know that our terms corefer,
they also corefer in all situations that might, for all I know, be
situations that might, for all you know, be actual. And if I
know that you know that I know that our terms corefer, they also
corefer in situations I believe you might believe I might believe
to be actual. And so on. In conclusion, our terms corefer in all
situations that have some chance of being believed (or believed to be
believed, etc.) to be actual in our community. So if we consider the
corresponding functions from possible situations to extensions, our
idiosyncratic functions will only differ for quite remote
possibilities.
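The iterated steps of the argument can be sketched in a toy model. Everything here is invented for illustration (the worlds, the referents, and the accessibility relations are my own stipulations, not from the argument itself): common knowledge at a world amounts to truth at every world reachable by finite chains of epistemic accessibility, so if coreference is common knowledge, the terms corefer throughout the reachable set, and can only come apart at unreached ("remote") worlds.

```python
# Toy model: four worlds, and what each person's term picks out there.
worlds = ["w0", "w1", "w2", "w3"]
my_ref = {"w0": "sand", "w1": "sand", "w2": "sand", "w3": "houses"}
your_ref = {"w0": "sand", "w1": "sand", "w2": "sand", "w3": "sand"}

# Epistemic accessibility: which worlds each of us considers possible
# at a given world. The "remote" world w3 is not accessible from the
# others, so it never enters the common-knowledge closure.
access = {
    "me":  {"w0": {"w0", "w1"}, "w1": {"w0", "w1"},
            "w2": {"w1", "w2"}, "w3": {"w3"}},
    "you": {"w0": {"w0", "w2"}, "w1": {"w1"},
            "w2": {"w2"}, "w3": {"w3"}},
}

def reachable(start):
    """Worlds reachable from `start` by any finite chain of
    accessibility steps (mine, yours, mine, ...): the worlds that
    matter for what is common knowledge at `start`."""
    seen, frontier = {start}, {start}
    while frontier:
        nxt = set()
        for w in frontier:
            for person in ("me", "you"):
                nxt |= access[person][w]
        frontier = nxt - seen
        seen |= frontier
    return seen

# If coreference is common knowledge at w0, our terms corefer in every
# reachable world; here they diverge only at the unreached w3.
assert all(my_ref[w] == your_ref[w] for w in reachable("w0"))
```

The quicksand case below is then a counterexample to the informal gloss of this model, not to the model itself: there, coreference is common knowledge, yet I still don't know what the term refers to.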
There must be something wrong with this argument, for its conclusion is
false. Suppose the description you associate with "quicksand" is
"a bed of loose sand mixed with water forming a soft shifting mass that yields easily to pressure and tends to engulf any object resting on its surface", whereas what I associate with the term is "what you call 'quicksand'". Suppose also it is common knowledge between us that that's the description I associate. So it is common knowledge between us that our descriptions pick out the same stuff. But clearly, I do
not know what kind of phenomenon "quicksand" refers to. That's
why I don't know how to behave when you tell me that there's quicksand
nearby. For all I know, you could be telling me that there's watery
stuff nearby (and mean watery stuff by "quicksand") or that there are houses
nearby (and mean houses by "quicksand"), and so on.
So here's the thesis (PDF, 250 pages, 1.7 MB and in German). I'm a little dissatisfied with the presentation: it shows that it was finished in a hurry. I will do some polishing before the obligatory publication, and I'd recommend not reading it through in its current state.
For the most part, the book is an overview of Lewis's philosophy, with an emphasis on metaphysics. I discuss Lewis's views on non-present times and non-actual worlds, on mathematics and properties, his physicalism and Humean Supervenience, and the basic framework of his philosophy of language. One aim of this is to ease the understanding of Lewis's positions by tracing out all the interconnections between his theories. More importantly, I try to show how the package can be broken up: that one can, for example, accept most of what Lewis says about language and laws of nature without accepting his modal realism and his doctrine of objective naturalness. There's also a rather lengthy discussion about methodology and the relationship between modal/metaphysical and analytical reduction.
I've posted most of the interesting bits here on this weblog while I was working on them, and I'll probably write two or three short papers (in English) about some of it in the coming months.
In the latest issue of PNAS, there's an article on blindsight in ordinary people: The researchers induced local and temporary blindness by magnetically de-activating certain parts of brain area V1. When forced to choose, the subjects then often guessed correctly the direction or colour of a patch which they didn't consciously see.
(As usual, "consciously" is here functionally defined: what the subjects were missing is not some kind of non-functional, phenomenal consciousness, but a state with a certain functional role, leading in particular to utterances like "I saw a yellow patch". A case of truly phenomenal blindsight would be somebody who behaves in every way as if she consciously sees the patches, but who nevertheless doesn't see them consciously.)
What does it take for something to be a perfectly reliable indicator
of something else?
I'm not really familiar with discussions of reliability in epistemology, and I'd be grateful for pointers. Anyway, here is my own suggestion.
First, we need a mapping from (possible) states of the indicator to
the indicated facts (or states or propositions).
Let's say that the indicator displays that p, for short: I(p), if its state is mapped to p by that mapping. The mapping may
be any old function (but the 'states' may not be any old Cambridge
states): there is a good sense in which a clock that consistently runs 8 minutes fast is reliable; the tricky bit is only to read what it says, to figure out
the mapping. This is the sense of "reliable" I'm interested in.
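The fast-clock case can be put as a minimal sketch, assuming we represent the clock's states as minutes-past-midnight readings (the function name and the numeric representation are my own, not from the post): the clock is perfectly reliable because there is a fixed mapping from its states to the indicated facts; the work is all in knowing that mapping.

```python
# A clock that consistently runs 8 minutes fast. Its 'states' are
# readings in minutes past midnight; the mapping sends each state to
# the fact it indicates (the actual time).

def displayed_fact(clock_reading_minutes, offset_minutes=8):
    """Map a clock state to the time it indicates. Reading the clock
    correctly just is applying this mapping: subtract the constant
    offset (mod one day)."""
    return (clock_reading_minutes - offset_minutes) % (24 * 60)

# The clock reads 10:08 (608 minutes past midnight); it thereby
# reliably indicates that it is 10:00 (600 minutes past midnight).
assert displayed_fact(608) == 600
```

The point of the sketch is that reliability, in this sense, attaches to the indicator-plus-mapping, not to the face value of the display.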
Sunrise from my office window. Until September 2006, I will be working as a lecturer in philosophy at the
University of Bielefeld.
I've moved around some things here on the blog. This ought to show up as a smaller note. Let's see.