Not Broken
The computer is working again. Now I have to catch up with 200 non-junk mails.
My computer is currently broken. I hope to get it running again sometime next week. Until then I won't be reading emails very regularly.
It is widely assumed that Lewis takes the objective naturalness of semantic values to be an important constraint on semantics, needed to prevent radical indeterminacy of meaning. On rereading some of his remarks today, I found them a little confusing, and now I think the situation is far more complicated.
Lewis discusses Putnam's model theoretic argument for radical indeterminacy extensively in "New work for a theory of universals" (NW) and "Putnam's paradox" (PP). In both papers, he says there is something wrong with posing the problem as a problem about language, because in fact the interpretation of language is settled by the assignment of content to propositional attitudes (NW 49, PP 58f.). But, he says, focussing on attitudes only relocates the problem without solving it, so he might as well talk about language in the rest of PP, which he does. He refers to NW for a discussion of the properly relocated problem.
I've thought a bit more about the comments Michael Fara left last week, and I don't find my points very convincing any more. The following is partly a correction, but mostly just thinking out loud about a more general semantic question.
The general question is how to interpret sentences of the form
(1) At i, A is F
(2) At i, A is not F
where 'i' denotes something like a time or a place or a world. There are a dozen proposals for interpretations of (1) in the temporal case, invoking temporal parts or relations to times or whatever. Most of these proposals can be applied to other indices as well. But let's put that aside. Suppose we understand how to interpret instances of (1) in easy cases. The hard cases I have in mind are cases where A doesn't exist exactly once at i. The precise definition of these cases depends on the question I've put aside, but I hope it is reasonably clear what I mean anyway. Not existing exactly once at i means either not existing at i at all, or multiply existing at i. Plausible examples of the first kind: I do not exist in 1758; I do not exist on Alpha Centauri; I do not exist at any world containing only empty space-time. Controversial examples of the second kind: if I get split into two persons tonight, I will doubly exist tomorrow; if river R has two branches where it crosses the border to country C, R doubly exists at the border to C; if at some world, two people resemble me to exactly the same degree in all extrinsic and intrinsic respects, I doubly exist at that world.
I have a problem with the new, free-variable-powered version of my tree prover (which at the moment exists only on my hard disk): it doesn't terminate on some valid formulas, at least not within reasonable time.
A very nice feature of ordinary tableaux is that there is a mechanical procedure for building a ("canonical") tableau that will always close as long as there is any closed tableau for the input formula. To my knowledge, no such procedure has been found for free-variable tableaux. The problem, I think, is to decide at each stage whether to apply an ordinary expansion rule or the Closure rule. For many trees, it is best to apply Closure after every expansion. But for some formulas, this procedure will leave the tree forever open. The common response to this problem is apparently to try out all possible decisions at every point, using backtracking and iterative deepening of the search space.
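To make that strategy concrete, here is a minimal sketch in Python of backtracking with iterative deepening over rule choices. It is not my prover's code, and the whole tableau interface used here (initial_tableau, is_closed, applicable_rules, apply) is hypothetical:

```python
# Sketch only: backtracking over rule choices with iterative deepening.
# The tableau interface (initial_tableau, is_closed, applicable_rules,
# apply) is entirely made up for illustration.

def closable(tableau, limit):
    """Can the tableau be closed within `limit` rule applications,
    trying Closure as well as every expansion at each choice point?"""
    if tableau.is_closed():
        return True
    if limit == 0:
        return False  # depth bound reached; backtrack
    return any(closable(tableau.apply(rule), limit - 1)
               for rule in tableau.applicable_rules())

def prove(formula):
    """Iterative deepening: restart with ever larger depth bounds, so
    that no single infinite sequence of decisions is pursued forever.
    Loops forever on invalid formulas (validity is only semi-decidable)."""
    limit = 1
    while not closable(initial_tableau(formula), limit):
        limit += 1
    return True
```

The depth bound is what prevents one unlucky sequence of decisions -- say, never applying Closure -- from monopolizing the search; the restarts guarantee that if any closed tableau exists, some bound is large enough to find it.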
This is still a bit vague, but anyway.
As I remarked in the first part of this little series, from an implementation perspective, it is not surprising that applying one's beliefs and desires to a given task requires processing. Consider a 'sentences in boxes' implementation of belief-desire psychology: I have certain sentence-like items stored in my belief module, and other such items in my desire module. When I face a decision, I run a query on these modules. Suppose the question is whether I should take an umbrella with me. The decision procedure may then somehow find the sentences "It is raining" and "If I take an umbrella, I don't get wet" (or rather, their Mentalese translations) in the belief box and "I don't get wet" in the desire box. From these it somehow infers the answer, that I should take the umbrella.
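Just to fix ideas, here is a toy version of this in Python. Everything in it -- the box contents, the should_do query, the crude string matching that stands in for Mentalese inference -- is my own invented illustration:

```python
# Toy 'sentences in boxes' model. The boxes hold sentence-like items;
# a decision is a query that searches for a conditional belief linking
# the action to something in the desire box.

belief_box = [
    "It is raining",
    "If I take an umbrella, I don't get wet",
]
desire_box = [
    "I don't get wet",
]

def should_do(action):
    """Crude decision query: is there a conditional belief whose
    antecedent is the action and whose consequent is desired?"""
    prefix = f"If {action}, "
    for belief in belief_box:                 # search the belief box
        if belief.startswith(prefix) and belief[len(prefix):] in desire_box:
            return True                       # the 'inference' step
    return False

print(should_do("I take an umbrella"))  # True: take the umbrella
```

Even in this trivial model, the answer is nowhere stored; it has to be computed by searching the boxes and matching sentences. That search is the processing in question.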
Why not simply use a notion of content on which belief isn't closed under strict implication? Then it will be much easier to say that reasoning always delivers new content.
There is no shortage of fine-grained notions of content. We could use English sentences, or classes of intensionally isomorphic sentences, or bundles of tuples of objects and properties ('singular propositions') together with modes of presentation, or whatever. The tricky part is to say what determines whether a subject has a belief or desire with such a content.
As Robbie Williams remarked in the comments, perhaps what we do when we reason is put parts of our fragmented belief space together. However, I doubt that this will do as a general solution.
First, at least in the context of an interpretationist account of content, it doesn't suffice for fragmentation that the relevant beliefs are somehow stored in different parts of the brain. Rather, if my beliefs are fragmented, say, into a compartment in which I believe P and one in which I don't, this must show up in my behaviour, more or less as follows: 1) in some contexts, the best explanation of some of my actions involves the assumption that I take the world to be P; but also 2) in some contexts, the best explanation of some of my actions involves the assumption that I don't take the world to be P; moreover, 3) the discrepancy can't be explained as a change of belief.
One might suggest that in fact reasoning, like (factual) learning, always means acquiring new information. After all, it is possible to acquire new information by learning that P even if what one previously knew already entailed P. In this case the new information can't be P, but it can be something else. To use Robert Stalnaker's favourite example: when you learn that all ophthalmologists are eye-doctors, the possibilities you can thereby exclude are not possibilities where some ophthalmologists aren't eye-doctors -- there are no such possibilities. Rather, they are possibilities where "ophthalmologist" means something different. You've acquired information about language. Perhaps what you learn when you learn that the square root of 1156 is 34 is similarly something about language, in this case about mathematical expressions. That explains why we can't replace synonymous expressions in the content attribution: just as it would be wrong to say you've learned that all eye-doctors are eye-doctors, so here it would be wrong to say you've learned that the square root of 34*34 is 34.
What do we do when we draw inferences? We don't acquire new information, at least not if the reasoning is deductively valid. Rather, we try to find new representations of old information. The point of that is perhaps that we can only make our actions depend on representations of information, not directly on the information itself, and some forms of representation lend themselves more readily than others to guiding certain actions.
The problem is familiar in programming: to accomplish a given task it is often crucial to find a data structure that makes the relevant properties of the stored data easily accessible. In principle, every data set could be represented as a huge number, but in practice it helps a lot to represent it in terms of arrays or strings or objects with suitable properties.
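A toy illustration (my own, and deliberately silly): the same four numbers stored as a Python list and packed into a single integer. Both representations carry exactly the same information, but only one makes the third entry directly accessible:

```python
# Same information, two representations (illustrative only).

data = [7, 2, 9, 4]                                   # structured: a list
packed = sum(d * 100**i for i, d in enumerate(data))  # one huge number

third_from_list = data[2]                     # directly accessible
third_from_packed = (packed // 100**2) % 100  # must first be decoded

assert third_from_list == third_from_packed == 9
```

On this analogy, deductive reasoning is like the decoding arithmetic in the last step: it yields no new information, just a representation from which the relevant fact can simply be read off.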