On one of our many conceptions of meaning, the meaning of an expression is what you know when you know the meaning of the expression. I don't think this is a particularly useful conception. Besides, it violates some commonplace truths about meaning, like that expressions of different languages can have the same meaning. For suppose the meaning of the German "schwarz" is identical to the meaning of the English "black". Then by the above rule anyone who knows the meaning of "black" should know the meaning of "schwarz", which isn't so.
A sentence is true in a fiction iff it is true at certain worlds, say, at the closest worlds where the pretense which the narrator and the audience engage in is not only pretense. But to evaluate whether the sentence is true at a world, do we treat the world as actual or as counterfactual?
It seems that there could easily be stories in which water isn't H2O, and Hesperus isn't Phosphorus. This suggests that the worlds must be treated as actual. However, it isn't clear that these terms ("water" etc.) are sufficiently rigid, and if they are not, there are also worlds considered as counterfactual at which the identities fail. Could there be a story in which the stuff that actually is water isn't the stuff that actually is H2O? I'm not sure.
"Dynamical basis of intentions and expectations in a simple neuronal network" (PNAS subscription required, there's a free abstract):
[R]ecent indirect evidence suggests that intentions and expectations may arise in behavior-generating networks themselves even in primates [...]. In that case, interestingly, the intentions and expectations inferred from behavioral observations are not always identical to the intentions and expectations that are consciously accessible [...]. In this study we have demonstrated how such intentions and expectations arise automatically in the feeding network of Aplysia.
The "intentions and expectations" found are basically this: if you repeatedly present an A-stimulus to one of Aplysia's central pattern generators, and then switch to a B-stimulus, the pattern generator will respond as if it received another A-stimulus. Only after several B-stimuli will it switch to responses adequate for B. In this sense the animal expects to receive further A-stimuli, and intends to produce further A-behaviour. In a similar, slightly strechted, sense one could say that the animal believes to be in an A-environment (which is an environment containing seaweed). This belief is a certain state of the synapse linking Aplysia's neurons B20 and B8.
I'm back. Here's a question that occurred to me while I was listening to Dave Chalmers's talk on scrutability.
First some background. One might think that for every world w there is a complete description D true at w such that all and only the sentences true at w follow a priori from D: simply let D contain all sentences true at w. Then all sentences true at w will be a priori entailed by D. However, if "true at" is read counterfactually, sometimes sentences false at w will also be so entailed. Consider Twin World where XYZ occupies the water role. "Water doesn't occupy the water role" is true at Twin World. But "water occupies the water role" is a priori, and hence a priori entailed by everything [1]. Thus every complete description of Twin World a priori entails a contradiction (and every sentence whatever).
For the next few days I'll be in Konstanz at the 'Concepts and the A Priori' conference. I'll probably stay a bit longer in southern Germany and Switzerland to visit some friends and relatives and mountains before I return to Berlin in about a week or two.
If the individuation of mental states depends at least partly on their causal roles, then it depends on the laws of nature (including possibly psychophysical laws). For if the laws differ between world 1 and world 2, a state with a given intrinsic nature can have causal role R in world 1 but lack R in world 2.
Assume world 1 is our world, and world 2 is a world that contains a perfect spatiotemporal duplicate of our galaxy but, elsewhere, lots of weird things that violate our laws. So the laws of world 2 are not the laws of our world. Then our duplicates in world 2 could have quite different mental states from ours.
But that sounds strange. I would have thought that my mental states do not depend on what goes on outside the Milky Way. We might also get the externalist problem about self-knowledge: if whether I believe P or Q depends on far-away events, how can I know that I believe P rather than Q if I don't know about these far-away events?
Jonathan Schaffer argues (in Analysis 2001) that Relevant Alternatives Theories of knowledge (RATs) such as Lewis's fail because of Missed Clues cases:
Professor A is testing a student, S, on ornithology. Professor A shows S a goldfinch and asks, 'Goldfinch or canary?' Professor A thought this would be an easy first question: goldfinches have black wings while canaries have yellow wings. S sees that the wings are black (this is the clue) but S does not appreciate that black wings indicate a goldfinch (S misses the clue). So S answers, 'I don't know'.
We want to say that S doesn't know that the bird is a goldfinch. Yet it seems that S's evidence rules out all relevant alternatives. For situations with goldfinch-perceptions but no goldfinches are skeptical scenarios, and these are usually regarded as irrelevant.
The computer is working again. Now I have to catch up with 200 non-junk mails.
My computer is currently broken. I hope to get it running again sometime next week. Until then I won't be reading emails very regularly.
It is widely assumed that Lewis takes the objective naturalness of
semantic values to be an important constraint on semantics, needed to
prevent radical indeterminacy of meaning. On rereading some of his
remarks today, I found them a little confusing, and now I think the
situation is far more complicated.
Lewis discusses Putnam's model-theoretic argument for radical
indeterminacy extensively in "New work for a theory of universals"
(NW) and "Putnam's paradox" (PP). In both papers, he says there is
something wrong with posing the problem as a problem about language,
because in fact the interpretation of language is settled by the
assignment of content to propositional attitudes (NW 49, PP
58f.). But, he says, focussing on attitudes only relocates the problem
without solving it, so that he might as well talk about language in
the rest of PP, which he does. He points to NW for a discussion of the
properly relocated problem.