Narrow Content Defined Widely
Some philosophers seem to believe that narrow content must be defined without resort to external objects, leaving only bizarre options like phenomenalism, conceptual role semantics and global descriptivism. But that's wrong. Narrow content can and should be defined by external causal relations just like wide content.
By narrow content, I mean a kind of mental content that doesn't much depend on the subject's environment. Completely narrow content is altogether intrinsic to the subject. But hardly anyone believes in completely narrow content. The question is whether there is an interesting kind of content shared between my intentional states and the states of my twin on twin earth -- and those of swampman and those of a brain in a vat.
One of the main motivations for postulating narrow content is the fact that my twin and I appear to have exactly the same behavioural dispositions, and that our states depend in exactly the same way on our perceptual surroundings. If we were secretly exchanged overnight, nobody would notice. If two ordinary persons agree perfectly in all their behavioural dispositions and in their reactions to perceptual stimuli, there should be a good sense in which they believe alike.
This motivation presupposes that the relevant kind of narrow content is determined by causal links to external input and output (behaviour). If narrow content were independent of behavioural dispositions, it wouldn't satisfy the intuition that our common dispositions show that there must be a common intentional explanation.
So how would this kind of narrow content be defined? Let's focus on perceptual states. Suppose my perceptual state E is typically caused by elephants in my visual field. My twin on twin earth has a corresponding state E' that is typically caused by twelephants. Despite these differences in actual causes, our states agree in their causal-dispositional properties: If a twelephant occurred in my visual field, I would be in state E; if an elephant occurred in my twin's visual field, he would be in state E'. So we can define the narrow content of my visual state E to be the class of things x such that if x occurred in my visual field, it would (typically) cause E.
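To put the sketch a bit more formally (the notation is mine, not part of the proposal; '$\Box\!\!\rightarrow$' is the counterfactual conditional):

\[
\mathrm{NC}(E) \;=\; \{\, x : \mathit{InVF}(x) \mathrel{\Box\!\!\rightarrow} \mathit{TypCause}(x, E) \,\}
\]

where $\mathit{InVF}(x)$ says that $x$ occurs in my visual field. Since elephants and twelephants alike satisfy the counterfactual both for my state E and for my twin's state E', we get $\mathrm{NC}(E) = \mathrm{NC}(E')$.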
This is only a rough sketch of the beginning of a theory of content, but in this respect it isn't worse than most theories of wide content. One thing worth mentioning is that for a state E to represent that there is an object of kind K in one's visual field, it is not necessary for the property K to enter into causal relations. Some hold that only properties like elephant, not disjunctive properties like elephant-or-twelephant, can be causally active, so that the state E cannot be caused by the latter property. If so, that doesn't matter. It would matter only if my theory said that state E represents there being some K in one's visual field iff the property K typically causes E. But my theory doesn't say that. It says that state E represents there being some K in one's visual field iff instances of K are such that if they occurred in one's visual field, they (the instances) would typically cause E.
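In the same notation, the point is that the causal condition falls on the instances, not on the property (again, just my gloss):

\[
\text{not:}\quad \mathrm{Rep}(E, K) \;\iff\; \mathit{TypCause}(K, E)
\]
\[
\text{but:}\quad \mathrm{Rep}(E, K) \;\iff\; \forall x\, \big( Kx \rightarrow (\mathit{InVF}(x) \mathrel{\Box\!\!\rightarrow} \mathit{TypCause}(x, E)) \big)
\]

On the second reading, the disjunctive property elephant-or-twelephant never has to do any causing; only its instances do.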
The current proposal delivers a shared content for me, my twin and swampman. Whether it assigns the same content to the brain in a vat depends on what "typically" means. Perhaps the brain in a vat is indeed in a state that is typically caused by elephant-like things in one's visual field. The brain just never is in such a typical situation.
One could also drop the visual field requirement and say that whatever would cause the state under appropriate conditions is an element of its representational content. Then not only elephants and twelephants, but perhaps also hallucinatory drugs and neural stimulations would be part of my state's content. My visual experience would locate me in logical space as one of the possible subjects with elephant-like experiences, whether caused by elephants or twelephants or drugs or whatever. Perceptual experiences would hardly ever deceive us. Delusions and illusions would be cases of drawing the wrong conclusions from our perceptual contents. The brain in a vat, for example, believes, let's assume, that it is not a brain in a vat, that it is not hallucinating, that it is not deceived by an evil demon, etc. Then it gets, via its sense organs, the information that it is one of the subjects with elephant-like experiences. By its other beliefs, it rules out many of these possibilities, including the actual one, leaving only those subjects who have an elephant-like creature in their visual field.
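Set-theoretically, the brain's reasoning might be pictured like this (my sketch, assuming contents are classes of possible subjects and that belief revision works by intersection):

\[
\mathrm{Content}(E) \;=\; \{\, s : s \text{ has an elephant-like experience, however caused} \,\}
\]
\[
\mathrm{Beliefs}_{\mathrm{new}} \;=\; \mathrm{Beliefs}_{\mathrm{old}} \cap \mathrm{Content}(E)
\]

Since the old beliefs already exclude the envatted subjects, the demon victims and the hallucinators, the intersection contains only subjects with an elephant-like creature in their visual field. The perceptual content itself is true; the false belief comes from the prior exclusions.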
I would slightly prefer keeping the visual field requirement. But it doesn't really matter. As I said, I believe the real story would be far more complicated anyway: If a perceptual state is to count as seeing an elephant (or seeing an elephant-like object), it should not only be caused in the right way. It should also lead to the right kind of behaviour. If somebody whose state is caused by an elephant in the right way does not typically behave at all as if he believed there was an elephant nearby, and in fact vigorously denies that anything resembling an elephant is nearby and also denies that he has an impression of there being an elephant, then he does not really see an elephant. Behavioural output is as important for the content of a perceptual state as perceptual input. That's why neuroscientists, when trying to find out what this or that area in the visual cortex represents, pay attention to what subjects report about their visual experiences when those areas are active.
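Built into the earlier sketch, this would add an output clause alongside the input clause (still only schematic, with $B_K$ standing for the kind of behaviour appropriate to believing that a K is nearby):

\[
\mathrm{Rep}(E, K) \;\iff\; \forall x\, \big( Kx \rightarrow (\mathit{InVF}(x) \mathrel{\Box\!\!\rightarrow} \mathit{TypCause}(x, E)) \big) \;\wedge\; \mathit{TypCause}(E, B_K)
\]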
Behavioural output is of course also an external condition on narrow content. In both cases, it is important that we ignore possibilities with different laws of nature. If the laws were different, my experience E could be systematically caused (in the right way) by carrots in my visual field, and could lead to typical carrot-belief behaviour. If we let the laws vary, we couldn't even rely on causal relations to other internal states, for those, too, depend on the external laws of nature. But that's no problem, as our narrow content isn't supposed to be completely narrow.
Very interesting entry as usual; here are some comments if you don't mind.
* You seem to suggest that both sensory input and behavioral output are external conditions on narrow content. But if a brain-in-a-vat can share its sensory input and behavioral output with us, I don't think that these are obviously external. Perhaps they are the limiting cases of internal aspects.
* You sketch a possible definition of narrow content that contains a causal component. Is this causal component externalist or not? From reading your entry, it is hard to say. This relates to the familiar debate between causal descriptivism, which is internalist, and theories invoking external causal constraints. Lewis, as far as I can see, does not invoke external causal constraints. I think he doesn't, because invoking external causal constraints would conflict with the constitutive rationality that constrains the assignment of content to subjects (constitutive rationality concerns relations among mental states, and between mental states, sensory input and behavioral output). (See the Postscript to "Radical Interpretation".) I know you are not giving Lewis's account, but these points still remain.
* On a side note, one might argue (like Lewis) that there are more ways in which narrow content fails to be intrinsic, in addition to the one you already mention (having to do with the laws of nature). Mental states can be said to play causal roles in populations, and not just in subjects. More or less individualistic forms of functionalism can thus be generated (see the Postscript to "Radical Interpretation"). Also, some say that the assignment of content to mental states is partly determined by facts about objective naturalness. These are also external.