Reasoning IV (Hyperintensional Content)

Why not simply use a notion of content on which belief isn't closed under strict implication? Then it will be much easier to say that reasoning always delivers new content.
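(To make the closure principle explicit, in standard possible-worlds notation: let $B_x$ be the set of x's belief worlds and $[P]$ the set of worlds where P is true. Strict implication from P to Q means $[P] \subseteq [Q]$, and then

$$B_x \subseteq [P] \text{ and } [P] \subseteq [Q] \implies B_x \subseteq [Q].$$

On the coarse-grained picture closure thus comes for free; a fine-grained notion of content is meant to leave room for believing P without believing Q even here.)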

There is no shortage of fine-grained notions of content. We could use English sentences, or classes of intensionally isomorphic sentences, or bundles of tuples of objects and properties ('singular propositions') together with modes of presentation, or whatever. The tricky part is to say what determines whether a subject has a belief or desire with such a content.

I don't say it can't be done. It just seems to me that no credible (and hardly any incredible) account of mental representation works with a notion of content fine-grained enough to avoid the problem of reasoning. This is rather obvious for the interpretationist account. But consider instead a causal-informational account, on which, say, you believe that P iff you are in a state that somehow correlates with situations in which P. Then if every P-situation is also a Q-situation and vice versa, there can be no difference between believing P and believing Q. Even on the view that my beliefs are sentences stored in a box, we need an additional account of implicit belief, since I have so many beliefs (that 2 is even, that 4 is even, etc.) that they can't all fit in any box. Presumably an implicit belief is a belief that is somehow entailed by the sentences written in the box, which brings back our problem (a point Stalnaker has made repeatedly).
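(Schematically, writing $[P]$ for the set of P-situations and "corr" as a placeholder for whatever correlation relation the account invokes: x believes that P iff x is in some state s such that corr(s, $[P]$). If every P-situation is a Q-situation and vice versa, then $[P] = [Q]$, so the condition for believing P just is the condition for believing Q.)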

Note that what we need is not just a theory that gives us truth-conditions for hyperintensional content attributions. We need that as well, but unlike the problem of reasoning, such a theory might not require anything beyond coarse-grained beliefs, together with facts about subjects, their relations to their environment, and the semantics of our and the subject's (public) language.

(Like this: We use many criteria for attributing belief. In easy cases, they all work together. Suppose Fred is a normal English-speaking person. In all his belief worlds, Ken Livingstone is the mayor of London; moreover, Fred is acquainted in some unique and ordinary way with Ken Livingstone and in another unique and ordinary way with London, and in all his belief worlds the person he is acquainted with in the first way is the mayor of the city he is acquainted with in the second way; also, in all Fred's belief worlds, someone called "Ken Livingstone" is mayor of a city called "London", and his position is called "mayor" -- which partly explains Fred's disposition to assent to the sentence "Ken Livingstone is the mayor of London"; and so on. Then Fred is an easy case for attributing the belief that Ken Livingstone is the mayor of London.

In less perfect situations, these criteria come apart. Perhaps Fred has false beliefs about Livingstone's parents, and origin is essential. Then it's not true that in all Fred's belief worlds Ken Livingstone is mayor of London. (Rather, in all these worlds somebody else, who merely resembles Livingstone in various ways -- but not in origin -- holds that position.) Or perhaps Fred, like Kripke's Pierre, is acquainted with London in two quite different (but ordinary) ways, and doesn't recognize that the two objects of acquaintance are identical. Or perhaps Fred wrongly believes that the word "mayor" means captain of a ship. Then he won't assent to "Livingstone is the mayor of London", even if in all his belief worlds Livingstone is mayor of London. Things get far more complicated if Fred speaks a different language, is somewhat oddly acquainted with Livingstone and London, and has a somewhat wrong understanding of Livingstone's position. In many of these trouble cases it becomes unclear how his beliefs are to be described; the right answer then depends a lot on the context in which the attribution takes place.

The purpose of such flexible and multifarious rules of content attribution becomes apparent once you consider what it would be like to use simpler rules -- say, the rule that "x believes that P" is true iff x's belief worlds are contained in the A-intension of P. In many cases, that would make it very hard to convey relevant information about how a subject takes the world to be.
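(Written out, the simpler rule would be: "x believes that P" is true iff $B_x \subseteq A(P)$, where $B_x$ is the set of x's belief worlds and $A(P)$ the A-intension of P. Since $A(P)$ is just a set of worlds, any two sentences with the same A-intension would be interchangeable in belief reports, however differently the subject is related to them -- which is why the rule throws away so much relevant information.)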

Anyway, now it can easily happen that "x believes that P" is true and "x believes that Q" false even if P entails Q. (Example: "Fred believes he has arthritis in his thigh" and "Fred believes he has a disease of the joints in his thigh".) But that doesn't help at all with the problem of reasoning: when Fred reasons, neither his belief worlds nor any of his relations to anything else need to change.)

It's obvious that one can know the rules of chess and the position of the pieces on the board without knowing that such-and-such is the best move. And to make this true it looks like we need a rather fine-grained notion of content. Right. But this is not a solution. It's the problem.
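A toy computational illustration of the chess point (my sketch, using Nim rather than chess to keep it short): the rules plus the position logically determine the best move, so in coarse-grained terms the player who works it out learns nothing new; still, extracting the answer takes genuine search.

```python
# Illustrative sketch (not from the post): minimax search for Nim.
# The best move is entailed by the rules and the position, but
# finding it requires exploring the game tree.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(heaps, first_player_to_move):
    """Game value from the first player's perspective (+1 = win, -1 = loss)."""
    if sum(heaps) == 0:
        # Normal play: whoever took the last object has won, so the
        # player now to move (facing empty heaps) has lost.
        return -1 if first_player_to_move else 1
    results = []
    for i, h in enumerate(heaps):
        for k in range(1, h + 1):          # take k objects from heap i
            child = list(heaps)
            child[i] -= k
            results.append(value(tuple(child), not first_player_to_move))
    return max(results) if first_player_to_move else min(results)

def best_move(heaps):
    """Return the (heap index, amount) pair leading to the best position."""
    moves = [(i, k) for i, h in enumerate(heaps) for k in range(1, h + 1)]
    def after(move):
        i, k = move
        child = list(heaps)
        child[i] -= k
        return value(tuple(child), False)  # opponent moves next
    return max(moves, key=after)

print(best_move((3, 4, 5)))  # -> (0, 2): take 2 from the first heap
```

Nothing in the coarse-grained content of the searcher's beliefs changes while the search runs -- the heap sizes and the rules were known all along -- yet the work is real.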

Comments

No comments yet.

# trackback on 06 May 2004, 18:05

This is still a bit vague, but anyway. As I remarked in the first part of this little series, from an implementation perspective, it is not surprising that applying one's beliefs and desires to a given task requires processing. Consider a 's
