Reasoning V (Limited Rationality)

This is still a bit vague, but anyway.

As I remarked in the first part of this little series, from an implementation perspective, it is not surprising that applying one's beliefs and desires to a given task requires processing. Consider a 'sentences in boxes' implementation of belief-desire psychology: I have certain sentence-like items stored in my belief module, and other such items in my desire module. When I face a decision, I run a query on these modules. Suppose the question is whether I should take an umbrella with me. The decision procedure may then somehow find the sentences "It is raining" and "If I take an umbrella, I don't get wet" (or rather, their Mentalese translations) in the belief box and "I don't get wet" in the desire box. From these it somehow infers the answer, that I should take the umbrella.
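To make the toy picture vivid, here is how such a query might look as a few lines of Python. This is a deliberately crude sketch -- the box contents, the rule format and the decide function are all invented for illustration, not a hypothesis about actual cognitive architecture:

```python
# Toy 'sentences in boxes' decision procedure. The boxes, the rule
# format, and the query are illustrative inventions, not a claim
# about how minds actually encode beliefs and desires.

belief_box = {
    "it is raining",
    "if I take an umbrella, I don't get wet",
}
desire_box = {"I don't get wet"}

def decide(action, outcome, precondition):
    """Endorse `action` if a believed conditional links it to a
    desired outcome, given a precondition that is also believed."""
    return (precondition in belief_box
            and f"if {action}, {outcome}" in belief_box
            and outcome in desire_box)

# Should I take the umbrella?
print(decide("I take an umbrella", "I don't get wet", "it is raining"))  # True
```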

I'm pretty sure my mind doesn't work even remotely like this, but no matter how it works, it is very probable that it uses some means or other to encode beliefs and desires, and some procedure or other to apply these encodings in particular situations. Using these procedures is reasoning.

Now folk psychology doesn't contain a theory of its own implementation. But it does account for the consequence of implementation: that we can fail to act rationally due to computational limitations or disturbing influences. We know that we are not ideally rational. We also know roughly under what conditions we are particularly prone not to act in accordance with our beliefs and desires: under time pressure and emotional stress, when we're drunk or stoned or tired, etc. (Note how strange it is to suggest that in all these circumstances, our belief system suddenly falls apart into lots of disconnected fragments.) And we know roughly what kinds of tasks need more processing than others: answering "what's your name?" is easier than answering "what are the prime factors of 241'815?". We also know that the processing required for many kinds of task depends on training.
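For concreteness, here is what the harder of the two tasks involves, as a trial-division sketch (my own illustration; the point is just that the answer has to be computed, not looked up):

```python
def prime_factors(n):
    """Factor n by trial division: easy to state, but it costs
    real computational work, unlike recalling one's own name."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(prime_factors(241815))  # [3, 5, 7, 7, 7, 47]
```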

Still, none of these are core components of folk psychology. The core of folk psychology is a sort of decision theory, perhaps together with the recognition that implementations of the theory are subject to computational constraints. If we met a Martian who can instantly and without noticeable effort find answers to complicated Maths problems, whereas he needs hours of undisturbed concentration to decide which limbs to move in ordinary situations -- so that most of the time, he just gets pushed around or moves in an incomprehensible, random way -- I wouldn't say he doesn't have any beliefs and desires at all. That is, folk psychology still applies even if the limits due to its implementation are very different.

(However, at least from a broadly functionalist perspective, there are limits to these limits: If for whatever reason the Martian never manages to act in accordance with his beliefs and desires, and if the same is true for most other members of his species, and for most of their counterparts, then he doesn't have any beliefs and desires, no matter what is written in his head.)

This is not a good way to say what I'm trying to say: "Due to various facts about implementation, some pieces of information stored in our belief system are not as easily accessible as others. Reasoning is making pieces of information accessible." First, it's misleading to speak of pieces of information here. Information is too coarse-grained. The very same information can be at the same time available for one action but not for another. It's also unclear what accessibility is supposed to mean here. If something like the box 'theory' above is true, we can say that answers to some Mentalese queries are easier to find, and in this sense more accessible, than others. But that's armchair speculation about implementation details. The same is true for saying, as I did, that reasoning is finding new representations of old information.

If something like the current proposal works, we don't need hyperintensional content. We can stick to our favourite, coarse-grained theory of mental representation and still account for reasoning and failures of logical omniscience simply by noticing that employing representational states to guide behaviour is a difficult computational problem.

We're not quite finished though. We have to add a clause to the conditions for content attribution, some of which I listed yesterday. For we want to say things like "Fred doesn't know the prime factors of 241'815", and "wo doesn't believe that M is a better move than N", even if all the other conditions for belief and knowledge are satisfied. Something like this might do: To qualify for attribution of the belief that S, a subject should be able to apply her belief state (in accordance with her desires) to S-relevant tasks without too much computational effort. I'm afraid there are no very general rules on what counts as an S-relevant task. I guess context can play a big role here, just as for what counts as "too much computational effort". For mathematical S, a relevant task is usually answering the question "S?" (and structurally very similar questions). For S = "that M is the best move in this state of a chess game", a relevant task is choosing a move in that state of a chess game.

Comments

# on 11 June 2004, 21:39

The suggested proposal worries me, especially when it is paired with a coarse-grained sets-of-possible-worlds account of propositions.

First, it seems the account would inherit all of the difficulties of contextualism about knowledge ascriptions, and this seems best avoided.

Second, it seems that, given plausible assumptions, the account would have unacceptable consequences. Presumably, it would take very little computational effort to existentially generalize on something one believes. But then 'Hammurabi believes that there is an x such that x is the referent of 'Hesperus' and x is the referent of 'Phosphorus'' would count as true. But it isn't.

# on 12 June 2004, 12:14

I'm afraid I don't quite understand your points.

What difficulties of contextualism about knowledge ascriptions do you have in mind? If you mean difficulties like the fact that, on a contextualist view, sentences like "Fred truly said 'Jones knows' but Jones doesn't know" might be true, these don't worry me at all. For we get the same 'difficulties' for all context-dependent sentences that don't contain indexicals.

As for your second worry, I'm not sure I understand the example. "There is an x such that x is the referent of 'Hesperus' and x is the referent of 'Phosphorus'" is an existential generalisation of, for instance, "Venus is the referent of 'Hesperus' and Venus is the referent of 'Phosphorus'". Is Hammurabi supposed to believe the latter but not the former? That's hard to imagine.

Even if that is so, I don't think belief ascriptions are closed even under easy deductive consequences, and I hope nothing I've suggested commits me to it.

# on 12 June 2004, 19:32

I apologize for being oblique. Suppose propositions are sets of possible worlds; then if Hammurabi believes that 'Hesperus' refers to Hesperus and 'Phosphorus' refers to Phosphorus, then Hammurabi believes that there is an x such that 'Hesperus' refers to x and 'Phosphorus' refers to x. The idea (this is old Richard-Soames stuff) is that any set of possible worlds that represents the complement clause of the former is one that represents the complement clause of the latter, so since Hammurabi believes the former, he believes the latter. But he doesn't. One way to address the problem is to fiddle with your account of belief ascription. Your proposal was: "To qualify for attribution of the belief that S, a subject should be able to apply her belief state (in accordance with her desires) to S-relevant tasks without too much computational effort." I was suggesting that a simple existential generalization would not take too much computational effort; hence, the account of belief attribution on offer does not solve the problem. That was what I had in mind.
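To display the structural point in a few lines of code (a toy model: the worlds and referents are made up, and I take both names to rigidly pick out Venus): the set of worlds where 'Hesperus' refers to Hesperus and 'Phosphorus' refers to Phosphorus is a subset of the set of worlds where the two names co-refer, so on the sets-of-possible-worlds account, believing the former suffices for believing the latter:

```python
from itertools import product

# Toy model: a 'world' assigns a referent to each of the two names.
# Objects and worlds are made up purely for illustration.
objects = ["Venus", "Mars", "Mercury"]
worlds = [{"Hesperus": h, "Phosphorus": p}
          for h, p in product(objects, repeat=2)]

# A proposition is the set of worlds where the sentence is true
# (worlds identified by index, since dicts aren't hashable).
conjunction = {i for i, w in enumerate(worlds)
               if w["Hesperus"] == "Venus" and w["Phosphorus"] == "Venus"}
generalization = {i for i, w in enumerate(worlds)
                  if any(w["Hesperus"] == x and w["Phosphorus"] == x
                         for x in objects)}

# Every world in the first set is in the second: believing the
# conjunction rules out all worlds outside it, hence all worlds
# outside the existential generalization too.
print(conjunction <= generalization)  # True
```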

# on 12 June 2004, 20:27

Ah, the condition about S-relevant tasks and computational effort was meant as an addition to the shopping list of vague conditions used in the previous post, not as a replacement. The complete account looks something like this:

"S believes that P" is true iff sufficiently many of the following conditions are satisfied to a sufficient degree:

1) "P" is true in all worlds to which S assigns relatively high credence (in her belief worlds, for short).

2) If it turns out that our world is any one of S's belief worlds, "P" will be regarded as true. (In 2D jargon: the belief worlds are a subset of "P"'s A-intension.)

3) S is acquainted in an ordinary and unique way with the objects mentioned in "P".

4) In S's belief worlds, the objects she is (uniquely) acquainted with in that way satisfy what the actual objects mentioned in "P" satisfy in those worlds where "P" is true.

5) In S's belief worlds, "P" (or its translation into S's public language) means roughly the same as it does in our world.

6) S is disposed to assent to "P" (or its translation) if questioned.

7) S is able to apply her belief state (in accordance with her desires) to S-relevant tasks without too much computational effort.

Anyway, "there is an x such that 'Hesperus' refers to x and 'Phosphorus' refers to x" is not an existential generalisation of "'Hesperus' refers to Hesperus and 'Phosphorus' refers to Phosphorus". Logic alone can't tell you that the latter entails the former.

# on 13 June 2004, 18:19

I wasn't claiming that 'Ham believes that there is an x such that 'Hes' refers to x and 'Phos' refers to x' was a *logical* consequence of 'Ham believes that 'Hes' refers to Hes and 'Phos' refers to Phos'; rather, the claim was that on the sets-of-possible-worlds account of propositions, any world in which 'Hes' refers to Hes and 'Phos' refers to Phos is also a world where the existential generalization is true. So since Ham believed the former, he must have believed the latter as well, if the sets-of-possible-worlds account of propositions is correct. But he didn't believe the latter, so the view is not correct. That was the objection I had in mind.

Insofar as I understand (1-7) (which might not turn out to be very far; I apologize if that's the case!), the account of belief ascription on offer does not clearly avoid the problem. It looks to me as if, were the problem avoided, (6) would be doing the work, since Ham would not be disposed to assent to the existential generalization. But this suggests an unattractive mismatch between belief and belief ascription; the total belief ascriptions that are true of Hammurabi are a poor indicator of his beliefs, and his beliefs are a poor indicator of what belief ascriptions will turn out to be true of him. (This is perhaps acceptable to those who endorse a coarse-grained account of propositions, but one who endorses a finer-grained view seems able to preserve the intuitive link between belief and belief ascription rather easily: 's believes that p' is true iff s bears the belief relation to the proposition that p; in addition, if 'p' expresses p, and s is disposed to assent to 'p' or a suitable translation, then s believes that p. The converse should be rejected, I think.)

(6) is independently problematic, however. I think Richard (1983) convincingly presents a case in which co-referential singular terms are sometimes substitutable within the scope of propositional attitude verbs, even when the subject (sincerely, rationally) dissents from the sentence obtained via substitution.

Here is Richard's case: (My apologies if you are familiar with the case; I want to briefly lay it out so I can explain how I think it makes trouble for the views under consideration.)

Suppose A is a competent, rational adult. He both sees a woman, B, across the street in a phone booth and is talking to a woman on the phone. He does not realize that the woman that he is talking to is the woman that he sees. He notices she is in danger; he sees a steamroller bearing down on the woman's phone booth. He waves to try to get her attention, but says nothing about the impending danger into the phone. A might sincerely utter the following:

(f) I believe that she is in danger
(g) It's not the case that I believe that you are in danger.

This is evidence that the material that occurs within the scope of the negation in (g) and the corresponding material in (f) differ in semantic content. But this implies that 'she is in danger' differs in semantic content from 'you are in danger' (with respect to the context we are considering). So, with respect to the context we are considering, 'she' and 'you' must differ in semantic content.

Richard argues that this conclusion is too hasty. B might sincerely utter (h):

(h) The man watching me believes that I am in danger.

This would be true, and A could report this truth via (i):

(i) The man watching you believes that you are in danger.

But, were A to utter (j), A would also express a truth (with respect to the context we are considering):

(j) I am the man watching you.

But (i) and (j) imply the negation of (g). So (g) is false with respect to the context we are considering. So co-referential singular terms are at least sometimes substitutable within the scope of a propositional attitude verb, even when the subject of the ascription would (rationally, sincerely) dissent from the sentence obtained via substitution. This suggests that (1-7) is incorrect.

Using Richard's example, we may state a stronger version of the Hammurabi problem for the view that propositions are sets of possible worlds as follows: (This is a bit awkward, but I think it works.)

Consider (k-m):

(k) (With respect to the relevant context,) A believes that 'she' refers to her and 'you' refers to you.

(l) (With respect to the relevant context,) A believes that 'she' refers to her and 'you' refers to her.

(m) (With respect to the relevant context,) A believes that there is someone such that 'she' refers to it and 'you' refers to it.

(l) follows from (k). If the sets-of-possible-worlds view of propositions is correct, (m) follows from (l). But (m) is false while (k) is true, so (m) does not follow from (k). So a Hammurabi-type problem remains for the sets-of-possible-worlds view of propositions.

# on 13 June 2004, 19:48

(1)-(7) are not meant to be individually necessary and jointly sufficient. That's why my proposal (which is mainly an elaboration of suggestions by Lewis) began with "'S believes that P' is true iff sufficiently many of the following conditions are satisfied to a sufficient degree". So I'm quite happy with violations of (6) in many cases.

The phone booth case is a good example of how difficult it can be to *say* what a subject believes even though it is rather clear what he takes the world to be like: Does A believe that B is in danger? Does A believe that you are in danger? I don't know, and I can see arguments for both answers, and both would probably be given by ordinary people. By contrast, it is fairly clear what kind of situation A believes to be in: it's a situation where he talks to one person on the phone while seeing a different person in a phone booth.

The possible worlds framework allows me to ignore the complications of belief ascription: the content of A's belief is simply the relevant class of possible situations.

If you're mainly looking for a semantics of attitude reports, we're approaching the problem from different sides. I'm mainly interested in what kind of content attitudes must have to play their psychological roles. It turns out that sets of (centered) worlds can do that job much better than, say, singular propositions.

As a semantic proposal, (1)-(7) are certainly far from elegant, and your complaints about a mismatch between beliefs and belief ascriptions are only fair. But I doubt that one can do better. I certainly don't know of any alternative that can handle all the trouble cases (without hiding behind undefined 'modes of presentation' and the like).

By the way, not only (6), but more importantly (2), (3), (4) and (7) also fail to various degrees for "Hammurabi believes that there is an x such that 'Hesperus' denotes x and 'Phosphorus' denotes x".
