Reasoning V (Limited Rationality)
This is still a bit vague, but anyway.
As I remarked in the first part of this little series, from an implementation perspective, it is not surprising that applying one's beliefs and desires to a given task requires processing. Consider a 'sentences in boxes' implementation of belief-desire psychology: I have certain sentence-like items stored in my belief module, and other such items in my desire module. When I face a decision, I run a query on these modules. Suppose the question is whether I should take an umbrella with me. The decision procedure may then somehow find the sentences "It is raining" and "If I take an umbrella, I don't get wet" (or rather, their Mentalese translations) in the belief box and "I don't get wet" in the desire box. From these it somehow infers the answer, that I should take the umbrella.
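Just to make the flavour of such a query concrete, here is a toy sketch in Python. Everything in it -- the string encoding of the sentences, the crude one-step matching rule, the function name -- is my own illustrative invention; nothing in the 'boxes' picture dictates these details, and I'm not suggesting the mind runs anything like this.

```python
# Toy sketch of the 'sentences in boxes' picture (illustration only).
# Beliefs and desires are stored as plain strings; the decision
# procedure looks for a conditional belief whose antecedent is the
# candidate action and whose consequent matches something desired.

belief_box = {
    "it is raining",
    "if I take an umbrella, I don't get wet",
}

desire_box = {
    "I don't get wet",
}

def should_do(action):
    """Return True if some conditional belief says that the action
    brings about something in the desire box."""
    for belief in belief_box:
        if belief.startswith("if " + action + ", "):
            consequence = belief.split(", ", 1)[1]
            if consequence in desire_box:
                return True
    return False

print(should_do("I take an umbrella"))  # True
```

Even this caricature makes the point: extracting an answer from the stored sentences is itself a computational task, and a real decision procedure would have to do far more (for instance, actually weigh the belief that it is raining, which the toy rule never consults).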
I'm pretty sure my mind doesn't work even remotely like this, but no matter how it works, it is very probable that it uses some means or other to encode beliefs and desires, and some procedure or other to apply these encodings in particular situations. Using these procedures is reasoning.
Now folk psychology doesn't contain a theory of its own implementation. But it does account for the consequences of implementation: that we can fail to act rationally due to computational limitations or disturbing influences. We know that we are not ideally rational. We also know roughly under what conditions we are particularly prone not to act in accordance with our beliefs and desires: under time pressure and emotional stress, when we're drunk or stoned or tired, etc. (Note how strange it would be to suggest that in all these circumstances, our belief system suddenly falls apart into lots of disconnected fragments.) And we know roughly what kinds of tasks need more processing than others: answering "what's your name?" is easier than answering "what are the prime factors of 241'815?". We also know that the processing required for many kinds of task depends on training.
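To make the contrast vivid, here is one way of picturing the asymmetry in processing cost: one query is answered by a simple lookup, the other requires a genuine search. (This is purely an illustration of the asymmetry; the code is not meant to say anything about how the mind does either.)

```python
# Illustration only: two queries drawing on stored information, one a
# direct lookup, the other a nontrivial computation.

stored = {"my name": "Fred"}

def whats_your_name():
    return stored["my name"]          # a single lookup

def prime_factors(n):
    """Trial division: cheap to state, but real work to carry out."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(whats_your_name())
print(prime_factors(241815))          # [3, 5, 7, 7, 7, 47]
```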
Still, none of these are core components of folk psychology. The core of folk psychology is a sort of decision theory, perhaps together with the recognition that any implementation of the theory is subject to computational constraints. If we met a Martian who could instantly and without noticeable effort find answers to complicated maths problems, but who needed hours of undisturbed concentration to decide which limbs to move in ordinary situations -- so that most of the time, he just gets pushed around or moves in an incomprehensible, random way -- I wouldn't say he doesn't have any beliefs and desires at all. That is, folk psychology still applies even if the limits due to its implementation are very different.
(However, at least from a broadly functionalist perspective, there are limits to these limits: If for whatever reason the Martian never manages to act in accordance with his beliefs and desires, and if the same is true for most other members of his species, and for most of their counterparts, then he doesn't have any beliefs and desires, no matter what is written in his head.)
This is not a good way to say what I'm trying to say: "Due to various facts about implementation, some pieces of information stored in our belief system are not as easily accessible as others. Reasoning is making pieces of information accessible." First, it's misleading to speak of pieces of information here. Information is too coarse-grained. The very same information can at the same time be available for one action but not for another. It's also unclear what accessibility is supposed to mean here. If something like the box 'theory' above is true, we can say that answers to some Mentalese queries are easier to find, and in this sense more accessible, than others. But that's armchair speculation about implementation details. The same is true for saying, as I did, that reasoning is finding new representations of old information.
If something like the current proposal works, we don't need hyperintensional content. We can stick to our favourite, coarse-grained theory of mental representation and still account for reasoning and failures of logical omniscience simply by noticing that employing representational states to guide behaviour is a difficult computational problem.
We're not quite finished though. We have to add a clause to the conditions for content attribution, some of which I listed yesterday. For we want to say things like "Fred doesn't know the prime factors of 241'815", and "wo doesn't believe that M is a better move than N", even if all the other conditions for belief and knowledge are satisfied. Something like this might do: to qualify for attribution of the belief that S, a subject should be able to apply her belief state (in accordance with her desires) to S-relevant tasks without too much computational effort. I'm afraid there are no very general rules on what counts as an S-relevant task. I guess context can play a big role here, just as for what counts as "too much computational effort". For mathematical S, a relevant task is usually answering the question "S?" (and structurally very similar questions). For S = "that M is the best move in this state of a chess game", a relevant task is choosing a move in that state of a chess game.
The suggested proposal worries me, especially when it is paired with a coarse-grained sets-of-possible-worlds account of propositions.
First, it seems the account would inherit all of the difficulties of contextualism about knowledge ascriptions, and this seems best avoided.
Second, it seems that, given plausible assumptions, the account would have unacceptable consequences. Presumably, it takes very little computational effort to existentially generalize on something one believes. But then "Hammurabi believes that there is an x such that x is the referent of 'Hesperus' and x is the referent of 'Phosphorus'" would count as true. But it isn't.