Reasoning II

One might suggest that in fact reasoning, like (factual) learning, always means acquiring new information. After all, it is possible to acquire new information by learning that P even if what one previously knew already entailed P. In this case the new information can't be P, but it can be something else. To use Robert Stalnaker's favourite example: when you learn that all ophthalmologists are eye-doctors, the possibilities you can thereby exclude are not possibilities where some ophthalmologists aren't eye-doctors -- there are no such possibilities. Rather, they are possibilities where "ophthalmologist" means something different. You've acquired information about language. Perhaps what you learn when you learn that the square root of 1156 is 34 is similarly something about language, in this case about mathematical expressions. That would explain why we can't replace synonymous expressions in the content attribution: just as it would be wrong to say you've learned that all eye-doctors are eye-doctors, so here it would be wrong to say you've learned that the square root of 34*34 is 34.
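To make the coarse-grained picture concrete, here is a toy sketch (my own illustration, not anything from Stalnaker): propositions are modelled as sets of worlds, a necessary proposition is true at every world and so excludes nothing, while the metalinguistic proposition about what "ophthalmologist" means does exclude worlds.

```python
# Toy illustration: propositions as sets of worlds, learning as excluding worlds.
# Each world settles (a) whether all ophthalmologists are eye-doctors (always
# true -- it's necessary) and (b) what the word "ophthalmologist" means there.

worlds = [
    {"all_ophthalmologists_are_eye_doctors": True, "word_means": "eye-doctor"},
    {"all_ophthalmologists_are_eye_doctors": True, "word_means": "ear-doctor"},
]

belief_set = set(range(len(worlds)))  # before learning, both worlds are open

# The necessary proposition is the set of all worlds: learning it excludes nothing.
necessary = {i for i, w in enumerate(worlds) if w["all_ophthalmologists_are_eye_doctors"]}
print(belief_set & necessary == belief_set)       # True: no possibility eliminated

# The metalinguistic proposition is a proper subset: learning it carries information.
metalinguistic = {i for i, w in enumerate(worlds) if w["word_means"] == "eye-doctor"}
print(belief_set & metalinguistic == belief_set)  # False: the "ear-doctor" world goes
```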

But if you acquire new information on learning that the square root of 1156 is 34, presumably you can acquire the same information by figuring it out yourself. So perhaps reasoning always means learning new information?

In a sense, of course, we usually learn new information when we reason. For instance, when I've calculated the square root of 1156, I will probably have learned that I just calculated the square root of 1156. But that doesn't really help, for that's not the information that enables me to say afterwards that 34 is the square root of 1156, and the current plan was to explain this new ability in terms of new information. A better candidate is again some kind of meta-linguistic fact. Perhaps I've learned that the expressions "square root of 1156" and "34" corefer.

But that would mean that I've excluded the possibility that they don't. That is, before I made the calculation, there were worlds in my belief set where "square root of 1156" and "34" don't corefer. What are these worlds like? Are they worlds where "1156" denotes 1157? Or worlds where "square root" means "square root minus 1"? But couldn't I exclude such worlds even beforehand?

Anyway, isn't it strange to think that merely by reasoning, I can exclude previously open possibilities? Where does the information come from if it wasn't already implicit in my previous beliefs?

Moreover, as I said in the last post, not every kind of reasoning involves finding new linguistic representations. Imagine a Martian who doesn't speak any language but is very good at chess. There he sits thinking about his next move. What does he learn by thinking? Certainly not something about the semantic properties of sentences. It also seems odd to say that he learns something about the properties of his own mental representations. Couldn't he be quite ignorant of his mental representations?

So I think the new information strategy doesn't work. At least not always.

Comments

# on 04 May 2004, 10:59

Just one question:
You say that
(1) "... it is possible to acquire new information by learning that P..."
and
(2) "... when you learn that all othtalmologists are eye-doctors, the possibilities you can thereby exclude...".

And:
(3) "Perhaps I've learned that the expressions "square root of 1156" and "34" corefer."

(4) "But that would mean that I've excluded the possibility that they don't."

What does it mean? Is it the case that after I've learned that all ophthalmologists are eye-doctors, I can thereby exclude the possibilities? Does it mean that my learning entails my ability to exclude such possibilities? But in (3) and (4) it sounds a little bit like the following: the ability to exclude the possibilities is a precondition for learning that the expressions "square root of 1156" and "34" corefer.


# on 04 May 2004, 13:53

"Where does the information come from if it wasn't already implicit in my previous beliefs?" Here's my attempt to reconstruct a Stalnakerian answer (apologies if this is old news).

Consider the analogous case with a community. I know p. You know if p then q. We get together, integrate our information and come to know q. Before integration, in no circumstances would our community have behaved as if q; afterwards, it would. But in appropriate circumstances (when I was the principal agent) it would have behaved as if p; and in other circumstances (when you were the principal agent) it would have behaved as if 'if p then q'.

Analogously, my belief state could be fragmented: particular pieces of information held by distinct subpersonal devices. If 'reasoning' includes integrating the 'belief states' of distinct subpersonal information storage devices, then we can, in principle, explain how possibilities get eliminated through reasoning.
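(A toy sketch, purely for illustration: treat each fragment as the set of worlds it leaves open, and integration as intersecting those sets. Neither fragment on its own settles q, but the intersection does, so a possibility gets eliminated without any outside input.)

```python
# Illustrative sketch of fragmented belief states: fragments are sets of worlds,
# integration is set intersection.
from itertools import product

# Worlds assign truth values to the atomic sentences p and q.
worlds = [dict(zip(("p", "q"), vals)) for vals in product((True, False), repeat=2)]

fragment_1 = {i for i, w in enumerate(worlds) if w["p"]}                  # stores: p
fragment_2 = {i for i, w in enumerate(worlds) if (not w["p"]) or w["q"]}  # stores: if p then q

def entails(belief_set, sentence):
    """A belief set entails a sentence iff it is true in every world left open."""
    return all(worlds[i][sentence] for i in belief_set)

print(entails(fragment_1, "q"))                # False: the first fragment leaves q open
print(entails(fragment_2, "q"))                # False: so does the second
print(entails(fragment_1 & fragment_2, "q"))   # True: integration rules out the not-q worlds
```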

How this would deal with specific worrying cases (e.g. maths) is another matter. Also note that assigning a belief state to a particular person is now going to be *at best* an idealization --- to get a more accurate picture, we'd have to assign multiple belief states. Call in the psychologists!

# on 04 May 2004, 17:31

Enwe, I'm not sure I understand your question, but yes, I think it's plausible that when you learn that all ophthalmologists are eye-doctors, you exclude alternative possibilities about the meaning of "ophthalmologist". And I agree that this answer sounds less plausible in the maths case.

Robbie, the fragmentation answer is just what I wanted to blog about today, so all I'll say here is thanks for providing a neat transition!

(BTW, I intend to write about hyperintensional approaches tomorrow, and then try to sketch a solution that involves tinkering with the rationality constraints. So far, this "solution" is only a very vague idea though, so I don't know if the plan will work.)

# on 04 May 2004, 19:23

About the Martian. That someone doesn't possess the conceptual resources used in characterizing the content of a belief ascription shouldn't prevent the Stalnakerian from making that ascription. All that is needed, for the Stalnakerian, is that they (as theorists/belief ascribers) pick out that set of possible worlds p, such that the following holds: supposing the situations in p to be the live possibilities for X best rationalizes X's behaviour.

E.g. It can be correct to ascribe the belief <that dinner's on the table> to my dog, despite his lack of the concept 'dinner'.

Just so, when Stalnaker characterizes the belief set via metalinguistic sentences like 'S expresses a truth', he's not thereby ascribing metalinguistic concepts. He's using such concepts to pick out a relevant set of worlds. And, of course, he has a theory about how run-of-the-mill belief ascriptions can accomplish the same task without invoking metarepresentational concepts (diagonalization).

So Stalnaker can ascribe a belief <that 'Hesperus' and 'Phosphorus' corefer> to a subject, without being committed to that subject's having the concept 'reference'. All this is needed to deal with the 'easy' problem of deduction cases: e.g. a wordless Martian's non-trivial belief in a Kripkean necessary a posteriori proposition (Hesperus is Phosphorus). Qua theorists, we here need to characterize the Martian's belief state using metarepresentational concepts.

The true problem of deduction --- necessary a priori --- is really hard, but, if the above is correct, pointing to conceptually deprived subjects does not make it any harder.

# on 04 May 2004, 19:45

Hm. Isn't a diagonalized proposition on Stalnaker's view a proposition about a token sentence? True, you don't need to have the concept 'dinner' to believe that dinner is on the table, or the concept 'reference' to believe that 'Hesperus' refers to Hesperus. But don't you need to know about the existence of the term 'Hesperus' in order to know the latter? And don't you need to know about the existence of a sentence token in order to know its diagonal? The point of the Martian isn't that he is conceptually deprived. It is that he knows nothing about the existence of terms or sentences.

Anyway, I agree that necessary a posteriori truths create no real problem of logical omniscience because we can always find a contingent proposition nearby. (Except if non-reductive materialists have their way.) To this end, I would however prefer A-intensions a la Jackson and Chalmers over diagonalized propositions a la Stalnaker.

By the way, I'm not sure if Stalnaker would agree that belief content is determined by what best rationalizes behaviour. Lewis and Jackson would, but I think Stalnaker would rather explain content in some Dretske-like causal-informational way. (That's more explicit in his newer papers, but it already occurs in Inquiry, pp.18-20.)

# on 04 May 2004, 20:04

Mea culpa. You're right about Stalnaker and his Dretskian tendencies. But he also emphasises the role that belief-desire-action rationalization has to play. I guess I think of the causal stuff doing for Stalnaker what natural properties and substantive rationality do for Lewis, i.e. imposing constraints on correct attribution of attitudes, while the main work is being done by the decision theory. But perhaps that's a misleading way of presenting his view. Anyway, the indication relations that Stalnaker invokes are just as externalist/independent of concepts as the decision theory bit...

# on 04 May 2004, 20:36

Are diagonal propositions 'about' the existence of token sentences/internal representations in any damaging way?

A. Contrast two ways my belief set could entail that T exists: (1) I have the ability to distinguish possibilities in which T exists from those in which it doesn't. I plump for the former. (2) I have no such ability to distinguish: further, in all the possibilities which I can consider as live options, T exists (inter alia). (1) plausibly requires conceptual competence. (2) is a much more primitive kind of state: it just requires that T's existence be something we take for granted in acting.

B. The same kind of dizziness we find in the Martian case occurs when we attempt to ascribe 'hinge' beliefs to animals and small children: that the world exists, that time is linear etc. This looks wrong, but presumably these things follow from their belief sets. (They're presupposed by the kind of behaviour they engage in.)

C. I think care is needed when appealing to notions of 'aboutness' for propositions. What sentences are 'about' is clear: they're about the things to which the expressions in them refer. To get a grip on what (unstructured) propositions are 'about' we need to define up a new theoretical notion (as Lewis does). But then we need to make sure we're not transferring to this new notion intuitions properly concerning sentence-aboutness.

# on 04 May 2004, 21:18

Agreed, I shouldn't have used "about". Still, it seems perfectly possible to me that there be a reasoner who doesn't know that there are linguistic items. Not because he lacks concepts, but simply because he has no opinion on the matter. (Nor need he somehow presuppose the existence of linguistic items in his actions.) But then it is wrong to attribute to him any diagonal or otherwise meta-linguistic beliefs, i.e. any beliefs whose content only comprises worlds in which a certain linguistic item exists.

# pingback from on 04 May 2004, 18:05
