Reasoning

What do we do when we draw inferences? We don't acquire new information, at least not if the reasoning is deductively valid. Rather, we try to find new representations of old information. The point of that is perhaps that we can only make our actions depend on representations of information, not directly on the information itself, and some forms of representation lend themselves more readily than others to guiding certain actions.

The problem is familiar in programming: to accomplish a given task, it is often crucial to find a data structure that makes the relevant properties of the stored data easily accessible. In principle, every data set could be represented as a single huge number, but in practice it helps a lot to represent it in terms of arrays or strings or objects with suitable properties.
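To make the analogy concrete, here is a minimal Python sketch (the temperature readings and the digit-packing scheme are invented for illustration): the same data stored once as a single number and once as a list. Both representations carry the same information, but the question "what was the second reading?" is a one-step lookup in one and arithmetic gymnastics in the other.

```python
# The same three temperature readings, stored two ways (toy example).

# Representation 1: one big number. Each reading occupies three decimal
# digits, so the sequence 21, 18, 23 becomes the integer 021018023.
packed = 21018023

def packed_reading(i, n_readings=3, width=3):
    """Recover the i-th reading by arithmetic on the packed number."""
    return (packed // 10 ** (width * (n_readings - 1 - i))) % 10 ** width

# Representation 2: a plain list, where the same question is answered
# by a single index operation.
readings = [21, 18, 23]

assert packed_reading(1) == readings[1] == 18
```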

Sometimes, the new representations we look for are linguistic representations in our public language. This is particularly obvious if the task we face is to say something. In fact, unless our mind stores information in public language, saying something always involves finding another representation of old information. Moreover, many rules of inference operate on public, linguistic representations. For instance, if asked whether the number of chairs in my flat is even, I will first try to figure out the decimal numeral representing the number of chairs in my flat and then check whether its last digit is in the list of even digits I've memorized.
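The numeral trick is easy to state as a procedure. Here is a small Python sketch of the two routes to the same answer (the function names are mine, purely for illustration): one inspects the linguistic representation, the other never produces a numeral at all.

```python
EVEN_DIGITS = set("02468")  # the memorized list of even digits

def is_even_by_numeral(n: int) -> bool:
    """Decide evenness by inspecting a representation: the last
    digit of the decimal numeral for n."""
    return str(n)[-1] in EVEN_DIGITS

def is_even_by_arithmetic(n: int) -> bool:
    """Decide evenness directly, without spelling out a numeral."""
    return n % 2 == 0

# Both procedures agree on the answer; they differ in which
# representation of the number they operate on.
for n in (4, 7, 120):
    assert is_even_by_numeral(n) == is_even_by_arithmetic(n)
```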

When the relevant action is non-verbal, linguistic forms of representation may still be helpful. For instance, when looking at a map in order to find a route, I sometimes try to find simple rules like "left, right, then left again after the church". But there is no compelling reason why I would have to do that. It is at any rate possible for there to be a reasoner who doesn't know any language at all. Also, to solve many puzzles you have to visualize a given object in a new way, e.g. as rotated or coloured in a certain manner, which is hard to do with language alone.

Finding the right representations can be tricky. This is a problem for interpretationist theories of mental content: we'd like to say that the content of someone's beliefs is determined by how he is (or better, how people in his state are) disposed to behave. But to make sense of reasoning, we have to allow that the content of someone's beliefs can be dissociated from his behaviour because he can't find the right representations.

