
At i, A is not F

I've thought a bit more about the comments Michael Fara left last week, and I don't find my points very convincing any more. The following is partly a correction, but mostly just thinking out loud about a more general semantic question.

The general question is how to interpret sentences of the form

1) At i, A is F
2) At i, A is not F

where 'i' denotes something like a time, a place, or a world. There are a dozen proposals for interpreting (1) in the temporal case, invoking temporal parts or relations to times or whatever. Most of these proposals can be applied to other indices as well. But let's put that aside. Suppose we understand how to interpret instances of (1) in easy cases. The hard cases I have in mind are cases where A doesn't exist exactly once at i. The precise definition of these cases depends on the question I've put aside, but I hope it is reasonably clear what I mean anyway. Not existing exactly once at i means either not existing at i at all, or multiply existing at i. Plausible examples of the first kind: I do not exist in 1758; I do not exist on Alpha Centauri; I do not exist at any world containing only empty space-time. Controversial examples of the second kind: if I get split into two persons tonight, I will doubly exist tomorrow; if river R has two branches where it crosses the border to country C, R doubly exists at the border to C; if at some world two people resemble me to exactly the same degree in all extrinsic and intrinsic respects, I doubly exist at that world.

Tableaux Trouble

I have a problem with the new, free-variable-powered version of my tree prover (which so far exists only on my hard disk): it doesn't terminate on some valid formulas, at least not within reasonable time.

A very nice feature of ordinary tableaux is that there is a mechanical procedure for building a ("canonical") tableau that will always close as long as there is any closed tableau for the input formula. To my knowledge, no such procedure has been found for free-variable tableaux. The problem, I think, is to decide at each stage whether to apply an ordinary expansion rule or the Closure rule. For many trees, it is best to apply Closure after every expansion. But for some formulas, this procedure will leave the tree forever open. The common response to this problem is apparently to try out all possible decisions at every point, using backtracking and iterative deepening of the search depth.
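To make the backtracking-plus-iterative-deepening idea concrete, here is a minimal sketch in Python. Everything in it is my own scaffolding, not code from the prover: `successors` is assumed to yield one state per available decision (apply Closure, or apply one of the expansion rules), and `is_closed` is assumed to test whether every branch is closed.

```python
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")  # a tableau plus whatever branch bookkeeping it needs

def prove(start: State,
          successors: Callable[[State], Iterable[State]],
          is_closed: Callable[[State], bool],
          max_bound: int = 50) -> bool:
    """Search for a closed tableau by trying out all possible decisions,
    backtracking on failure, with an iteratively deepened depth bound."""

    def search(state: State, bound: int) -> bool:
        if is_closed(state):
            return True
        if bound == 0:
            return False  # budget exhausted on this iteration; backtrack
        # One successor per choice point: Closure, or some expansion rule.
        return any(search(s, bound - 1) for s in successors(state))

    # Retry with ever larger bounds, so a too-greedy Closure strategy on one
    # iteration can't leave the tree forever open on the next.
    return any(search(start, bound) for bound in range(1, max_bound + 1))
```

The deepening is what recovers something like the canonical-procedure guarantee: if some sequence of decisions yields a closed tableau of depth n, the iteration with bound n will find it.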

Reasoning V (Limited Rationality)

This is still a bit vague, but anyway.

As I remarked in the first part of this little series, from an implementation perspective, it is not surprising that applying one's beliefs and desires to a given task requires processing. Consider a 'sentences in boxes' implementation of belief-desire psychology: I have certain sentence-like items stored in my belief module, and other such items in my desire module. When I face a decision, I run a query on these modules. Suppose the question is whether I should take an umbrella with me. The decision procedure may then somehow find the sentences "It is raining" and "If I take an umbrella, I don't get wet" (or rather, their Mentalese translations) in the belief box and "I don't get wet" in the desire box. From these it somehow infers the answer, that I should take the umbrella.
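Just to fix ideas, here is a toy version of such an implementation; the mini-Mentalese and all names are made up for illustration, and the 'inference' is the crudest possible lookup:

```python
# Toy 'sentences in boxes' model: Mentalese sentences are plain strings,
# conditional beliefs are (antecedent, consequent) pairs.

belief_box = {
    "it is raining",
    ("I take an umbrella", "I don't get wet"),
}
desire_box = {"I don't get wet"}

def should_do(action: str) -> bool:
    """Query the boxes: is there a believed conditional linking the action
    to something in the desire box? Even this trivial inference requires
    searching the stored items -- that is the processing in question."""
    return any(
        item == (action, consequent)
        for item in belief_box if isinstance(item, tuple)
        for consequent in desire_box
    )

print(should_do("I take an umbrella"))  # True
```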

Reasoning IV (Hyperintensional Content)

Why not simply use a notion of content on which belief isn't closed under strict implication? Then it will be much easier to say that reasoning always delivers new content.

There is no shortage of fine-grained notions of content. We could use English sentences, or classes of intensionally isomorphic sentences, or bundles of tuples of objects and properties ('singular propositions') together with modes of presentation, or whatever. The tricky part is to say what determines whether a subject has a belief or desire with such a content.

Reasoning III (Fragmentation)

As Robbie Williams remarked in the comments, perhaps what we do when we reason is put parts of our fragmented belief space together. However, I doubt that this will do as a general solution.

First, at least in the context of an interpretationist account of content, it doesn't suffice for fragmentation that the relevant beliefs are somehow stored in different parts of the brain. Rather, if my beliefs are fragmented, say, into a compartment in which I believe P and one in which I don't, this must show up in my behaviour, more or less as follows: 1) in some contexts, the best explanation of some of my actions involves the assumption that I take the world to be P; but also 2) in some contexts, the best explanation of some of my actions involves the assumption that I don't take the world to be P; and 3) the discrepancy can't be explained as a change of belief.

Reasoning II

One might suggest that in fact reasoning, like (factual) learning, always means acquiring new information. After all, it is possible to acquire new information by learning that P even if what one previously knew already entailed P. In this case the new information can't be P, but it can be something else. To use Robert Stalnaker's favourite example: when you learn that all ophthalmologists are eye-doctors, the possibilities you can thereby exclude are not possibilities where some ophthalmologists aren't eye-doctors -- there are no such possibilities. Rather, they are possibilities where "ophthalmologist" means something different. You've acquired information about language. Perhaps what you learn when you learn that the square root of 1156 is 34 is similarly something about language, in this case about mathematical expressions. That explains why we can't replace synonymous expressions in the content attribution: just as it would be wrong to say you've learned that all eye-doctors are eye-doctors, so here it would be wrong to say you've learned that the square root of 34*34 is 34.

Reasoning

What do we do when we draw inferences? We don't acquire new information, at least not if the reasoning is deductively valid. Rather, we try to find new representations of old information. The point of that is perhaps that we can only make our actions depend on representations of information, not directly on the information itself, and some forms of representation lend themselves more readily to guiding certain actions than others.

The problem is familiar in programming: to accomplish a given task it is often crucial to find a data structure that makes the relevant properties of the stored data easily accessible. In principle, every data set could be represented as a huge number, but in practice it helps a lot to represent it in terms of arrays or strings or objects with suitable properties.
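A throwaway illustration of that point (the encoding is my own, chosen only for concreteness): the same four values stored as one big integer and as a list. Both carry exactly the same information, but only the list makes the i-th entry directly accessible.

```python
data = [17, 4, 23, 8]

# Pack the same information into a single number, as base-100 digits
# (assuming every entry is below 100).
as_number = sum(d * 100**i for i, d in enumerate(data))  # 8230417

def entry(n: int, i: int) -> int:
    """Recover the i-th entry arithmetically: possible, but the structure
    must be decoded on every access."""
    return (n // 100**i) % 100

print(data[2])              # 23 -- the list exposes the entry directly
print(entry(as_number, 2))  # 23 -- same information, extra decoding work
```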

Jackson on Truth at Worlds Considered as Actual

Something interesting seems to happen on pp.261f. of Frank Jackson's "Why We Need A-Intensions" (Phil. Studies, March 2004):

How is truth at a world under the supposition that that world is the actual world related to truth at a world simpliciter? It would be good to have an assurance that there are no problems special to the former, as Ned Block convinced me [...]. For some sentences, their A-intension is one and the same as their C-intension. [...] For them, truth at a world and truth at a world under the supposition [that] it is the actual world are one and the same. There is a difference between a sentence's A- and C-intensions if and only if the evaluation of the sentence at a world requires reference back to the way the actual world is as a result of some explicit or implicit appearance of "actually", or an equivalent rigidification device, in the sentence. But when this happens, the role of worlds in settling truth values is the standard one, the one that applies when it is C-intensions that are in question. The only difference is that the value at every world but one depends in part or in whole on how things are at another world. There is no difference in the role of how things are at worlds in settling truth values; the difference is in which worlds are in play. To put the point in terms of a simple example: (a) "The actual F is G" is true at w under the supposition that w is the actual world iff "The F is G" is true at w; and (b) what follows "iff" in (a) contains "is true at w" and not "is true at w under the supposition that w is the actual world".

Many people have complained that they don't understand what it means to evaluate a sentence in a world considered as actual, or that however that is to be done, it won't deliver the results Jackson promises.

Translating Free-Variable Tableaux

A free-variable tableau with root

∀x(Fx & ∃y¬Fy)

closes after 4 nodes and 1 application of Closure. A standard tableau, on the other hand, will have at least 6 nodes because the root formula must be used twice. So translating free-variable tableaux into standard tableaux is not as easy as I once thought. I wonder if it would suffice to add a rule to the free-variable construction that forbids Closure from unifying a variable with a constant that first appears after the variable on the branch.
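The bookkeeping for such a rule looks cheap. A sketch under the obvious assumptions (all names hypothetical): record the branch position at which each free variable and each constant first appears, and have Closure refuse any unifier that binds a variable to a constant introduced later.

```python
def closure_may_unify(variable: str, constant: str,
                      first_seen: dict[str, int]) -> bool:
    """first_seen maps each free variable and constant to the index of the
    branch node where it first appeared. Forbid binding a variable to a
    constant that entered the branch only after the variable did."""
    return first_seen[constant] <= first_seen[variable]

# In the example above: the gamma rule introduces x1 before the delta rule
# introduces the constant c, so closing the branch via x1 := c is blocked,
# forcing the tree to grow the way the standard tableau does.
print(closure_may_unify("x1", "c", {"x1": 2, "c": 5}))  # False
```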

Trans-Index-Identity and Hyper-Essentialism (Again)

I need to tidy up this part of my belief space. Once I complained that literal trans-world identity (as opposed to trans-world identity based on similarity) is implausible because it entails that there can be no vagueness about a thing's essential properties (at least for determinate properties): either the thing has the property at all worlds or it doesn't. On the other hand, I also believe that there is no big difference between individuating things as worldbound and individuating them as trans-world fusions of worldbound counterparts. Unfortunately, these two views can't both be correct.
