One might suggest that in fact reasoning, like (factual) learning, always means acquiring new information. After all, it is possible to acquire new information by learning that P even if what one previously knew already entailed P. In this case the new information can't be P, but it can be something else. To use Robert Stalnaker's favourite example: when you learn that all ophthalmologists are eye-doctors, the possibilities you can thereby exclude are not possibilities where some ophthalmologists aren't eye-doctors -- there are no such possibilities. Rather, they are possibilities where "ophthalmologist" means something different. You've acquired information about language. Perhaps what you learn when you learn that the square root of 1156 is 34 is similarly something about language, in this case about mathematical expressions. That explains why we can't replace synonymous expressions in the content attribution: just as it would be wrong to say you've learned that all eye-doctors are eye-doctors, so here it would be wrong to say you've learned that the square root of 34*34 is 34.
What do we do when we draw inferences? We don't acquire new information, at least not if the reasoning is deductively valid. Rather, we try to find new representations of old information. The point of that is perhaps that we can only make our actions depend on representations of information, not directly on the information itself, and some forms of representation lend themselves more easily to guide certain actions than others.
The problem is familiar in programming: to accomplish a given task it is often crucial to find a data structure that makes the relevant properties of the stored data easily accessible. In principle, every data set could be represented as a huge number, but in practice it helps a lot to represent it in terms of arrays or strings or objects with suitable properties.
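To make the point concrete, here is a small illustration (my own example, not from any particular codebase): the same piece of data -- a date -- stored once as a structured object and once packed into a single integer. Both carry exactly the same information, but one representation makes the properties we care about directly accessible, while the other requires work to recover them.

```python
# The same small data set -- a date -- represented two ways.

# Structured representation: the relevant properties are directly accessible.
date = {"year": 2004, "month": 6, "day": 18}
month = date["month"]  # one lookup

# Flat representation: the same information packed into one number
# (year * 10000 + month * 100 + day), as in compact date stamps.
packed = 20040618
month_from_packed = (packed // 100) % 100  # arithmetic needed to dig it out

assert month == month_from_packed == 6
```

Nothing is gained or lost in information between the two; what changes is how easily a given use (here, reading off the month) can latch onto the representation.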
Something interesting seems to happen on pp.261f. of Frank Jackson's "Why We Need A-Intensions" (Phil. Studies, March 2004):
How is truth at a world under the supposition that that world is the actual world related to truth at a world simpliciter? It would be good to have an assurance that there are no problems special to the former, as Ned Block convinced me [...]. For some sentences, their A-intension is one and the same as their C-intension. [...] For them, truth at a world and truth at a world under the supposition [that] it is the actual world are one and the same. There is a difference between a sentence's A- and C-intensions if and only if the evaluation of the sentence at a world requires reference back to the way the actual world is as a result of some explicit or implicit appearance of "actually", or an equivalent rigidification device, in the sentence. But when this happens, the role of worlds in settling truth values is the standard one, the one that applies when it is C-intensions that are in question. The only difference is that the value at every world but one depends in part or in whole on how things are at another world. There is no difference in the role of how things are at worlds in settling truth values; the difference is in which worlds are in play. To put the point in terms of a simple example: (a) "The actual F is G" is true at w under the supposition that w is the actual world iff "The F is G" is true at w; and (b) what follows "iff" in (a) contains "is true at w" and not "is true at w under the supposition that w is the actual world".
Many people have complained that they don't understand what it means to evaluate a sentence in a world considered as actual, or that however that is to be done, it won't deliver the results Jackson promises.
A free-variable tableau with root
∀x(Fx & ∃y~Fy)
closes after 4 nodes and 1 application of Closure. A standard tableau, on the other hand, will have at least 6 nodes because the root formula must be used twice. So translating free-variable tableaux into standard tableaux is not as easy as I once thought. I wonder if it would suffice to add a rule to the free-variable construction that forbids Closure to unify a variable with a constant that first appears after the variable on the branch.
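The suggested restriction can be sketched in a few lines of code (a toy illustration under my own assumptions: a branch is just an ordered list of formula strings, and first appearance is checked by naive substring matching -- no real prover works this way):

```python
# Toy sketch of the proposed rule: Closure may unify a free variable with a
# constant only if the constant does NOT first appear after the variable on
# the branch. All names here are illustrative.

def first_appearance(branch, term):
    """Index of the first node on the branch in which `term` occurs, or None."""
    for i, node in enumerate(branch):
        if term in node:
            return i
    return None

def closure_allowed(branch, variable, constant):
    """Allow unifying `variable` with `constant` only if the constant's
    first appearance is no later than the variable's."""
    v = first_appearance(branch, variable)
    c = first_appearance(branch, constant)
    if v is None or c is None:
        return False
    return c <= v

# Schematic branch for the tableau above: the constant 'a' is introduced by
# the delta rule only *after* the free variable 'x1'.
branch = ["Ax(Fx & Ey~Fy)", "Fx1 & Ey~Fy", "Fx1", "Ey~Fy", "~Fa"]
assert not closure_allowed(branch, "x1", "a")  # the quick closure is blocked
```

On this restriction, the four-node closure above would be forbidden, since the constant introduced by the delta rule first appears below the free variable; whether the restriction preserves completeness is of course a further question.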
I need to tidy up this part of my belief space. Once I complained that literal trans-world identity (as opposed to trans-world identity based on similarity) is implausible because it entails that there can be no vagueness about a thing's essential properties (for determinate properties): either the thing has the property at all worlds or not. On the other hand, I also believe that there is no big difference between individuating things as worldbound and individuating them as trans-world fusions of worldbound counterparts. Unfortunately, these two views can't both be correct.
I now have a 128 bit SSL certificate for umsu.de. For Postbote, you should now use the address https://www.umsu.de/post/ (with 'https' instead of 'http'), as that makes it much harder for other people to access the transmitted data. (The same holds for Ned, but as far as I know I'm the only one who uses that.)
A few comments on Counterparts and Actuality by Michael Fara and Timothy Williamson (via Brian, of course).
Fara and Williamson argue that if Quantified Modal Logic is enriched by an "actually" operator, then given some further assumptions there is no correct translation scheme from QML to Counterpart Theory. Here, a correct translation scheme is one that translates theorems of QML into theorems of CT and non-theorems of QML into non-theorems of CT. (Theorems of which QML? -- good question; read on.)
Lewis defends a kind of best system theory both with respect to laws of nature and with respect to mental content: something is a law of nature iff (roughly) it is part of the best theory about our world; somebody believes that snow is white iff (roughly) this is what best makes sense of his behaviour according to our belief-desire psychology.
In both cases, it looks at first sight as if the theory introduces an implausible relativity into its subject matter: we don't want to say that the laws of nature depend on what we happen to find simple (but simplicity is part of what makes a theory good), and we don't want to say that what someone believes and fears depends on what we think about his behaviour.
On page 305 of "Assertion Revisited" (in the latest issue of Phil. Studies), Robert Stalnaker suggests that the information conveyed by an utterance is the diagonal proposition associated with the utterance iff it is unclear in the relevant context which horizontal proposition the utterance expresses:
[T]he relevant maxim is that speakers presume that their addressees understand what they are saying. In terms of the two-dimensional apparatus, this presumption will be satisfied if and only if the propositional concept for the utterance [a function that assigns to every relevant possible context the horizontal proposition expressed by the utterance in that context] is constant, relative to the possible worlds that are compatible with the context. Our problematic example [of saying "Hesperus is Phosphorus" to O'Leary who doesn't yet know that Hesperus is Phosphorus], and all cases of necessary truths that would be informative (in the sense that the addressee does not already know that they are true) will be prima facie counterexamples to this maxim, and so will require reinterpretation [so that what is said is the diagonal, not the horizontal proposition].
Three comments:
I'm busy with lots of other things. Blogging will probably resume sometime next week.