Suppose you and I both face a choice between several different
options. Say, we both have to pick a ball out of a bag of 100
balls. We win a prize if we make the same choice. But we have no means
to communicate. Moreover, our only relevant interest is winning the prize; otherwise we are completely indifferent between the options.
If one of the options is somehow salient, say one ball is red and all the others white, most people will choose that one. And
wisely so, as many people following this strategy win the prize,
whereas hardly anyone picking a white ball does. However, is this a
rational decision among perfectly rational agents who know of
each other's rationality and preferences? (I also assume that the
agents know that they make exactly the same judgements about
salience.)
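To make the payoff comparison concrete, here's a quick back-of-the-envelope calculation (the function is my own illustration, not anything from the literature): if both players pick uniformly at random, they match with probability 1/100; if both restrict attention to the white balls, 1/99; if both go for the unique salient ball, they match for sure.

```python
from fractions import Fraction

def match_probability(n_options: int) -> Fraction:
    """Probability that two independent uniform picks over n options coincide."""
    return Fraction(1, n_options)

# Both pick uniformly at random among all 100 balls:
print(match_probability(100))   # 1/100

# Both pick uniformly among the 99 white balls:
print(match_probability(99))    # 1/99

# Both pick the unique salient (red) ball:
print(match_probability(1))     # 1
```

So following salience is far better than randomizing; the question above is whether this advantage can be derived from rationality alone.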
Last week the RSI got worse, and this week I've spent some more time on the tree prover. Here's the current version. It works in Mozilla and (more slowly) in Opera on Linux, and doesn't work in Konqueror. I don't have any other browsers here, so feedback on how it behaves, especially in Safari and Internet Explorer, is welcome.
The prover is generally faster and more stable than the old one. But it still does badly on some formulas, like ((∀x(Px→Rx) ∧ ∀y(Qy→Sy)) ∧ (∃zPz ∧ ∃zQz)) → ∃x∃y((Px∧Qy) ∧ (Rx∧Sy)). There are some optimizations (e.g. merging) under the hood that would improve performance, but they are currently turned off because they make it very hard to translate the resulting free-variable tableau into a sentence tableau. My plan is to turn these features on automatically when a proof search takes too long, and not to display a tree in that case. I'm also thinking about trying to find simpler proofs after a first proof has been found: the tableau for the above formula doesn't look like the smallest possible one.
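The fallback plan can be sketched like this (a toy illustration in Python, not the prover's actual JavaScript; the `search` function is a dummy stand-in for the real tableau search, and all names and the time budget are made up):

```python
import time

def search(formula, *, optimised, budget):
    """Dummy stand-in for the real tableau search.
    Returns a proof object, or None if the time budget runs out."""
    deadline = time.monotonic() + budget
    # Pretend the unoptimised search only finishes 'easy' formulas in time.
    if optimised or formula.startswith("easy:"):
        return ("proof", formula)
    while time.monotonic() < deadline:
        pass  # simulate a fruitless search until the budget is exhausted
    return None

def prove_with_fallback(formula, budget=0.01):
    # First pass: display-friendly rules, so the free-variable tableau
    # can be translated into a sentence tableau for display.
    proof = search(formula, optimised=False, budget=budget)
    if proof is not None:
        return proof, True    # second component: display a tree
    # Timed out: retry with merging etc. switched on; in that case
    # no tree is displayed.
    proof = search(formula, optimised=True, budget=budget)
    return proof, False

print(prove_with_fallback("easy: P -> P")[1])   # True
print(prove_with_fallback("hard formula")[1])   # False
```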
To improve the detection of invalid formulas, I've added a very simple countermodel finder. What it does is simply check all possible interpretations of the root formula on the sets { 0 }, { 0,1 }, etc. This works surprisingly well since many interesting invalid formulas have a countermodel with a very small domain. The countermodels are currently not displayed, but that will change soon.
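For concreteness, the brute-force idea can be sketched like this (a simplified illustration, not the prover's actual code; formulas are represented as Python callables over a domain and an interpretation of the unary predicates, and all names are mine):

```python
from itertools import product

def find_countermodel(formula, predicates, max_size=3):
    """Try every interpretation of the given unary predicates on the
    domains {0}, {0,1}, ... and return the first one that makes the
    formula false, or None if no countermodel of size <= max_size exists."""
    for size in range(1, max_size + 1):
        domain = list(range(size))
        # Every assignment of a truth value to P(d), for each predicate P
        # and each element d of the domain:
        for bits in product([False, True], repeat=len(predicates) * size):
            interp = {
                (p, d): bits[i * size + d]
                for i, p in enumerate(predicates)
                for d in domain
            }
            if not formula(domain, interp):
                return domain, interp
    return None

# The invalid schema  ∃xPx → ∀xPx  has a countermodel on a two-element domain:
invalid = lambda dom, i: (not any(i[("P", d)] for d in dom)) or all(i[("P", d)] for d in dom)
print(find_countermodel(invalid, ["P"]))
# ([0, 1], {('P', 0): False, ('P', 1): True})

# The valid schema  ∀xPx → ∃xPx  has none:
valid = lambda dom, i: (not all(i[("P", d)] for d in dom)) or any(i[("P", d)] for d in dom)
print(find_countermodel(valid, ["P"]))
# None
```

As the example shows, even a two-element domain suffices to refute a classic invalid schema, which is why the naive enumeration works as well as it does.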
On one of our many conceptions of meaning, the meaning of an expression is what you know when you know the meaning of the expression. I don't think this is a particularly useful conception. Besides, it violates some commonplace truths about meaning, like that expressions of different languages can have the same meaning. For suppose the meaning of the German "schwarz" is identical to the meaning of the English "black". Then by the above rule anyone who knows the meaning of "black" should know the meaning of "schwarz", which isn't so.
A sentence is true in a fiction iff it is true at certain worlds, say, at the closest worlds where the pretense which the narrator and the audience engage in is not only pretense. But to evaluate whether the sentence is true at a world, do we treat the world as actual or as counterfactual?
It seems that there could easily be stories in which water isn't H2O, and Hesperus isn't Phosphorus. This suggests that the worlds must be treated as actual. However, it isn't clear that these terms ("water" etc.) are sufficiently rigid, and if they are not, there are also worlds considered as counterfactual at which the identities fail. Could there be a story in which the stuff that actually is water isn't the stuff that actually is H2O? I'm not sure.
"Dynamical basis of intentions and expectations in a simple neuronal network" (PNAS subscription required, there's a free abstract):
[R]ecent indirect evidence suggests that intentions and expectations may arise in behavior-generating networks themselves even in primates [...]. In that case, interestingly, the intentions and expectations inferred from behavioral observations are not always identical to the intentions and expectations that are consciously accessible [...]. In this study we have demonstrated how such intentions and expectations arise automatically in the feeding network of Aplysia.
The "intentions and expectations" found are basically this: if you repeatedly present an A-stimulus to one of Aplysia's central pattern generators, and then switch to a B-stimulus, the pattern generator will respond as if it had received another A-stimulus. Only after several B-stimuli will it switch to responses adequate to B. In this sense the animal expects to receive further A-stimuli, and intends to produce further A-behaviour. In a similar, slightly stretched, sense one could say that the animal believes that it is in an A-environment (an environment containing seaweed). This belief is a certain state of the synapse linking Aplysia's neurons B20 and B8.
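The switching behaviour just described amounts to a simple hysteresis, which can be modelled in a few lines (a toy model of my own; the threshold of three stimuli is an illustrative choice, not a figure from the paper):

```python
# Toy model: the generator keeps producing responses adequate to its
# current mode until it has seen several conflicting stimuli in a row.
class PatternGenerator:
    def __init__(self, switch_after=3):
        self.mode = "A"           # current "expectation"
        self.switch_after = switch_after
        self.streak = 0           # consecutive stimuli conflicting with the mode

    def respond(self, stimulus):
        if stimulus == self.mode:
            self.streak = 0
        else:
            self.streak += 1
            if self.streak >= self.switch_after:
                self.mode = stimulus
                self.streak = 0
        return self.mode

cpg = PatternGenerator()
print([cpg.respond(s) for s in ["A", "A", "B", "B", "B", "B"]])
# ['A', 'A', 'A', 'A', 'B', 'B']
```

The first two B-stimuli still get A-responses, just as in the Aplysia experiment; only the third B-stimulus flips the internal state.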
I'm back. Here's a question that occurred to me while I was listening to Dave Chalmers's talk on scrutability.
First some background. One might think that for every world w there is a complete description D true at w such that all and only the sentences true at w follow a priori from D: simply let D contain all sentences true at w. Then all sentences true at w will be a priori entailed by D. However, if "true at" is read counterfactually, sometimes sentences false at w will also be so entailed. Consider Twin World, where XYZ occupies the water role. "Water doesn't occupy the water role" is true at Twin World. But "water occupies the water role" is a priori, and hence a priori entailed by everything. Thus every complete description of Twin World a priori entails a contradiction (and with it every sentence whatsoever).
For the next few days I'll be in Konstanz at the 'Concepts and the A Priori' conference. I'll probably stay a bit longer in southern Germany and Switzerland to visit some friends and relatives and mountains before I return to Berlin in a week or two.
If the individuation of mental states depends at least partly on their causal roles, then it depends on the laws of nature (including possibly psychophysical laws). For if the laws differ between world 1 and world 2, a state with a given intrinsic nature can have causal role R in world 1 but lack R in world 2.
Assume world 1 is our world and world 2 is a world that contains a perfect spatiotemporal duplicate of our galaxy but lots of weird things elsewhere that contradict our laws. So the laws of world 2 are not the laws of our world. Then our duplicates in world 2 could have quite different mental states than we do.
But that sounds strange. I would have thought that my mental states do not depend on what goes on outside the Milky Way. We might also get the externalist problem about self-knowledge: if whether I believe P or Q depends on far-away events, how can I know that I believe P rather than Q if I don't know about those events?
Jonathan Schaffer argues (in Analysis 2001) that Relevant Alternatives Theories of knowledge (RATs) such as Lewis's fail because of Missed Clues cases:
Professor A is testing a student, S, on ornithology. Professor A shows S a goldfinch and asks, 'Goldfinch or canary?' Professor A thought this would be an easy first question: goldfinches have black wings while canaries have yellow wings. S sees that the wings are black (this is the clue) but S does not appreciate that black wings indicate a goldfinch (S misses the clue). So S answers, 'I don't know'.
We want to say that S doesn't know that the bird is a goldfinch. Yet it seems that S's evidence rules out all relevant alternatives: situations with goldfinch-perceptions but no goldfinches are skeptical scenarios, and those are usually regarded as irrelevant.
The computer is working again. Now I have to catch up with 200 non-junk mails.