If I were to make a list of how people should behave, it would include
things like
- avoid killing other animals, in particular humans
- help your friends when they are in need
etc. The list should be weighted and pruned of redundancy, so that it
can be used to assign to every possible life a goodness value. Suppose
that is done. I wonder if the list should contain (or entail) a
rule that says that good people see to it that other people are also
good:
For the "Philosophische Club" at the University of Bielefeld, I've made a short paper out of that entry on perceptual content. The proposal is still that the information we acquire through perception is the information that we have just those perceptual experiences. But more needs to be said about what that amounts to: if "having just those experiences" means having experiences with this fundamental phenomenal character, the proposal is incompatible with physicalism; if it means having just this brain state, the proposal is false. So I end up defending a kind of analytical functionalism even about demonstratives like "this experience". The main argument has something to do with skeptical scenarios. I won't repeat it here, as the paper itself is short enough.
When I went to sleep yesterday, it was still February. Apparently 2006 is the international year of desertification.
In his John Locke Lectures, Kit Fine proposes a new solution to Frege's Puzzle (see in particular lecture 2 (warning: 'RTF' format -- unless you use a perfect intrinsic duplicate of Kit Fine's computer, that means you probably have to guess all the logical symbols)).
The puzzle, according to Fine, is that there is an intuitive semantic difference between "Cicero = Cicero" and "Cicero = Tully". That is puzzling on the assumption that the semantic contribution names make to sentences is only their referent.
Never dist-upgrade while running out of battery power. At least, always back up papers on attitude reports before destroying the file system...
Looking out of the window, I come to believe that it's snowing
outside. I don't just add this single belief to my stock of beliefs; I
conditionalize on something. On what?
It doesn't seem to be the proposition that the scene before my eyes
contains the very features that caused my perception. Arguably, what
caused my perception is H2O falling from the sky. If that were what I
conditionalized on, I would take my present experience as
evidence that snow is made of H2O, rather than XYZ. But I don't.
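For reference, the update rule at issue is just standard Bayesian conditionalization; the open question in the entry is what to substitute for the evidence E. A minimal sketch (the notation is mine, the snow/XYZ example is the entry's own):

```latex
% Conditionalization: on receiving total evidence E, the new credence in
% any proposition A is the old credence in A conditional on E:
Cr_{\text{new}}(A) \;=\; Cr_{\text{old}}(A \mid E)
% If E were "the scene before my eyes has the very features that caused
% this perception" (i.e., H2O falling from the sky), then
%   Cr_new(snow is H2O) = Cr_old(snow is H2O | H2O is falling),
% which would be high -- so the experience would wrongly count as
% evidence that snow is made of H2O rather than XYZ.
```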
I would like to say that
If X necessarily entails all truths, and if X's
A-intension coincides with its C-intension, then X a priori entails
all truths.
For suppose X -> P is not a priori for some truth
P. Then X -> P is an a posteriori necessity. So we need
information about the actual world to know what C-intension X -> P expresses, and whether it is true. But by assumption, this
information is already contained in X; and since X's C-intension
coincides with its A-intension, it cannot be hidden away in X in such a
way that we would need further information to find out that X contains
it. Hence X a priori entails X -> P; and so
X -> P is itself a priori.
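The argument can be restated using the standard two-dimensionalist glosses of apriority and necessity; here is a hedged sketch (the notation is mine, not part of the entry):

```latex
% Standard 2D glosses (assumed, not argued for here):
%   S is a priori   iff  S's A-intension is true at every world
%                        (considered as actual);
%   S is necessary  iff  S's C-intension is true at every world
%                        (considered as counterfactual).
\text{apriori}(S) \iff \forall w\; A_S(w)
\qquad
\text{necessary}(S) \iff \forall w\; C_S(w)
% The claim: if X necessarily entails every truth P, then C_{X \to P} is
% true at every world; and if in addition A_X = C_X, the argument
% concludes that A_{X \to P} is true at every world as well, i.e. that
% X \to P is a priori.
```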
It's just been rather cold here for the last few weeks, so I've
spent most of my free time in cafes, writing a lengthy paper on
the semantics of attitude reports. (I'll post it when it's reached
Draft status.) In addition, the OS on my desktop
computer is really broken right now. I guess I'll have to switch to something that works.
This argument looks a lot better than it is:
Suppose some physical event E is causally necessitated by a certain distribution of physical properties P. Then if P occurs, E is bound to occur as well, no matter what else is the case. In particular, whether or not some non-physical event M also occurs before E will make no difference to E's occurrence. (Perhaps M nevertheless causes E, if E is overdetermined, or perhaps M is causally relevant in some even weaker sense, but at any rate M does not make a difference for E.)
To see the problem with this argument, consider a deterministic world where the occurrence of any event E at time t is causally necessitated by the state of the world at t-2: it obviously does not follow that the state of that world at t-1 makes no difference to E's occurrence.
For some reason, I find Moore's refutation of idealism ("here is a hand; therefore there is an external world") much more convincing than his refutation of skepticism ("I know that here is a hand; therefore I know that I am not a brain in a vat"). Why is that?
In both cases, Moore's argument would not convince his opponent, who would obviously reject Moore's premise. So that's not the difference. I think the difference also isn't that skepticism is a philosophically stronger position than idealism. Rather, it seems to me that the premise against idealism is much more certain than the anti-skeptical premise. That here is a hand (or at least that there are hands) is about as certain as non-logical truths get; that I know that here is a hand is not. If I were to compile a list of Moorean facts -- of facts that are at least as certain as any philosophical argument against them --, I would include all kinds of facts about material objects, other people, experiences, mathematics and modality, but knowledge claims probably wouldn't make the list.