
Quantification in A-intensional semantics

Once upon a time, two quite different roles were assigned to truth-conditions: 1) they are what you know when you understand a sentence and what people communicate with utterances of the sentence; 2) they determine the truth value of the sentence when it is prefixed with modal operators. Unfortunately, there are sentences where these two roles come apart, namely context-dependent sentences, like "it's raining" and "I am late", and sentences containing rigid designators, like "London is overcrowded" and "Hesperus = Phosphorus". Since virtually all sentences ever uttered belong to one of these two classes (or both), the idea that we can assign to sentences truth-conditions that serve both (1) and (2) must be given up. The common strategy for dealing with this, at least among philosophers, is to regard truth-conditions in the sense of (2) as the proper topic of compositional semantics and to assume that some other ("pragmatic") story will deliver truth-conditions in the sense of (1) out of the truth-conditions in the sense of (2) together with various contextual features. I find that cumbersome and unmotivated. In my view, truth-conditions in the sense of (1) should be the primary topic of semantics, and I don't see any reason for the roundabout two-step procedure via truth-conditions in the sense of (2). I wouldn't complain if that procedure turned out to work sufficiently well, but for all I can tell, it doesn't work well at all. So I think it would be better to do compositional semantics directly for truth-conditions in the sense of (1). Since Frank Jackson calls such truth-conditions "A-propositions" or "A-intensions", I use "A-intensional semantics" for that project.
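
To make the contrast vivid, here is a toy model of the two kinds of truth-conditions for "Hesperus = Phosphorus" (a sketch only; the scenario names and role assignments are invented for illustration and carry no theoretical weight). The A-intension varies with which scenario is considered as actual; the C-intension, once reference is fixed by the actual scenario, no longer varies across counterfactual worlds.

    # Toy two-dimensional model: a "scenario" (a world considered as actual)
    # fixes what the names pick out; the C-intension then evaluates the
    # sentence with those referents held fixed. All details are made up.

    scenarios = {
        "actual": {"evening_star": "Venus", "morning_star": "Venus"},
        "twin":   {"evening_star": "Mars",  "morning_star": "Venus"},
    }

    def a_intension(scenario):
        """A-intension of 'Hesperus = Phosphorus': true at a scenario iff,
        taking that scenario as actual, one object plays both roles."""
        s = scenarios[scenario]
        return s["evening_star"] == s["morning_star"]

    def c_intension(actual_scenario, counterfactual_world):
        """C-intension relative to an actual scenario: the names rigidly
        denote whatever plays their roles there, so the truth value no
        longer depends on the counterfactual world."""
        s = scenarios[actual_scenario]
        return s["evening_star"] == s["morning_star"]

    print(a_intension("actual"))               # True
    print(a_intension("twin"))                 # False: the roles come apart
    print(c_intension("actual", "any world"))  # True at every world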

Recursive Values

If I were to make a list of how people should behave, it would include things like

  • avoid killing other animals, in particular humans
  • help your friends when they are in need

etc. The list should be weighted and pruned of redundancy, so that it can be used to assign to every possible life a goodness value. Suppose that is done. I wonder if the list should contain (or entail) a rule that says that good people see to it that other people are also good:
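
Such a rule, however exactly it is spelled out, would make the goodness values recursive: my value would partly depend on the values of the people whose goodness I foster, and theirs on others' in turn. Here is a minimal sketch of how such values could still be well-defined, computed by iterating to a fixed point; the weight, the example network and the scoring are all invented for illustration, not taken from the entry.

    def recursive_goodness(base_scores, promotes, weight=0.3, iterations=100):
        """base_scores: person -> goodness according to the non-recursive rules.
        promotes: person -> list of people whose goodness they foster.
        The recursive component credits you for the goodness of those you help;
        with a small enough weight the iteration settles into a fixed point."""
        scores = dict(base_scores)
        for _ in range(iterations):
            scores = {
                p: base_scores[p] + weight * sum(scores[q] for q in promotes[p])
                for p in scores
            }
        return scores

    base = {"ann": 0.8, "bob": 0.5, "cem": 0.2}
    promotes = {"ann": ["bob"], "bob": ["cem"], "cem": []}
    print(recursive_goodness(base, promotes))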

More on what we learn from experience

For the "Philosophische Club" at the university of Bielefeld, I've made a short paper out of that entry on perceptual content. The proposal is still that the information we acquire through perception is the information that we have just those perceptual experiences. But more needs to be said about what that amounts to: if "having just those experiences" means having experiences with this fundamental phenomenal charater, the proposal is incompatible with physicalism; if it means having just this brain state, the proposal is false. So I end up defending a kind of analytical functionalism even about demonstratives like "this experience". The main argument has something to do with skeptical scenarios. I won't repeat it here, as the paper itself is short enough.

icmp_seq=1 ttl=250 time=3800000000 ms

When I went to sleep yesterday, it was still February. Apparently 2006 is the international year of desertification.

Fine on Frege's Puzzle

In his John Locke Lectures, Kit Fine proposes a new solution to Frege's Puzzle (see in particular lecture 2; warning: 'RTF' format -- unless you use a perfect intrinsic duplicate of Kit Fine's computer, you will probably have to guess all the logical symbols).

The puzzle, according to Fine, is that there is an intuitive semantic difference between "Cicero = Cicero" and "Cicero = Tully". That is puzzling on the assumption that the semantic contribution names make to sentences is only their referent.

Note to self

Never dist-upgrade while running out of battery power. At least, always back up papers on attitude reports before destroying the file system...

What do we learn from experience?

Looking out of the window, I come to believe that it's snowing outside. I don't just add this single belief to my stock of beliefs; I conditionalize on something. On what?

It doesn't seem to be the proposition that the scene before my eyes contains the very features that caused my perception. Arguably, what caused my perception is H2O falling from the sky. If that were what I conditionalized on, I would take my present experience as evidence that snow is made of H2O rather than XYZ. But I don't.
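
A minimal Bayesian sketch of the point, with invented numbers (the second candidate evidence proposition anticipates the proposal discussed in the entry above): conditionalizing on "the cause of my experience is H2O falling" would settle snow's chemical composition, whereas conditionalizing on "I am having just this experience" leaves it open.

    # Toy model: two hypotheses, two candidate evidence propositions.
    priors = {"snow is H2O": 0.5, "snow is XYZ": 0.5}

    likelihood = {
        "the cause of my experience is H2O falling": {"snow is H2O": 1.0, "snow is XYZ": 0.0},
        "I am having just this experience":          {"snow is H2O": 1.0, "snow is XYZ": 1.0},
    }

    def conditionalize(priors, evidence):
        """Standard Bayesian conditionalization on the given evidence proposition."""
        unnormalized = {h: priors[h] * likelihood[evidence][h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    print(conditionalize(priors, "the cause of my experience is H2O falling"))
    # {'snow is H2O': 1.0, 'snow is XYZ': 0.0} -- experience would reveal snow's chemistry
    print(conditionalize(priors, "I am having just this experience"))
    # {'snow is H2O': 0.5, 'snow is XYZ': 0.5} -- it doesn't, which fits the intuition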

Trouble with Centered Worlds

I would like to say that

If X necessarily entails all truths, and if X's A-intension coincides with its C-intension, then X a priori entails all truths.

For suppose X -> P is not a priori for some truth P. Then, since X necessarily entails all truths, X -> P is an a posteriori necessity. So we need information about the actual world to know what C-intension X -> P expresses, and whether it is true. But by assumption, this information is already contained in X; and since X's C-intension coincides with its A-intension, it cannot be hidden away in X in such a way that we would need further information to find out that X contains it. Hence X a priori entails X -> P; and so X -> P is itself a priori.
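
In the usual two-dimensionalist terms (this regimentation is mine, not from the entry), write A_S and C_S for the A- and C-intension of a sentence S, let v range over centered worlds and w over possible worlds, and count a sentence as a priori iff its A-intension is true at every centered world and as necessary iff its C-intension is true at every world. The claim then reads:

    \[
    \begin{aligned}
    &\text{Premises:} && \forall w:\ C_{X \to P}(w) = 1 \ \text{ for every truth } P, \qquad A_X = C_X,\\
    &\text{Claim:}    && \forall v:\ A_{X \to P}(v) = 1 \ \text{ for every truth } P.
    \end{aligned}
    \]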

Thanks, yes, I'm fine

It's just been rather cold here for the last few weeks, so I've spent most of my free time in cafes, writing a lengthy paper on the semantics of attitude reports. (I'll post it when it's reached Draft status.) In addition, the OS on my desktop computer is really broken right now. I guess I'll have to switch to something that works.

Making a Difference

This argument looks a lot better than it is:

Suppose some physical event E is causally necessitated by a certain distribution of physical properties P. Then if P occurs, E is bound to occur as well, no matter what else is the case. In particular, whether or not some non-physical event M also occurs before E will make no difference to E's occurrence. (Perhaps M nevertheless causes E, if E is overdetermined, or perhaps M is causally relevant in some even weaker sense, but at any rate M does not make a difference for E.)

To see the problem with this argument, consider a deterministic world where the occurrence of any event E at a time t is causally necessitated by the state of the world two steps earlier, at t-2: it obviously does not follow that the state of that world at t-1 makes no difference to E's occurrence.
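
A minimal sketch of that counterexample in code, with made-up dynamics: the state at t-2 fixes whether E occurs at t, yet the state at t-1 still makes a difference in the counterfactual sense, since changing it (by a local miracle, holding the t-2 state fixed) changes whether E occurs.

    def evolve(state):
        """One deterministic time step: the world just carries a signal forward."""
        return {"signal": state["signal"]}

    def E_occurs(state):
        """E occurs at a time iff the signal is present then."""
        return state["signal"]

    state_t_minus_2 = {"signal": True}
    state_t_minus_1 = evolve(state_t_minus_2)   # determined by the state at t-2
    print(E_occurs(evolve(state_t_minus_1)))    # True: the t-2 state necessitates E

    # Counterfactual: same state at t-2, but a miracle flips the signal at t-1.
    miracle_t_minus_1 = {"signal": False}
    print(E_occurs(evolve(miracle_t_minus_1)))  # False: the t-1 state made a difference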
