
Gallois on occasional identity

In the (Northern) summer, I wrote a short survey article on contingent identity. The word limit did not allow me to go into many details. In particular, I ended up with only a brief paragraph on Andre Gallois's theory of occasional identity, although I would have liked to say a lot more. So here are some further thoughts and comments on Gallois's account.

In his 1998 monograph Occasions of Identity, Gallois defends the view that things can be identical at some times and worlds and non-identical at others. For simplicity, I'll focus only on the temporal dimension here. Gallois begins with a long list of scenarios where it is intuitive to say that things are identical at one time but not at others. For example, when an amoeba A fissions into two amoebae B and C, it is tempting to say that B and C were identical prior to the fission and non-identical afterwards.

Representation theorems and the indeterminacy of mental content

To what extent are the beliefs and desires of rational agents determined by their actual and counterfactual choices? More precisely, suppose we are given a preference order that obtains between a possible act A and a possible act B iff the relevant agent is disposed to choose A over B. Say that a pair (C,V) of a credence function C and a utility (desirability) function V fits the preference order iff, whenever A is preferred over B, then A has higher expected utility than B by the lights of (C,V). Now, to what extent does a rational preference order constrain fitting credence-utility pairs?
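To make the question concrete, here is a minimal sketch (my own, not from any particular representation theorem) of what "fitting" amounts to. The states, acts, and numbers are illustrative placeholders; the point is that quite different credence-utility pairs can fit one and the same preference order.

```python
# Minimal sketch: checking whether a credence-utility pair (C, V) "fits"
# a preference order in the sense defined above. All names and numbers
# are illustrative assumptions, not part of the original post.

states = ["rain", "shine"]

# Acts map states to outcomes.
acts = {
    "umbrella":    {"rain": "dry_encumbered", "shine": "dry_encumbered"},
    "no_umbrella": {"rain": "wet",            "shine": "dry_free"},
}

def expected_utility(act, C, V):
    """Expected utility of an act by the lights of credences C and utilities V."""
    return sum(C[s] * V[acts[act][s]] for s in states)

def fits(preferred_pairs, C, V):
    """(C, V) fits the preference order iff every preferred act has strictly
    higher expected utility than the act it is preferred over."""
    return all(expected_utility(a, C, V) > expected_utility(b, C, V)
               for (a, b) in preferred_pairs)

# Suppose the agent prefers taking the umbrella over not taking it.
preference = [("umbrella", "no_umbrella")]

# Two rather different credence-utility pairs, both of which fit:
C1 = {"rain": 0.8, "shine": 0.2}
V1 = {"dry_encumbered": 5, "wet": -10, "dry_free": 8}

C2 = {"rain": 0.3, "shine": 0.7}
V2 = {"dry_encumbered": 5, "wet": -50, "dry_free": 8}

print(fits(preference, C1, V1), fits(preference, C2, V2))  # True True
```

That both pairs fit is one simple way of seeing why the preference order alone may underdetermine the agent's beliefs and desires.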

Paper jam

Some recent papers:

Counterexamples to Stalnaker's Thesis

I like a broadly Kratzerian account of conditionals. On this account, the function of if-clauses is to restrict the space of possibilities on which the rest of the sentence is evaluated. For example, in a sentence of the form 'the probability that if A then B is x', the if-clause restricts the space of possibilities to those where A is true; the probability of B relative to this restricted space is x iff the unrestricted conditional probability of B given A is x. This account therefore validates something that sounds exactly like "Stalnaker's Thesis" for indicative conditionals:
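A toy illustration of the restriction idea (my own, with made-up possibilities and numbers): restricting the space to the A-possibilities and renormalising gives B exactly the probability that B has conditional on A.

```python
# Toy model: possibilities with probabilities, and the sets where A and B hold.
# All names and values are illustrative assumptions.
P = {"w1": 0.2, "w2": 0.3, "w3": 0.1, "w4": 0.4}
A = {"w1", "w2"}
B = {"w1", "w3"}

def prob(S):
    return sum(P[w] for w in S)

def restrict(P, A):
    """Restrict the space of possibilities to the A-possibilities and renormalise."""
    total = sum(p for w, p in P.items() if w in A)
    return {w: p / total for w, p in P.items() if w in A}

# Probability of B on the A-restricted space:
restricted_P = restrict(P, A)
prob_B_restricted = sum(p for w, p in restricted_P.items() if w in B)

# Unrestricted conditional probability of B given A:
prob_B_given_A = prob(A & B) / prob(A)

print(abs(prob_B_restricted - prob_B_given_A) < 1e-12)  # True
```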

Possible worlds and non-principal ultrafilters

It is natural to think of a possible world as something like an extremely specific story or theory. Unlike an ordinary story or theory, a possible world leaves no question open. If we identify a theory with a set of propositions, a possible world could be defined as a theory T which is

  1. maximally specific: T contains either P or ~P, for every proposition P;
  2. consistent: T does not contain P and ~P, for any proposition P;
  3. closed under conjunction and logical consequence: if T contains both P and Q, then it contains their conjunction P & Q, and if T contains P, and P entails Q, then T contains Q.

It is often useful to go in the other direction and identify propositions with sets of possible worlds. We can then analyse entailment as the subset relation, negation as complement and conjunction as intersection. Of course, we may not want to say that a world is a (non-empty) set of (consistent) propositions and also that a consistent proposition is a non-empty set of worlds, since these sets should eventually bottom out. But that doesn't seem very problematic, and it is easily fixed as long as there is a simple 1-1 correspondence between worlds and logically closed, consistent and maximally specific theories. In particular, one might suspect that on the present definitions, every logically closed, consistent and maximally specific theory uniquely corresponds to a possible world, namely the sole member of the intersection of the theory's members.
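Here is a small finite model (my own sketch, not from the post) of the suspected correspondence. In a finite model the theories satisfying 1-3 are exactly the principal ultrafilters {P : w is a member of P}, each of which pins down a unique world; the question raised by the post's title is what happens with non-principal ultrafilters once there are infinitely many worlds.

```python
# Finite toy model: propositions as sets of worlds, entailment as subset,
# negation as complement, conjunction as intersection. We check that the
# principal ultrafilter generated by a world is a maximally specific,
# consistent, logically closed theory whose members intersect in that world.

from itertools import chain, combinations

worlds = frozenset({"w1", "w2", "w3"})

def powerset(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

propositions = powerset(worlds)  # every set of worlds counts as a proposition

def is_theory(T):
    T = set(T)
    maximal = all(P in T or (worlds - P) in T for P in propositions)
    consistent = not any(P in T and (worlds - P) in T for P in propositions)
    closed = all((P & Q) in T for P in T for Q in T) and \
             all(Q in T for P in T for Q in propositions if P <= Q)
    return maximal and consistent and closed

def principal(w):
    """The principal ultrafilter generated by world w."""
    return {P for P in propositions if w in P}

for w in worlds:
    T = principal(w)
    intersection = frozenset.intersection(*T)
    print(is_theory(T), intersection == frozenset({w}))  # True True for each world
```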

Poor one-boxers

Imagine you're a hedonist who doesn't care about other people, nor about your past or your distant future. All you care about is how much money you can spend today. Fortunately, you're on a pension that pays either $100 or $1000 every day, plus an optional bonus. How much you get is determined as follows. Every morning, a psychologist shows up to study your brain. Then he puts two boxes in front of you, one opaque, the other transparent. You can choose to take either both boxes or only the opaque one. The transparent box contains a $10 bill. The opaque box contains nothing if the psychologist has predicted that you will take both boxes; if he has predicted that you will take one box, it contains $100. The psychologist's predictions are about 99% accurate. The content of your boxes is your bonus payment. In addition, you get your ordinary payment, which is either $100 or $1000 depending on how many boxes you took the previous day: if you took both, you now get $1000, otherwise $100. The ordinary payment is given to you before the psychologist studies your brain, so by the time you choose between the two boxes, you already know how much you received. What do you do?
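For what it's worth, here is a quick simulation (my own, using the numbers from the story) of the average daily income of a consistent one-boxer versus a consistent two-boxer.

```python
# Simulate daily income under the pension scheme described above,
# for an agent who follows the same policy every day.
import random

def run(policy, days=100_000, accuracy=0.99, seed=0):
    rng = random.Random(seed)
    took_both_yesterday = False  # arbitrary starting assumption
    total = 0
    for _ in range(days):
        # Ordinary payment depends on yesterday's choice.
        total += 1000 if took_both_yesterday else 100
        # The predictor is right about today's choice with probability `accuracy`.
        predicted_two_boxing = (policy == "two-box") == (rng.random() < accuracy)
        opaque = 0 if predicted_two_boxing else 100
        if policy == "two-box":
            total += opaque + 10   # opaque box plus the $10 transparent box
            took_both_yesterday = True
        else:
            total += opaque
            took_both_yesterday = False
    return total / days

print("one-boxer average per day:", run("one-box"))   # roughly 100 + 99 = 199
print("two-boxer average per day:", run("two-box"))   # roughly 1000 + 10 + 1 = 1011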

Travel plans

I will probably be in Germany from about mid May until the end of June this year.

Two new papers

One: Variations on a Montagovian Theme.
Two: Belief Dynamics across Fission.

As always, comments are much appreciated.

Self-locating belief and diachronic Dutch Books

If beliefs are modeled by a probability distribution over centered worlds, belief update cannot work simply by conditionalisation. How then does it work? The most popular answer in philosophy goes as follows.

Let P be an agent's credence function at time t1, P' the credence function at t2, and E the evidence received at t2. Since E is a centered proposition, it can be true at multiple points within a world. Suppose, however, that the agent assigns probability 0 to worlds at which E is true more than once. Then to compute P', first conditionalise P on the uncentered fragment of E -- i.e. the strongest uncentered proposition entailed by E. This rules out all worlds at which E is true nowhere. Second, move the center of each remaining world to the (unique) point at which E is true.
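A rough sketch of this update rule, under simplifying assumptions of my own: a centered world is a (world, time) pair, the credence function is a dictionary over such pairs, and the evidence E is a set of (world, time) pairs that contains at most one point per world.

```python
def update(P, E):
    """Conditionalise on the uncentered fragment of E, then shift each
    remaining center to the unique point in its world where E is true."""
    E_worlds = {w for (w, t) in E}                      # uncentered fragment of E
    survivors = {(w, t): p for (w, t), p in P.items() if w in E_worlds}
    total = sum(survivors.values())
    new_P = {}
    for (w, t), p in survivors.items():
        # Move the center to the unique E-point within world w.
        (w_new, t_new) = next((w2, t2) for (w2, t2) in E if w2 == w)
        new_P[(w_new, t_new)] = new_P.get((w_new, t_new), 0) + p / total
    return new_P

# Example: in w1 the evidence is true at t2, in w2 it is true nowhere.
P = {("w1", "t1"): 0.5, ("w2", "t1"): 0.5}
E = {("w1", "t2")}
print(update(P, E))  # {('w1', 't2'): 1.0}
```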

Assessing the evidence differently

Alice is randomly selected from her population to be tested for a rare genetic disorder that affects about one in 10,000 people. The test is accurate 99 percent of the time, both among subjects that have the disorder and among subjects that don't. Alice's test comes back positive.

Call the information in the previous paragraph E, and suppose it's all you know about the situation. How confident are you that Alice has the disorder?

Letting our subjective probabilities be guided by the stated frequencies, we can use Bayes' Theorem to figure out that P(disorder | positive) = P(positive | disorder) * P(disorder) / (P(positive | disorder) * P(disorder) + P(positive | ~disorder) * P(~disorder)) = 0.99 * 0.0001 / (0.99 * 0.0001 + 0.01 * 0.9999) = 0.0098. Assume then that your degree of belief is about 0.01.
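The same calculation in a few lines of code (just a restatement of the arithmetic above):

```python
p_disorder = 1 / 10_000          # base rate of the disorder
p_pos_given_disorder = 0.99      # test accuracy among those who have it
p_pos_given_healthy = 0.01       # false positive rate among those who don't

p_pos = (p_pos_given_disorder * p_disorder
         + p_pos_given_healthy * (1 - p_disorder))

p_disorder_given_pos = p_pos_given_disorder * p_disorder / p_pos
print(round(p_disorder_given_pos, 4))  # 0.0098
```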
