
Belief update: shifting, pushing, and pulling

It is widely agreed that conditionalization is not an adequate norm for the dynamics of self-locating beliefs. There is no agreement on what the right norms should look like. Many hold that there are no dynamic norms on self-locating beliefs at all. On that view, an agent's self-locating beliefs at any time are determined by the agent's evidence at that time, irrespective of the agent's earlier self-locating beliefs. I want to talk about an alternative approach that assumes a non-trivial dynamics for self-locating beliefs. The rough idea is that as time goes by, a belief that it is Sunday should somehow turn into a belief that it is Monday.
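As a very rough illustration of the shifting idea (representing self-locating belief as a credence distribution over days is my simplification for this sketch, not a proposal from the post):

DAYS = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday"]

def shift(credence):
    # Shift a credence distribution over days forward by one day.
    shifted = {}
    for day, c in credence.items():
        next_day = DAYS[DAYS.index(day) + 1]  # assumes no credence on the last listed day
        shifted[next_day] = shifted.get(next_day, 0.0) + c
    return shifted

# A belief that it is (probably) Sunday turns into a belief that it is (probably) Monday:
print(shift({"Sunday": 0.9, "Saturday": 0.1}))  # {'Monday': 0.9, 'Sunday': 0.1}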

Functionalism and the nature of propositions

Let's assume that propositional attitudes are not metaphysically fundamental: if someone has such-and-such beliefs and desires, that is always due to other, more basic, and ultimately non-intentional facts. In terms of supervenience: once all non-intentional facts are settled, all intentional facts are settled as well.

Then how are propositional attitudes grounded in non-intentional facts? A promising approach is to identify a characteristic "functional role" of propositional attitudes and then explain facts about propositional attitudes in terms of facts about the realization of that role. (We could also identify the attitude with the realizer, or with the higher-order property of having a realizer, but that's optional.)

Sleeping Beauty is testing a hypothesis

Let's look at the third type of case in which credences can come apart from known chances. Consider the following variation of the Sleeping Beauty problem (a.k.a. "The Absentminded Driver"):

Before Sleeping Beauty awakens on Monday, a coin is tossed. If the coin lands tails, Beauty's memories of Monday will be erased the following night, and the coin will be tossed again on Tuesday. If the Monday toss lands heads, no memory erasure or further tosses take place. Beauty is aware of all these facts.

When Beauty awakens on Monday morning and learns that today's toss has landed tails (alternatively: that the Monday toss has landed tails), how should that affect her credence in the hypothesis that the coin is fair?
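To get a feel for the setup, here is a rough simulation sketch. The tails-probability q, the number of runs, and the idea of counting frequencies over awakenings are all my illustrative assumptions; whether such frequencies should fix Beauty's credence is part of what the puzzle is about.

import random

def simulate(q, runs=100_000):
    # Simulate the protocol for a coin with tails-probability q.
    # Returns the fraction of awakenings at which today's toss landed tails.
    awakenings = tails_today = 0
    for _ in range(runs):
        monday_tails = random.random() < q
        awakenings += 1                      # Beauty always wakes on Monday
        tails_today += monday_tails
        if monday_tails:                     # memories erased, coin tossed again on Tuesday
            awakenings += 1
            tails_today += random.random() < q
    return tails_today / awakenings

print(simulate(0.5))   # roughly 0.5
print(simulate(0.75))  # roughly 0.75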

Undermining and confirmation

Next, undermining. Suppose we are testing a model H according to which the probability that a certain type of coin toss results in heads is 1/2. On some accounts of physical probability, including frequency accounts and "best system" accounts, the truth of H is incompatible with the hypothesis that all tosses of the relevant type in fact result in heads. So we get a counterexample to simple formulations of the Principal Principle: on the assumption that H is true, we know that the outcomes can't be all-heads, even though H assigns positive probability to all-heads. In such a case, we say that all-heads is undermining for H.
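A toy computation brings out the tension; the number of tosses here is my illustrative assumption.

N = 1000                     # illustrative number of tosses of the relevant type
all_heads_chance = 0.5 ** N  # H assigns this tiny but positive probability to all-heads
print(all_heads_chance > 0)  # True: according to H, all-heads is possible

# But if all N tosses in fact landed heads, the actual frequency of heads would be 1.0,
# not 1/2. On a frequency account of chance, that outcome is incompatible with H's claim
# that the chance of heads is 1/2 -- so all-heads is undermining for H.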

Inadmissible evidence in Bayesian Confirmation Theory

Suppose we are testing statistical models of some physical process -- a certain type of coin toss, say. One of the models in question holds that the probability of heads on each toss is 1/2; another holds that the probability is 1/4. We set up a long run of trials and observe about 50 percent heads. One would hope that this confirms the model according to which the probability of heads is 1/2 over the alternative.

(Subjective) Bayesian confirmation theory says that some evidence E supports some hypothesis H for some agent to the extent that the agent's rational credence C in the hypothesis is increased by the evidence, so that C(H/E) > C(H). We can now verify that observation of 500 heads strongly confirms that the coin is fair, as follows.
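Here is a quick sketch of the likelihood comparison behind that verification. The total of 1000 tosses and the equal prior credences are my illustrative assumptions; the post doesn't fix them.

from math import comb

n, k = 1000, 500                       # illustrative: 500 heads in 1000 tosses
p_E_given_fair = comb(n, k) * 0.5**k * 0.5**(n - k)
p_E_given_quarter = comb(n, k) * 0.25**k * 0.75**(n - k)

# Restricting attention to these two models with equal prior credence,
# C(H/E) is proportional to P(E/H):
posterior_fair = p_E_given_fair / (p_E_given_fair + p_E_given_quarter)
print(posterior_fair)                  # extremely close to 1: E strongly favors the fair-coin model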

Conditional expressions

Most programming languages have conditional operators that combine a (boolean) condition and two singular terms into a singular term. For example, in Python the expression

'hi' if 2 < 7 else 'hello'

is a singular term whose value is the string 'hi' (because 2 < 7). In general, the expression

x if p else y

denotes x in case p is true and otherwise y. So, for example,

Evidentialism and time-slice epistemology

Time-slice epistemology is the idea that epistemic norms are history-independent: whether an agent at a time satisfies an epistemic norm is always determined by the agent's state at that time, irrespective of the agent's earlier states.

One motivation for time-slice epistemology is a kind of internalism, the intuition that agents should not be epistemically constrained by things that are not "accessible" at the relevant time. Plausibly, an agent's earlier beliefs are not always accessible in the relevant sense. If yesterday you learned that yew berries are poisonous but have since forgotten that piece of information, it seems odd to demand that your current beliefs and actions should nevertheless be constrained by the lost information.

The broken duplication machine

Fred has bought a duplication machine at a discount from a series in which 50 percent of all machines are broken. If Fred's machine works, it will turn Fred into two identical copies of himself, one emerging on the left, the other on the right. If Fred's machine is broken, he will emerge unchanged and unduplicated either on the left or on the right, but he can't predict where. Fred enters his machine, briefly loses consciousness, and then finds himself emerging on the left. In fact, his machine is broken and no duplication event has occurred, but Fred's experiences do not reveal this to him.

Evidentialism, conservatism, skepticism

An evil scientist might have built a brain in a vat that has all the experiences you currently have. On the basis of your experiences, you cannot rule out being that brain in a vat. But you can rule out being that scientist. In fact, being that scientist is not a skeptical scenario at all. For example, if the scientist in question suspects that she is a scientist building a brain in a vat, then that suspicion would not constitute a skeptical attitude.

Preference and the Principal Principle

Decision theoretic representation theorems show that one can read off an agent's probability and utility functions from their preferences, provided the latter satisfy certain minimal rationality constraints. More substantive rationality constraints should therefore translate into further constraints on preference. What do these constraints look like?

Here are a few steps towards an answer for one particular constraint: a simple form of the Principal Principle. The Principle states that if cr is a rational credence function and ch=p is the hypothesis that p is the chance function, then for any E in the domain of p,

cr(E / ch=p) = p(E).
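As a toy illustration of what satisfying this constraint looks like, here is a small sketch; the two-hypothesis, single-toss credence function is my own construction, not from the post.

# Two chance hypotheses about a single coin toss: "fair" (chance of heads 0.5) and "biased" (0.25).
# cr is a credence function over (chance hypothesis, outcome) pairs.
cr = {
    ("fair", "heads"): 0.30, ("fair", "tails"): 0.30,      # cr(fair) = 0.6
    ("biased", "heads"): 0.10, ("biased", "tails"): 0.30,  # cr(biased) = 0.4
}

def cr_heads_given(hypothesis):
    # cr(heads / ch=p) for the given chance hypothesis.
    total = cr[(hypothesis, "heads")] + cr[(hypothesis, "tails")]
    return cr[(hypothesis, "heads")] / total

print(cr_heads_given("fair"))    # 0.5  -- matches the chance assigned by the fair hypothesis
print(cr_heads_given("biased"))  # 0.25 -- matches the chance assigned by the biased hypothesis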
