
Against countable additivity

Imagine the universe has a centre that regularly produces new stars which then drift away at a constant speed. This has been going on forever, so there are infinitely many stars. We can label them by age, or equivalently by their distance from the centre: star 1 is the youngest, then comes star 2, then star 3, and so on, without end. The stars in turn produce planets at regular intervals. So the older a star, the more planets surround it. Today, something happened to one (and only one) of the planets. Let's say it exploded. Given all this, what is your credence that the unfortunate planet belonged to one of the first 100 stars? What about one of the second 100? It would be odd to think that the event is more likely to have happened at one of the first 100 stars than at one of the next 100, since the latter have far more planets. Similarly if we compare the first 1000 stars with the next 1000, or the first million with the next million, and so on. But there is no countably additive (real-valued) probability measure that satisfies this constraint.
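To see why, let B_n be the proposition that the exploded planet orbits one of the stars in the n-th block of 100 (my notation). The constraint and countable additivity jointly require

\[
P(B_1) \le P(B_2) \le P(B_3) \le \dots
\qquad\text{and}\qquad
\sum_{n=1}^{\infty} P(B_n) = P\Big(\bigcup_{n=1}^{\infty} B_n\Big) = 1.
\]

But if the series converges, its terms must tend to 0, and a non-decreasing sequence of non-negative reals that tends to 0 is identically 0; so the sum would be 0, not 1.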

Conditional chance and rational credence

Two initially plausible claims:

  1. Sometimes, a possible chance function conditionalized on a proposition A yields another possible chance function.
  2. Any rational prior credence function Cr conditional on the hypothesis Ch=f that f is the (actual, present) chance function should coincide with f; i.e., Cr(A / Ch=f) = f(A) for all A (provided that Cr(Ch=f)>0).

Claim 1 is supported by the popular idea that chances evolve by conditionalizing on history, so that the chance at time t2 equals the chance at t1 conditional on the history of events between t1 and t2. Claim 2 is a weak form of the Principal Principle and is often taken to be a defining feature of chance.
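As an illustration of claim 1, here is a toy sketch (my own, not from the literature under discussion) in which the chance at a later time is obtained by conditionalizing the earlier chance function on the intervening history:

    # A chance function, represented as a map from possible histories of two
    # coin tosses to their chances.
    ch_t1 = {'HH': 0.25, 'HT': 0.25, 'TH': 0.25, 'TT': 0.25}

    def conditionalize(ch, prop):
        """Return the chance function ch(. / prop), assuming ch(prop) > 0."""
        total = sum(p for w, p in ch.items() if w in prop)
        return {w: (p / total if w in prop else 0.0) for w, p in ch.items()}

    # History between t1 and t2: the first toss lands heads.
    ch_t2 = conditionalize(ch_t1, {'HH', 'HT'})
    # ch_t2 == {'HH': 0.5, 'HT': 0.5, 'TH': 0.0, 'TT': 0.0}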

Moving to Germany

Inga got a postdoc in Hamburg, so it looks like we'll be moving back to Germany at the end of the year. It's sad to leave the ANU, but we'll probably return here for at least a few months in 2014. (If only because I don't have another job yet.)

Second-order logic and Newman's problem

How much can you say about the world in purely logical terms? In first-order logic with identity, one can construct formulas like '(Ex)(Ey)~(x=y)'. But arguably, this doesn't yet mean anything. As we learned in intro logic, formulas of first-order logic have no fixed interpretation; they mean something only once we provide a domain of quantification and an assignment of values to predicate and function symbols. As it happens, '(Ex)(Ey)~(x=y)' doesn't contain any non-logical predicate or function symbols, so to make it mean anything we just need to specify a domain of quantification. For example, if the domain is the class of Western black rhinos, then the formula says that there are at least two Western black rhinos.
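To make the point vivid, here is a trivial sketch (my own, with made-up domains) of evaluating '(Ex)(Ey)~(x=y)' relative to a given domain; since the formula contains no non-logical vocabulary, its truth value depends only on how many things the domain contains:

    def at_least_two(domain):
        """Truth value of '(Ex)(Ey)~(x=y)' relative to the given domain."""
        return any(x != y for x in domain for y in domain)

    print(at_least_two({'a'}))        # False: a one-element domain
    print(at_least_two({'a', 'b'}))   # True: at least two elements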

The input problem for Jeffrey conditioning

You can't predict the stock market by looking at tea leaves. If an episode of looking at tea leaves makes you believe that the stock market will soon collapse, then -- assuming your previous beliefs did not support the collapse hypothesis, nor the hypothesis that tea leaves predict the stock market -- your new belief is unjustified and irrational. So there are epistemic norms for how one's opinions may change through perceptual experience.

Such norms are easily accounted for in the traditional Bayesian picture where each perceptual experience is associated with an evidence proposition E on which any rational agent should condition when they have the experience. But what if perceptual experiences don't confer absolute certainty on anything? Jeffrey pointed out that if there is a partition of propositions { E_i } = E_1,...,E_n such that (1) an experience changes their probabilities to some values { p_i } = p_1,...,p_n, and (2) the experience does not affect the probabilities conditional on any member of the partition, then the new probability assigned to any proposition A is the weighted average of the old probabilities of A conditional on the members of the partition, with the weights given by the new probabilities of those members. This rule is often called "Jeffrey conditioning" and sometimes "generalised conditioning", but unlike standard conditioning it isn't a dynamical rule at all: it is a simple consequence of the probability calculus. To get genuine epistemic norms on the dynamics of belief through perceptual experience, Jeffrey's rule must be supplemented with a story about how a given experience, perhaps together with an agent's previous belief state, may fix the partition { E_i } and values { p_i } that determine a Jeffrey update. This is the "input problem" for Jeffrey conditioning.
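Concretely, the rule sets the new probability of each world w in cell E_i to p_i times the old probability of w conditional on E_i, so the new probability of A comes out as sum_i p_i * P_old(A / E_i). A minimal sketch (my own code, with made-up numbers):

    def jeffrey_update(prior, partition, new_cell_probs):
        """prior: dict mapping worlds to probabilities;
        partition: list of sets of worlds that jointly partition the space;
        new_cell_probs: the new probabilities p_i assigned to the cells E_i."""
        posterior = {}
        for cell, p_new in zip(partition, new_cell_probs):
            p_cell = sum(prior[w] for w in cell)
            for w in cell:
                posterior[w] = p_new * prior[w] / p_cell  # p_i * P_old(w / E_i)
        return posterior

    prior = {'rain&cold': 0.3, 'rain&warm': 0.2, 'dry&cold': 0.1, 'dry&warm': 0.4}
    partition = [{'rain&cold', 'rain&warm'}, {'dry&cold', 'dry&warm'}]
    # A glance out of the window makes rain 0.9 likely, without settling anything else:
    posterior = jeffrey_update(prior, partition, [0.9, 0.1])
    # posterior: {'rain&cold': 0.54, 'rain&warm': 0.36,
    #             'dry&cold': 0.02, 'dry&warm': 0.08} (up to rounding)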

Bayes factors

Suppose a rational agent makes an observation, which changes the subjective probability she assigns to a hypothesis H. In this case, the new probability of H is usually sensitive to both the observation and the prior probability. Can we factor out the prior probability to get a measure of how the observation bears on the probability of H, independently of the prior probability?

A common answer, going back to Alan Turing and I. J. Good, is to use Bayes factors. The Bayes factor B(H) for H is the ratio (P'(H)/P'(not-H))/(P(H)/P(not-H)) of new odds on H to old odds. Thus the new odds on H are the old odds multiplied by the Bayes factor. For example, if the prior credence in H is 0.25 and the posterior is 0.5, then the odds on H change from 1:3 to 1:1, and so the Bayes factor of the update is 3. The same Bayes factor would characterise an update from probability 0.01 to about 0.03 (odds 1:99 to 1:33) or from 0.9 to about 0.96 (odds 9:1 to 27:1).
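A quick check of these numbers (a snippet of my own):

    def bayes_factor(p_old, p_new):
        """Ratio of new odds on H to old odds on H."""
        return (p_new / (1 - p_new)) / (p_old / (1 - p_old))

    print(bayes_factor(0.25, 0.5))    # Bayes factor 3: odds 1:3  -> 1:1
    print(bayes_factor(0.01, 1/34))   # Bayes factor 3: odds 1:99 -> 1:33
    print(bayes_factor(0.9, 27/28))   # Bayes factor 3: odds 9:1  -> 27:1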

Ninan on imagination and multi-centred worlds

Dilip Ninan has also argued on a number of occasions that attitude contents cannot in general be modelled by sets of qualitative centred worlds; see especially his "Counterfactual attitudes and multi-centered worlds" (2012). The argument is based on an alleged problem for the centred-worlds account applied to what he calls "counterfactual attitudes", the prime example being imagination.

Since the problem concerns the analysis of attitudes de re, we first have to briefly review what the centred-worlds account might say about this. Consider a de re belief report "x believes that y is F". Whether this is true depends on what x believes about y, but if belief contents are qualitative, we cannot simply check whether y is F in x's belief worlds. We first have to locate y in these qualitative scenarios. A standard idea, going back to Quine, Kaplan and Lewis, is that the belief report is true iff there is some "acquaintance relation" Q such that (i) x is Q-related uniquely to y and (ii) in x's belief worlds, the individual at the centre is Q-related to an individual that is F. For example, if Ralph sees Ortcutt sneaking around the waterfront, and believes that the guy sneaking around the waterfront is a spy, then Ralph believes de re of Ortcutt that he is a spy.
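Spelling out the two clauses a bit more formally (my own notation, writing Bel_x for the set of centred worlds <w, c> compatible with what x believes):

\[
\text{``$x$ believes that $y$ is $F$'' is true iff }
\exists Q\,\big[\,Qxy \;\wedge\; \forall z\,(Qxz \to z = y) \;\wedge\;
\forall \langle w, c\rangle \in \mathit{Bel}_x\; \exists u\,(Qcu \wedge u \text{ is } F \text{ in } w)\,\big].
\]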

Austin and Chalmers on two tubes cases

If we want to model rational degrees of belief as probabilities, the objects of belief should form a Boolean algebra. Let's call the elements of this algebra propositions and its atoms (or ultrafilters) worlds. Every proposition can be represented as a set of worlds. But what are these worlds? For many applications, they can't be qualitative possibilities about the universe as a whole, since this would not allow us to model de se beliefs. A popular response is to identify the worlds with triples of a possible universe, a time and an individual. I prefer to say that they are maximally specific properties, or ways a thing might be. David Chalmers (in discussion, and in various papers, e.g. here and there) objects that these accounts are not fine-grained enough, as revealed by David Austin's "two tubes" scenario. Let's see.

The puzzle of the hats

Luc Bovens and Wlodek Rabinowicz (2010 and 2011) present the following puzzle:

Three people are each given a hat to put on in the dark. The hats' colours, either black or white, have been decided by three independent tosses of a fair coin. Then the light goes on and everyone can see the hats of the two others, but not their own. All of this is common knowledge in the group.

Let's call the three players X, Y and Z. There are eight possible distributions of hat colours, each with probability 1/8.
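They can be enumerated mechanically; for example (a snippet of my own):

    from itertools import product

    # Each of the three independent fair coin tosses fixes one hat colour,
    # so each of the 2^3 = 8 assignments has probability 1/8.
    for colours in product(['black', 'white'], repeat=3):
        print(dict(zip(['X', 'Y', 'Z'], colours)), 'probability 1/8')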

New server

I had to move to a new server, hence the recent downtime. If you notice something that's broken, please let me know.
