
Shangri La Variations

There are two paths to Shangri La. One goes by the sea, the other by the mountains. You are on the mountain path and about to enter Shangri La. You can choose how your belief state will change as you enter through the gate, in response to whatever evidence you may receive. At the moment, you are (rationally) confident that you have travelled by the mountains. You know that you will not receive any surprising new evidence as you step through the gate. You want to maximize the expected accuracy of your future belief state – at least with respect to the path you took. How should you plan to change your credence in the hypothesis that you have travelled by the mountains?

Evidential externalism and evidence of irrationality

Let Ep mean that your evidence entails p. Let an externalist scenario be a scenario in which either Ep holds without EEp or ¬Ep holds without E¬Ep.

It is sometimes assumed, for example in Gallow (2021) and Isaacs and Russell (2022), that any externalist scenario is a scenario in which you have evidence that you don't rationally respond to your evidence. On the face of it, this seems puzzling. Why should there be a connection between evidential externalism and evidence of irrationality? But the assumption actually makes sense.

Higher-order evidence and non-ideal rationality

I've read around a bit in the literature on higher-order evidence. Two different ideas seem to go with this label. One concerns the possibility of inadequately responding to one's evidence. The other concerns the possibility of having imperfect information about one's evidence. I have a similar reaction to both issues, one that I haven't seen in the papers I've looked at. Pointers very welcome.

I'll begin with the first issue.

Let's assume that a rational agent proportions her beliefs to her evidence. This can be hard. For example, it's often hard to properly evaluate statistical data. Suppose you have evaluated the data, reached the correct conclusion, but now receive misleading evidence that you've made a mistake. How should you react?

Some (e.g. Christensen (2010)) say you should reduce your confidence in the conclusion you've reached. Others (e.g. Tal (2021)) say you should remain steadfast and not reduce your confidence.

Lecture notes on modal logic

I've been teaching a course called Logic 2: Modal Logics for the past few years. It's an intermediate logic course for third-year Philosophy students, all of whom have taken intro logic. I'm not entirely convinced that a second logic course should focus on modal logic, but it works OK.

One nice aspect of modal propositional logic is that models, proofs, soundness, completeness, etc. are not as trivial as in classical propositional logic, but easier than in classical predicate logic. I also like the many philosophical applications. I spend a week on epistemic logic, another on deontic logic, one on temporal logic, and one on conditionals.

Anyway, I've just uploaded my lecture notes to GitHub, in case anyone is interested. The LaTeX source is there as well.

Self-locating priors, primary intensions, and cosmological measures

If a certain hypothesis entails that N percent of all observers in the universe have a certain property, how likely is it that we have that property – conditional on the hypothesis, and assuming we have no other relevant information?

Answer: It depends on what else the hypothesis says. If, for example, the hypothesis says that 90 percent of all observers have three eyes, and also that we ourselves have two eyes, then the probability that we have three eyes conditional on the hypothesis is zero.
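To make the contrast vivid, here is a minimal sketch of the point, with a hypothetical ten-observer toy hypothesis and made-up numbers (nothing from Carroll or Arntzenius and Dorr): our credence that we have a property, conditional on a hypothesis, depends on whether the hypothesis also carries information about which observer we are.

```python
# Toy model: a hypothesis describes a universe of observers, each with a
# number of eyes. Our credence that *we* have a property, conditional on
# the hypothesis, is spread evenly over the observers the hypothesis
# leaves open as candidates for being us.

def prob_we_have(property_test, observers, candidates=None):
    """Probability that we have the property, by indifference over the
    observers we might be (`candidates`; defaults to all observers)."""
    if candidates is None:
        candidates = observers
    return sum(1 for o in candidates if property_test(o)) / len(candidates)

# Hypothesis H: ten observers, nine with three eyes, one with two eyes.
observers = [3] * 9 + [2]

# If H says nothing special about us, indifference gives 90 percent.
print(prob_we_have(lambda eyes: eyes == 3, observers))            # 0.9

# If H also says that we ourselves have two eyes, the only candidate
# for being us is the two-eyed observer, and the probability drops to 0.
print(prob_we_have(lambda eyes: eyes == 3, observers,
                   candidates=[o for o in observers if o == 2]))  # 0.0
```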

This effect is easy to miss because many hypotheses that appear to be just about the universe as a whole secretly contain special information about us. Consider the following passage from Carroll (2010), cited in Arntzenius and Dorr (2017):

An alternative model of permissivism about epistemic risk

In the previous post I argued that rational priors must favour some possibilities over others, and that this is a problem for Richard Pettigrew's model of Jamesian permissivism. The argument also points towards an alternative model that might be worth exploring.

I claim that, in the absence of unusual evidence, a rational agent should be confident that observed patterns continue in the unobserved part of the world, that witnesses tell the truth, that rain experiences indicate rain, and so on. In short, she should give low credence to various skeptical scenarios. How low? Arguably, our epistemic norms don't fix a unique and precise answer.

Pettigrew on epistemic risk and the demands of rationality

Pettigrew (2021) defends a type of permissivism about rational credence inspired by James (1897), on which different rational priors reflect different attitudes towards epistemic risk. I'll summarise the main ideas and raise some worries.

(There is, of course, much more in the book than what I will summarise, including many interesting technical results and some insightful responses to anti-permissivist arguments.)

Mackay on counterfactual epistemic scenarios

An interesting new paper by David Mackay, Mackay (2022), raises a challenge to popular ideas about the semantics of modals. Mackay presents some data that look incompatible with classical two-dimensional semantics. But the data fit classical two-dimensionalism nicely, if we combine it with a flexible form of counterpart semantics.

Before I discuss the data, here's a reminder of some differences between epistemic modals and non-epistemic ("metaphysical") modals.

Decreasing accuracy through learning

Last week I gave a talk in which I claimed (as an aside) that if you update your credences by conditionalising on a true proposition then your credences never become more inaccurate. That seemed obviously true to me. Today I tried to quickly prove it. I couldn't. Instead I found that the claim is false, at least on popular measures of accuracy.

The problem is that conditionalising on a true proposition typically increases the probability of true propositions as well as false propositions. If we measure the inaccuracy of a credence function by adding up an inaccuracy score for each proposition, the net effect is sensitive to how exactly that score is computed.
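Here is a small numerical sketch of the phenomenon, using one popular measure, the logarithmic score, summed over all propositions in a three-world model. The numbers are made up for illustration: conditionalising on the true proposition {w1, w2} raises the total inaccuracy.

```python
from itertools import combinations
from math import log

worlds = ['w1', 'w2', 'w3']   # w1 is the actual world
actual = 'w1'

def log_inaccuracy(credence):
    """Total log inaccuracy: sum over all propositions (sets of worlds) of
    -log(credence in the proposition) if it is true at the actual world,
    and -log(credence in its negation) if it is false."""
    total = 0.0
    for n in range(1, len(worlds)):            # the trivial propositions score 0
        for prop in combinations(worlds, n):
            cr = sum(credence[w] for w in prop)
            total += -log(cr) if actual in prop else -log(1 - cr)
    return total

prior = {'w1': 0.2, 'w2': 0.7, 'w3': 0.1}

# Conditionalise on the true proposition E = {w1, w2}.
E = {'w1', 'w2'}
cr_E = sum(prior[w] for w in E)
posterior = {w: (prior[w] / cr_E if w in E else 0.0) for w in worlds}

print(log_inaccuracy(prior))      # ~5.84
print(log_inaccuracy(posterior))  # ~6.02 -- inaccuracy has increased
```

The driver is the false proposition {w2}: conditionalising pushes its probability from 0.7 up to about 0.78, and the penalty for that outweighs the gains on the propositions that are ruled out.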

Sobel's strictly causal decision theory

In his papers on decision theory, Jordan Howard Sobel generally defines the (causal) expected utility of an act in terms of a special conditional that he calls "causal" or "practical". Concretely, he suggests that

\[ (1)\quad EU(A) = \sum_{w} Cr(A\; \Box\!\!\to w)V(w), \]

where 'A □→ B' is the special conditional that is true iff either (i) B is the case and would remain the case if A were the case, or (ii) B is not the case but would be the case as a causal consequence of A if A were the case (see e.g. Sobel (1986), pp. 152f., or Sobel (1989), pp. 175f.).
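As an illustration, here is a minimal sketch of computing (1) for a toy two-act, two-world decision problem. The credences in the causal conditionals and the world values are made-up numbers, not Sobel's; the agent is simply assumed to supply her credences in the conditionals directly.

```python
# Sketch of (1): EU(A) = sum over worlds w of Cr(A []-> w) * V(w).
# Nothing here derives the conditional credences from a prior; they are inputs.

def expected_utility(cr_boxarrow, value):
    """cr_boxarrow: credence that world w would (causally) obtain if the act
    were performed, for each world w; value: desirability of each world."""
    return sum(cr_boxarrow[w] * value[w] for w in value)

# Toy problem: two worlds, 'good' and 'bad'.
value = {'good': 10, 'bad': 0}

# Credences in the causal conditionals for two acts.
cr_if_act1 = {'good': 0.8, 'bad': 0.2}   # Cr(act1 []-> good), Cr(act1 []-> bad)
cr_if_act2 = {'good': 0.3, 'bad': 0.7}

print(expected_utility(cr_if_act1, value))  # 8.0
print(expected_utility(cr_if_act2, value))  # 3.0
```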
