
On Smithies, Lennon, and Samuels on irrational belief

I've decided to write somewhat regular short pieces on interesting papers I've recently read. This one is about Smithies, Lennon, and Samuels (2022).

Smithies, Lennon, and Samuels (henceforth, SLS) criticise the view that there are a priori connections between having a belief with a certain content and other states that would be rational given this belief. A simple example of the target view says that believing P is being disposed to act in a way that would bring one closer to satisfying one's desires if P were true. A more complicated example of the target view, on which SLS focus, is Lewis's. According to Lewis, for a mental state to be a belief state with such-and-such content, the state must, under normal conditions, be connected in a certain way to behaviour, perceptual experiences, and other propositional attitudes. SLS deny this.

The subjective Bayesian answer to the problem of induction

Some people – important people, like Richard Jeffrey or Brian Skyrms – seem to believe that Laplace and de Finetti have solved the problem of induction, assuming nothing more than probabilism. I don't think that's true.

I'll try to explain what the alleged solution is, and why I'm not convinced. I'll pick Skyrms as my adversary, mainly because I've just read Diaconis and Skyrms's Ten Great Ideas about Chance, in which Skyrms presents the alleged solution in a somewhat accessible form.
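For orientation, here is the Laplacean core of the proposal, in my own gloss (the notation is mine, not the book's). Start with a uniform prior over the unknown chance θ of heads; conditionalizing on k heads in n tosses and integrating out θ yields the rule of succession:

\[
P(\text{heads}_{n+1} \mid k \text{ heads in } n) \;=\; \int_0^1 \theta \cdot \frac{\theta^{k}(1-\theta)^{n-k}}{B(k+1,\,n-k+1)}\, d\theta \;=\; \frac{k+1}{n+2}.
\]

De Finetti's contribution is the representation theorem: any exchangeable credence function over the outcome sequence behaves as if it were such a mixture of chance hypotheses, so this kind of inductive learning appears to fall out of probabilism plus exchangeability.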

The problem of metaphysical omniscience

There's a striking tension in Lewis's philosophy. His epistemology and philosophy of mind, on the one hand, leave no room for (non-trivial) a priori knowledge or a priori inquiry. Yet for most of his career, Lewis was engaged in just this kind of inquiry, wondering about the nature of causation, the ontology of sets, the extent of logical space, the existence of universals, and other non-contingent matters. My paper "The problem of metaphysical omniscience" explores some options for resolving the tension. The paper has just come out in a volume, Perspectives on the Philosophy of David K. Lewis, edited by Helen Beebee and A.R.J. Fisher.

Evidential externalism as an antidote to skepticism?

A popular idea in recent (formal) epistemology is that an externalist conception of evidence is somehow useful, or even required, to block the threat of skepticism. (See, for example, Das (2019), Das (2022), and Lasonen-Aarnio (2015). The trend was started by Williamson (2000).)

Negative exhaustification?

Here's an idea that might explain a number of puzzling linguistic phenomena, including neg-raising, the homogeneity presupposition triggered by plural definites, the difficulty of understanding nested negations, and the data often assumed to support conditional excluded middle.

An utterance of

(1a) We will not have PIZZA tonight

conveys two things. Unsurprisingly, it conveys that we will not have pizza tonight. But it also conveys, due to the focus on 'PIZZA', that we will have something else. By comparison,

Belief downloaders and epistemic bribes

Greaves (2013) describes a case in which adopting a single false belief would (supposedly) be rewarded by many true beliefs.

Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. […]
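To see the bribe in numbers, here is a back-of-the-envelope Brier-score comparison. It is my own reconstruction, not Greaves's: I assume Emily knows the setup and that her credences about the n children match the chances fixed by her choice.

```python
# Back-of-the-envelope Brier comparison for the Imps case (my own
# reconstruction; assumes Emily's credences about the n summerhouse
# children match the objective chances fixed by her choice).

def total_expected_inaccuracy(q_C, n):
    """Expected total Brier inaccuracy of Emily's belief state.

    q_C: her degree of belief in C ('a child is now before her'); C is true.
    If q_C == 0 the children certainly play; if q_C == 1 each plays with
    chance 1/2 (the fair die).
    """
    inacc_C = (1 - q_C) ** 2               # Brier penalty on the true C
    chance = 1.0 if q_C == 0 else 0.5      # chance that each child plays
    # With credence c = chance in each child playing, expected Brier
    # inaccuracy per child is c * (1 - c)^2 + (1 - c) * c^2 = c * (1 - c).
    return inacc_C + n * chance * (1 - chance)

for n in (2, 4, 10):
    print(n, total_expected_inaccuracy(0, n), total_expected_inaccuracy(1, n))
# The false belief (q_C = 0) costs a flat 1; the true belief (q_C = 1)
# costs n/4. For n > 4, the 'epistemic bribe' wins.
```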

On neg-raising

Neg-raising occurs when asserting ¬Fp (or denying Fp) tends to communicate F¬p. For example, 'John doesn't believe that he will win' tends to communicate that John believes that he won't win.

There appears to be no consensus on why this happens. Some think ¬Fp really does entail F¬p. Others think the effect is an implicature. Still others think it's caused by a presupposition of opinionatedness or "settledness": when we talk about whether Fp holds, we presuppose that F holds either for p or for an alternative to p; denying Fp therefore commits us to F¬p.
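On the third view, the inference is in effect a disjunctive syllogism (the formalization is mine):

\[
\underbrace{Fp \lor F\neg p}_{\text{opinionatedness}}, \qquad \neg Fp \;\therefore\; F\neg p.
\]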

Shangri La Variations

There are two paths to Shangri La. One goes by the sea, the other by the mountains. You are on the mountain path and about to enter Shangri La. You can choose how your belief state will change as you enter through the gate, in response to whatever evidence you may receive. At the moment, you are (rationally) confident that you have travelled by the mountains. You know that you will not receive any surprising new evidence as you step through the gate. You want to maximize the expected accuracy of your future belief state – at least with respect to the path you took. How should you plan to change your credence in the hypothesis that you have travelled by the mountains?
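Here is a minimal sketch of the relevant calculation, assuming the Brier score and a hypothetical current credence of 0.9 (both assumptions are mine, not part of the puzzle):

```python
# Expected Brier inaccuracy, by your current lights, of planning to adopt
# credence q in 'I travelled by the mountains' when your current credence
# is p and no surprising new evidence is coming.

def expected_brier_inaccuracy(p, q):
    # With probability p Mountain is true and q is penalized (1 - q)^2;
    # with probability 1 - p it is false and q is penalized q^2.
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.9  # hypothetical current credence in Mountain
best = min((q / 100 for q in range(101)),
           key=lambda q: expected_brier_inaccuracy(p, q))
print(best)  # 0.9 -- by your current lights, the accuracy-maximizing plan
             # is to keep your credence where it is
```

Since the Brier score is strictly proper, the minimum falls exactly at q = p.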

Evidential externalism and evidence of irrationality

Let Ep mean that your evidence entails p. Let an externalist scenario be a scenario in which either Ep holds without EEp or ¬Ep holds without E¬Ep.

It is sometimes assumed, for example in Gallow (2021) and Isaacs and Russell (2022), that any externalist scenario is a scenario in which you have evidence that you don't rationally respond to your evidence. On the face of it, this seems puzzling. Why should there be a connection between evidential externalism and evidence of irrationality? But the assumption actually makes sense.
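A toy model helps to make the externalist scenarios concrete. The two-world construction below is mine, not from the cited papers: at w2 your evidence fails to entail p, but it leaves open w1, where the evidence would entail p.

```python
# A two-world toy model of an externalist scenario: ¬Ep holds at w2
# without E¬Ep (the construction is mine, not from the cited papers).

worlds = {"w1", "w2"}
evidence = {"w1": {"w1"}, "w2": {"w1", "w2"}}  # world -> evidence (a set of worlds)
p = {"w1"}

def E(prop):
    """The set of worlds at which the evidence entails prop."""
    return {w for w in worlds if evidence[w] <= prop}

print(E(p))              # {'w1'}: only at w1 does the evidence entail p
print(worlds - E(p))     # {'w2'}: at w2, ¬Ep holds
print(E(worlds - E(p)))  # set(): nowhere does the evidence entail ¬Ep
# At w2 your evidence leaves open w1, where the evidence demands certainty
# in p; so at w2 you have evidence that your (rationally cautious) credence
# in p may not match what your evidence in fact requires.
```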

Higher-order evidence and non-ideal rationality

I've read around a bit in the literature on higher-order evidence. Two different ideas seem to go with this label. One concerns the possibility of inadequately responding to one's evidence. The other concerns the possibility of having imperfect information about one's evidence. I have a similar reaction to both issues, one I haven't seen in the papers I've looked at. Pointers very welcome.

I'll begin with the first issue.

Let's assume that a rational agent proportions her beliefs to her evidence. This can be hard. For example, it's often hard to properly evaluate statistical data. Suppose you have evaluated the data and reached the correct conclusion, but now receive misleading evidence that you've made a mistake. How should you react?

Some (e.g. Christensen (2010)) say you should reduce your confidence in the conclusion you've reached. Others (e.g. Tal (2021)) say you should remain steadfast and not reduce your confidence.
