This is part 2 of a series on epistemic counterpart semantics. Part 1 is here.
I want to defend what I called the "Quine-Kaplan model" of de re belief ascriptions. According to this model, 'S believes that x is F' is true iff there is a suitable role R such that (1) S believes that whatever plays R is F, and (2) in fact, x plays R.
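In schematic form (my notation, with 'Suitable' standing in for whatever conditions on roles turn out to be required):

$$\text{'}S\text{ believes that }x\text{ is }F\text{' is true} \;\iff\; \exists R\,\bigl[\text{Suitable}(R) \,\wedge\, \text{Bel}_S\bigl(\forall y\,(R(y)\to Fy)\bigr) \,\wedge\, R(x)\bigr].$$

Clause (1) corresponds to the Bel_S conjunct, clause (2) to the final conjunct R(x).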
In this post, I mainly want to explain what I mean by a "suitable role". This will also bring to light some arguments in favour of the Quine-Kaplan model.
I have decided to write a series of posts on epistemic applications of counterpart semantics, mostly to organise my own thoughts.
Let's start with a motivating example, from Sæbø 2015.
On September 14, 2006, Mary Beth Harshbarger shot her husband, whom she had mistaken for a bear. At the trial, she "steadily maintained that she thought her husband was a black bear", as you can read on Wikipedia.
I drafted some blog posts on the pandemic over the last few months, but never got around to publishing them. By now, almost everything I would want to say has been said better by others. Nonetheless, a few personal observations.
One thing that still puzzles me is why so few people saw this coming. When I read about the outbreak in Wuhan in late January, I bought hand sanitiser, toilet paper, and a short position on the German DAX index. Meanwhile, in the media, in governments, and in the stock market, the consensus seemed to be that there was no reason for concern or action. Not that I expected much from Donald Trump. But why didn't mainstream, sensible news sites raise the alarm? Why were almost all European governments so slow to react? Why did the stock market keep climbing almost all through February? Wasn't the risk plain to see?
Many sentences can be evaluated as true or false relative to a (possible) context. For example, 'it is raining' is true (in English) at all and only those possible contexts at which it is raining.
This relation between sentences (of a language) and contexts is arguably central to a theory of communication. At a first pass, what is communicated by an utterance of 'it is raining' is that the utterance context is among those at which the uttered sentence is true. (You can understand what is communicated without knowing where the utterance takes place.)
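In set notation (just a restatement of this first pass, with 'φ' for the uttered sentence): an utterance of φ communicates that its context belongs to

$$\{\,c : \phi \text{ is true at } c\,\},$$

the set of possible contexts at which φ is true. This is a set one can grasp without knowing which context is the actual one.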
Another paper, "Discourse, Diversity, and Free Choice", has come out in the AJP.
This paper began as a couple of blog posts in January 2007, here and here. At the time, I was thinking about why counterfactuals with unspecific antecedents appear to imply counterfactuals with more specific antecedents. I noticed that a similar puzzle arises for possibility modals in general. My hunch was that this is a special kind of scalar implicature: if you say of a group of things (say, rooms) that they satisfy an unspecific predicate (like having a size between 10 and 20 sqm), you implicate that different, more specific predicates apply to different members of the group.
My paper "Ability and Possibility" has been published in Philosophers' Imprint. Here's the abstract:
According to the classical quantificational analysis of modals, an agent has the ability to perform an act iff (roughly) relevant facts about the agent and her environment are compatible with her performing the act. The analysis faces a number of problems, many of which can be traced to the fact that it takes even accidental performance of an act as proof of the relevant ability. I argue that ability statements are systematically ambiguous: on one reading, accidental performance really is enough; on another, more is required. The stronger notion of ability plays a central role in normative contexts. Both readings, I argue, can be captured within the classical quantificational framework, provided we allow conversational context to impose restrictions not just on the "accessible worlds" (the facts that are held fixed), but also on what counts as a performance of the relevant act among these worlds.
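Schematically, and in my own compressed notation rather than the paper's: the proposal is that 'S can φ' is true in context c iff

$$\exists w\,\bigl(w \in \text{Acc}_c \,\wedge\, w \in P_c(S,\phi)\bigr),$$

where Acc_c is the set of worlds compatible with the facts held fixed in c, and P_c(S, φ) is the set of worlds where S does something that counts, by the standards of c, as performing φ. On the weak reading P_c is undemanding, so accidental performance suffices; on the stronger reading that figures in normative contexts, it is more restrictive.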
Teaching for this semester is finally over.
Last week I gave a talk in Umeå at a workshop on singular thought. I was pleased
to be invited because I don't really understand singular thought. Giving
a talk, I hoped, would force me to have a closer look at the literature. But then I was
too busy teaching.
People seem to mean different things by 'singular thought'. The target of my
talk was the view that one can usefully understand the representational content
of beliefs and other intentional states as attributing properties to
individuals, without any intervening modes of presentation. This view is often
associated with a certain interpretation of attitude reports: whenever we can
truly say 'S believes (or knows etc.) that A is F', where A is a name, then
supposedly the subject S stands in an interesting relation of belief (or
knowledge etc.) to a proposition directly involving the bearer of that name.
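Spelled out a little (my gloss, not a quotation from any defender of the view): whenever 'S believes that A is F' is true and the name A has bearer a, then

$$\text{Bel}\bigl(S,\ \langle a, F\rangle\bigr),$$

where ⟨a, F⟩ is the singular proposition attributing the property F directly to a, with no mode of presentation in between.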
I've spent some time this summer upgrading my tree prover. The
new version is here. What's new:
- support for some (normal) modal logics
- better detection of invalid formulas
- faster proof search
- nicer user interface and nicer trees
- cleaner source
I hope there aren't too many new bugs. Let me know if you find one!
Suppose we want our decision theory to not impose strong constraints on
people's ultimate desires. You may value personal wealth, or you may value being
benevolent and wise. You may value being practically rational: you may value
maximizing expected utility. Or you may value not maximizing expected
utility.
This last possibility causes trouble.
If not maximizing expected utility is your only basic desire, and you
have perfect and certain information about your desires, then arguably (although
the argument isn't trivial) every choice in every decision situation you can
face has equal expected utility; so you are bound to maximize expected utility
no matter what. Your desire can't be fulfilled.
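Here, roughly, is how the argument might go in the simplest case, a choice between two options A and B, where u(w) = 1 if you fail to maximize expected utility at w and u(w) = 0 otherwise (and you are certain that this is your utility function):

$$EU(A) > EU(B) \;\Rightarrow\; u = 0 \text{ at all } A\text{-worlds and } u = 1 \text{ at all } B\text{-worlds} \;\Rightarrow\; EU(A) = 0 < 1 = EU(B),$$

a contradiction. The same reasoning rules out EU(B) > EU(A). So EU(A) = EU(B), and whichever option you pick, you count as maximizing.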
I'm generally happy with Causal Decision Theory. I think two-boxing is
clearly the right answer in Newcomb's problem, and I'm not impressed by any of
the alleged counterexamples to Causal Decision Theory that have been put
forward. But there's one thing I worry about: what exactly the theory should say, and how it should be spelled out.
Suppose you face a choice between two acts A and B. Loosely speaking, to
evaluate these options, we need to check whether the A-worlds are on average
better than the B-worlds, where the "average" is weighted by your credence on
the subjunctive supposition that you make the relevant choice. Even more
loosely, we want to know how good the world would be if you were to choose A,
and how good it would be if you were to choose B. So we need to know what else
would be the case if you were to choose, say, A.
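Schematically, the quantity we are after is something like

$$EU(A) \;=\; \sum_{w} Cr_A(w)\cdot V(w),$$

where V(w) measures how good world w is and Cr_A is your credence function under the subjunctive supposition that you choose A (the subscript notation is just for exposition). In these terms, spelling out the theory is largely a matter of saying what determines Cr_A.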