
Epistemic counterpart semantics

I have decided to write a series of posts on epistemic applications of counterpart semantics, mostly to organise my own thoughts.

Let's start with a motivating example, from Sæbø 2015.

On September 14 2006, Mary Beth Harshbarger shot her husband, whom she had mistaken for a bear. At the trial, she "steadily maintained that she thought her husband was a black bear", as you can read on Wikipedia.

Covid-19

I drafted some blog posts on the pandemic over the last few months, but never got around to publishing them. By now, almost everything I would want to say has been said better by others. Nonetheless, a few personal observations.

One thing that still puzzles me is why so few people saw this coming. When I read about the outbreak in Wuhan in late January, I bought hand sanitiser, toilet paper, and a short position on the German DAX index. Meanwhile, in the media, in governments, and in the stock market, the consensus seemed to be that there was no reason for concern or action. Not that I expected much from Donald Trump. But why didn't mainstream, sensible news sites raise the alarm? Why were almost all European governments so slow to react? Why did the stock market keep climbing almost all through February? Wasn't the risk plain to see?

Expanding contexts

Many sentences can be evaluated as true or false relative to a (possible) context. For example, 'it is raining' is true (in English) at all and only those possible contexts at which it is raining.

This relation between sentences (of a language) and contexts is arguably central to a theory of communication. At a first pass, what is communicated by an utterance of 'it is raining' is that the utterance context is among those at which the uttered sentence is true. (You can understand what is communicated without knowing where the utterance takes place.)
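The first-pass account can be made concrete with a toy model, where a sentence is a function from contexts to truth values and the communicated content is a set of contexts. The contexts and the `raining` predicate below are illustrative assumptions, not part of any worked-out theory:

```python
# A sentence modeled as a function from contexts to truth values.
# Contexts are simple dicts of contextual facts (a toy assumption).

raining = lambda context: context["raining"]

contexts = [
    {"place": "Oslo", "raining": True},
    {"place": "Lima", "raining": False},
    {"place": "Kyoto", "raining": True},
]

# On the first-pass account, an utterance of 'it is raining' communicates
# that the utterance context is in this set:
true_at = [c for c in contexts if raining(c)]
print([c["place"] for c in true_at])  # ['Oslo', 'Kyoto']
```

Note that computing `true_at` requires no information about where the actual utterance takes place, matching the observation above.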

Discourse, Diversity, and Free Choice

Another paper: "Discourse, Diversity, and Free Choice" has come out in the AJP.

This paper began as a couple of blog posts in January 2007, here and here. At the time, I was thinking about why counterfactuals with unspecific antecedents appear to imply counterfactuals with more specific antecedents. I noticed that a similar puzzle arises for possibility modals in general. My hunch was that this is a special kind of scalar implicature: if you say of a group of things (say, rooms) that they satisfy an unspecific predicate (like, having a size between 10 and 20 sqm), you implicate that different, more specific predicates apply to different members of the group.

Ability and Possibility

My paper "Ability and Possibility" has been published in Philosophers' Imprint. Here's the abstract:

According to the classical quantificational analysis of modals, an agent has the ability to perform an act iff (roughly) relevant facts about the agent and her environment are compatible with her performing the act. The analysis faces a number of problems, many of which can be traced to the fact that it takes even accidental performance of an act as proof of the relevant ability. I argue that ability statements are systematically ambiguous: on one reading, accidental performance really is enough; on another, more is required. The stronger notion of ability plays a central role in normative contexts. Both readings, I argue, can be captured within the classical quantificational framework, provided we allow conversational context to impose restrictions not just on the "accessible worlds" (the facts that are held fixed), but also on what counts as a performance of the relevant act among these worlds.
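The two readings distinguished in the abstract can be sketched schematically. The worlds, the darts scenario, and the predicates below are my own illustrative assumptions, not the paper's formal apparatus:

```python
# Toy sketch of the quantificational analysis of ability. On the weak
# reading, an agent can do X iff some accessible world is an X-performance.
# On the stronger reading, context further restricts which performances
# count (e.g. ruling out merely accidental ones).

def can(accessible_worlds, performs):
    """Weak reading: some accessible world is a performance."""
    return any(performs(w) for w in accessible_worlds)

def can_strong(accessible_worlds, performs, counts_as_performance):
    """Strong reading: some accessible world is a performance that counts."""
    return any(performs(w) and counts_as_performance(w)
               for w in accessible_worlds)

# A darts novice: in one accessible world she hits the bullseye, but only by luck.
worlds = [
    {"hits_bullseye": True,  "by_luck": True},
    {"hits_bullseye": False, "by_luck": False},
]
hits = lambda w: w["hits_bullseye"]
non_accidental = lambda w: not w["by_luck"]

print(can(worlds, hits))                         # True: weak ability
print(can_strong(worlds, hits, non_accidental))  # False: no stronger ability
```

The point of the sketch is that both verdicts come from the same quantificational machinery; only the contextual parameter differs.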

No evidence for singular thought

Teaching for this semester is finally over.

Last week I gave a talk in Umeå at a workshop on singular thought. I was pleased to be invited because I don't really understand singular thought. Giving a talk, I hoped, would force me to have a closer look at the literature. But then I was too busy teaching.

People seem to mean different things by 'singular thought'. The target of my talk was the view that one can usefully understand the representational content of beliefs and other intentional states as attributing properties to individuals, without any intervening modes of presentation. This view is often associated with a certain interpretation of attitude reports: whenever we can truly say 'S believes (or knows etc.) that A is F', where A is a name, then supposedly the subject S stands in an interesting relation of belief (or knowledge etc.) to a proposition directly involving the bearer of that name.

Tree prover update

I've spent some time this summer upgrading my tree prover. The new version is here. What's new:

  • support for some (normal) modal logics
  • better detection of invalid formulas
  • faster proof search
  • nicer user interface and nicer trees
  • cleaner source
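For readers unfamiliar with the method: a tableau prover tries to refute the negation of a formula by expanding it into branches and checking that every branch closes on a contradiction. Here is a minimal propositional version, purely to illustrate the idea; the actual prover's code (and its modal rules) is more involved:

```python
# Minimal propositional tableau prover (illustrative sketch only).
# Formulas: ('atom', name), ('not', f), ('and', f, g), ('or', f, g).

def closes(branch):
    """Return True iff every full expansion of the branch contains
    an atom together with its negation."""
    atoms = {f for f in branch if f[0] == 'atom'}
    if any(('not', a) in branch for a in atoms):
        return True                                # branch closed
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i+1:]
        if f[0] == 'not':
            g = f[1]
            if g[0] == 'not':                      # double negation
                return closes(rest + [g[1]])
            if g[0] == 'and':                      # neg-conjunction: branch
                return (closes(rest + [('not', g[1])]) and
                        closes(rest + [('not', g[2])]))
            if g[0] == 'or':                       # neg-disjunction: both
                return closes(rest + [('not', g[1]), ('not', g[2])])
        elif f[0] == 'and':                        # conjunction: both
            return closes(rest + [f[1], f[2]])
        elif f[0] == 'or':                         # disjunction: branch
            return closes(rest + [f[1]]) and closes(rest + [f[2]])
    return False                                   # fully expanded, still open

def valid(f):
    """A formula is valid iff the tableau for its negation closes."""
    return closes([('not', f)])

p = ('atom', 'p')
print(valid(('or', p, ('not', p))))   # True: p or not-p is valid
print(valid(p))                       # False: p alone is not
```

The modal extension adds rules for box and diamond that spawn new worlds and copy formulas along the accessibility relation, which is where the choice of normal modal logic comes in.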

I hope there aren't too many new bugs. Let me know if you find one!

A desire that thwarts decision theory

Suppose we want our decision theory to not impose strong constraints on people's ultimate desires. You may value personal wealth, or you may value being benevolent and wise. You may value being practically rational: you may value maximizing expected utility. Or you may value not maximizing expected utility.

This last possibility causes trouble.

If not maximizing expected utility is your only basic desire, and you have perfect and certain information about your desires, then arguably (although the argument isn't trivial) every choice in every decision situation you can face has equal expected utility; so you are bound to maximize expected utility no matter what. Your desire can't be fulfilled.
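The fixed-point flavour of the argument can be seen in a brute-force check. The setup below is a deliberately crude toy: two acts, utilities restricted to 0 and 1, and the assumption that with certain self-knowledge the expected utility of an act just is the utility of choosing it:

```python
# Toy model: the agent's only desire is not to maximize expected utility.
# An act is worth 1 iff it fails to maximize EU. We search for EU
# assignments to the two acts A, B that are consistent with this desire.

from itertools import product

consistent = []
for eu_A, eu_B in product([0, 1], repeat=2):
    top = max(eu_A, eu_B)
    maximizers = {a for a, eu in [('A', eu_A), ('B', eu_B)] if eu == top}
    # EU of an act = utility of choosing it = 1 iff it is not a maximizer.
    implied = (0 if 'A' in maximizers else 1,
               0 if 'B' in maximizers else 1)
    if (eu_A, eu_B) == implied:
        consistent.append((eu_A, eu_B))

print(consistent)  # [(0, 0)]
```

The only self-consistent assignment gives both acts equal expected utility, so whatever the agent picks counts as maximizing and the desire goes unfulfilled, as the argument claims.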

Why would you do that?

I'm generally happy with Causal Decision Theory. I think two-boxing is clearly the right answer in Newcomb's problem, and I'm not impressed by any of the alleged counterexamples to Causal Decision Theory that have been put forward. But there's one thing I worry about: what exactly the theory should say, and how it should be spelled out.

Suppose you face a choice between two acts A and B. Loosely speaking, to evaluate these options, we need to check whether the A-worlds are on average better than the B-worlds, where the "average" is weighted by your credence on the subjunctive supposition that you make the relevant choice. Even more loosely, we want to know how good the world would be if you were to choose A, and how good it would be if you were to choose B. So we need to know what else would be the case if you were to choose, say, A.
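The loose evaluation rule can be written down in miniature. All the numbers and the supposition function below are made up for illustration; nothing here settles the hard question of what the subjunctive credences should be:

```python
# EU of an act = average value of the worlds, weighted by credence
# under the subjunctive supposition that the act is chosen.

def expected_utility(act, worlds, cr_given_sup, value):
    return sum(cr_given_sup(w, act) * value[w] for w in worlds)

worlds = ['w1', 'w2']
value = {'w1': 10, 'w2': 0}

def cr_given_sup(w, act):
    """Cr(w || act): credence in w on the subjunctive supposition of act
    (stipulated numbers, purely for illustration)."""
    table = {('A', 'w1'): 0.8, ('A', 'w2'): 0.2,
             ('B', 'w1'): 0.3, ('B', 'w2'): 0.7}
    return table[(act, w)]

print(expected_utility('A', worlds, cr_given_sup, value))  # 8.0
print(expected_utility('B', worlds, cr_given_sup, value))  # 3.0
```

Everything contentious is hidden in `cr_given_sup`: spelling out what would be the case if you were to choose A is precisely the problem the post goes on to discuss.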

Objects of revealed preference

A common assumption in economics is that utilities are reducible to choice dispositions. The story goes something like this. Suppose we know what an agent would choose if she were asked to pick one from a range of goods. If the agent is disposed to choose X, and Y was an available alternative, we say that the agent prefers X over Y. One can show that if the agent's choice dispositions satisfy certain formal constraints, then they are "representable" by a utility function in the sense that whenever the agent prefers X over Y, the function assigns greater value to X than to Y. This utility function is assumed to be the agent's true utility function, telling us how much the agent values the relevant goods.
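The representation step in this story is simple for finitely many goods with complete, transitive preferences: rank each good by how many alternatives the agent prefers it to. The goods and the choice dispositions below are invented for illustration:

```python
# Toy representation of revealed preferences by a utility function.
# Assumption: finitely many goods and a complete, transitive strict
# preference read off from hypothetical pairwise choices.

goods = ['apple', 'banana', 'cherry']

# prefers[x] is the set of goods y such that, offered {x, y}, the
# agent would pick x.
prefers = {
    'apple':  {'banana', 'cherry'},
    'banana': {'cherry'},
    'cherry': set(),
}

# u(x) = number of goods the agent prefers x to.
utility = {x: len(prefers[x]) for x in goods}
print(utility)  # {'apple': 2, 'banana': 1, 'cherry': 0}

# The representation condition: x is preferred to y iff u(x) > u(y).
assert all((y in prefers[x]) == (utility[x] > utility[y])
           for x in goods for y in goods if x != y)
```

Note what the construction does and doesn't give us: any increasing transform of `utility` represents the same dispositions, which is one reason to doubt that the recovered function is the agent's "true" utility function.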
