
A fork time puzzle

According to a popular view about counterfactuals, a counterfactual hypothesis 'if A had happened…' shifts the world of evaluation to worlds that are much like the actual world until shortly before the time of A, at which point they start to deviate from the actual world in a minimal way that allows A to happen. 'If A had happened, C would have happened' is true iff all such worlds are C worlds. The time "shortly before A" when the worlds start to deviate is the fork time.

Now remember the case of Pollock's coat (introduced in Nute (1980)). John Pollock considered 'if my coat had been stolen last night…'. He stipulated that there were two occasions on which the coat could have been stolen. By the standards of Lewis (1979), worlds where it was stolen on the second occasion are more similar to the actual world than worlds where it was stolen on the first occasion. Lewis's similarity semantics therefore predicts that if the coat had been stolen, it would have been stolen on the second occasion. This doesn't seem right.

Reasoning about doom

I occasionally teach the doomsday argument in my philosophy classes, with the hope of raising some general questions about self-locating priors. Unfortunately, the usual formulations of the argument are problematic in so many ways that it's hard to get to these questions.

Let's look at Nick Bostrom's version of the argument, as presented for example in Bostrom (2008).
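
For orientation, here is the Bayesian shift that drives such arguments, in a toy form of my own (plain WebPPL, not Bostrom's formulation): two hypotheses about how many humans there will ever be, with one's own birth rank treated as a uniform draw among all humans.

var doom = Infer({model: function() {
    var total = uniformDraw([200, 2000]);    // total number of humans ever, in billions
    var rank = 1 + randomInteger(total);     // my birth rank, uniform given the total
    condition(rank == 60);                   // observe: I'm roughly the 60 billionth human
    return total;
}});
display(doom);   // the 'small' hypothesis comes out about ten times as probable

Whether one's birth rank may be treated as a uniform draw in this way is, of course, one of the questions about self-locating priors that the argument is meant to raise.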

An RSA model of SDA

In this post, I'll develop an RSA model that explains the simplification of disjunctive antecedents (SDA): why 'if A or B then C' is usually taken to imply 'if A then C' and 'if B then C', even if the conditional has a Lewis/Stalnaker ("similarity") semantics, on which the inference is invalid.

I'll write 'A>C' for the conditional 'if A then C'. For the purposes of this post, we assume that 'A>C' is true at a world w iff all the closest A worlds to w are C worlds, by some contextually fixed measure of closeness.
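
As a toy illustration of this truth condition (a sketch of my own, not part of the model developed in this post), here is how 'A>C' could be evaluated over a finite set of worlds, with 'A' and 'C' given as functions from worlds to truth values and a numerical function 'distanceFromW' standing in for the closeness measure:

var closestAWorlds = function(A, worlds, distanceFromW) {
    var aWorlds = filter(A, worlds);
    var minDist = reduce(function(v, acc) {
        return Math.min(distanceFromW(v), acc);
    }, Infinity, aWorlds);
    return filter(function(v) { return distanceFromW(v) == minDist }, aWorlds);
};
// 'A>C' is true at w iff every closest A world to w is a C world
var would = function(A, C, worlds, distanceFromW) {
    var closest = closestAWorlds(A, worlds, distanceFromW);
    return filter(C, closest).length == closest.length;
};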

It has often been observed that the simplification effect resembles the "Free Choice" effect, i.e., the apparent entailment of '◇A' and '◇B' by '◇(A∨B)', where the diamond is a possibility modal (permission, in the standard example). But there are also important differences.

An RSA model of free choice

Let's continue. I'm going to present a new (?) model of free choice. Free choice is the phenomenon that a disjunction embedded in a possibility modal conveys the possibility of both disjuncts. 'You may have tea or coffee', for example, conveys that you may have tea and you may have coffee. Champollion, Alsop, and Grosu (2019) present an RSA model of this effect, drawing on the "lexical uncertainty" account from Bergen, Levy, and Goodman (2016). I'll present a model that does not rely on lexical uncertainty.
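
As a quick illustration of why this reading can't come from the literal semantics alone (a toy sketch in plain WebPPL, not the model I'm about to present): 'you may have tea or coffee' is literally true whenever at least one of the two options is allowed, so a purely literal hearer remains undecided about whether both are.

var states = ['neither allowed', 'only tea allowed', 'only coffee allowed', 'both allowed'];
// literal meaning of 'you may have tea or coffee': some option is allowed
var mayTeaOrCoffee = function(state) { return state != 'neither allowed' };
var literalHearer = Infer({model: function() {
    var state = uniformDraw(states);
    condition(mayTeaOrCoffee(state));
    return state;
}});
display(literalHearer);   // uniform over the three states in which something is allowed

The free choice reading, on which both options are allowed, gets no special boost from the literal meaning; it has to come from pragmatic reasoning about the speaker's choice of words.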

RSA vs IBR

In this post, I want to compare the Rational Speech Act approach with the Iterated Best Response approach of Franke (2011). I'm also going to discuss Franke's IBR model of Free Choice, turn it into an RSA model, and explain why I find both unconvincing.

Let's back up a little.

Lewis (1969) argued that linguistic conventions solve a game-theoretic coordination problem.

RSA models with partially informed speakers

Let's model a few situations in which the hearer does not assume that the speaker has full information about the topic of their utterance.

Goodman and Stuhlmüller (2013) consider a scenario in which a speaker wants to communicate how many of three apples are red. The hearer isn't sure whether the speaker has seen all the apples. Chapter 2 of problang.org gives two models of this scenario. The first makes very implausible predictions. The second is very complicated. Here's a simple model that gives the desired results.

// The eight possible states: the colours (R or G) of the three apples.
var states = ['RRR','RRG','RGR','GRR','RGG','GRG','GGR','GGG'];
// Literal meanings of the available utterances, as functions from states to
// truth values; '-' is a trivial utterance that is true in every state.
var meanings = {
    'all': function(state) { return !state.includes('G') },
    'some': function(state) { return state.includes('R') },
    'none': function(state) { return !state.includes('R') },
    '-': function(state) { return true }
};
// The states compatible with seeing the first 'access' apples of 'state'.
var observation = function(state, access) {
    return filter(function(s) {
        return s.slice(0,access) == state.slice(0,access);
    }, states);
};
// The literal hearer: indifferent prior over the states; an utterance is
// interpreted via its literal meaning.
var hearer0 = Agent({
    credence: Indifferent(states),
    kinematics: function(utterance) {
        return function(state) {
            return evaluate(meanings[utterance], state);
        }
    }
});
// A speaker who has made observation obs: her credence is the indifferent
// prior updated on obs, and the utility of uttering u in state s is the
// literal hearer's score for s after learning u.
var speaker1 = function(obs) {
    return Agent({
        options: keys(meanings),
        credence: update(Indifferent(states), obs),
        utility: function(u,s){
            return learn(hearer0, u).score(s);
        }
    });
};
// The speaker's choices after seeing two red apples (in state RRR) and after
// seeing two green apples (in state GGG).
showChoices(speaker1, [observation('RRR', 2), observation('GGG', 2)]);

Pragmatic non-equivalence despite semantic equivalence

Bergen, Levy, and Goodman (2016) assert that "the rational speech acts model, and neo-Gricean models more generally, cannot derive distinct pragmatic interpretations for semantically equivalent expressions".

In the previous post, I gave a counterexample. I presented an RSA model that explains why 'pockets' is interpreted as plural and 'a pocket' as singular, even though the two expressions are semantically equivalent.

RSA models of scalar implicature

In this post, we'll model different kinds of scalar implicature. I'll introduce several ideas and techniques that prove useful for other topics as well.

Let's begin with the textbook example, the inference from 'some' to 'not all' (for which Goodman and Stuhlmüller (2013) give an RSA-type explanation).

A speaker wants to communicate the results of an exam. The available utterances are 'all students passed', 'some students passed', and 'no students passed'; for short: 'all', 'some', and 'none'. We can represent their meanings as functions from states to truth values:

var states = ['∀', '∃¬∀', '¬∃'];   // all passed, some but not all passed, none passed
var meanings = {
    'all': function(state) { return state == '∀' },
    'some': function(state) { return state != '¬∃' },
    'none': function(state) { return state == '¬∃' }
};
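
As a preview of the standard RSA architecture for this example (a plain-WebPPL sketch in the style of problang.org, which may differ from the formulation I'll use below): a literal hearer conditions on the truth of the utterance, the speaker favours utterances that give the literal hearer high credence in the actual state, and a pragmatic hearer reasons about that speaker.

var utterances = ['all', 'some', 'none'];
var literalHearer = function(utterance) {
    return Infer({model: function() {
        var state = uniformDraw(states);
        condition(meanings[utterance](state));
        return state;
    }});
};
var speaker = function(state) {
    return Infer({model: function() {
        var utterance = uniformDraw(utterances);
        factor(literalHearer(utterance).score(state));
        return utterance;
    }});
};
var pragmaticHearer = function(utterance) {
    return Infer({model: function() {
        var state = uniformDraw(states);
        factor(speaker(state).score(utterance));
        return state;
    }});
};
display(pragmaticHearer('some'));   // most of the probability goes to '∃¬∀'

On this computation, the pragmatic hearer ends up putting about three quarters of her credence on the '∃¬∀' state after hearing 'some': a speaker in the '∀' state would more likely have said 'all'.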

Exploring the Rational Speech Act framework

I've been playing around with the Rational Speech Act framework lately, and I want to write a few blog posts clarifying my thoughts. In this post, I'll introduce the framework and go through a simple application.

The guiding idea behind the Rational Speech Act framework is to model speakers and hearers as rational (Bayesian) agents who think strategically about each other's behaviour. A hearer doesn't just update on the literal content of an utterance, but on the fact that the utterance has been made, by a speaker who anticipated that the hearer would update in some such way.

In its purest form, this kind of reasoning leads to an infinite regress. To interpret your utterance, I need to figure out why you made it. To do that, I need to figure out how you thought I would interpret the utterance, which depends on what you believe about what I believe about why you made it, and so on.

The semantic blindness of scalar implicatures

Magri (2009) points out that the computation of scalar implicatures appears to be insensitive ("blind") to contextual knowledge. This is indicated by the oddness of sentences like (1) and (2):

(1) Some Italians come from a warm country.
(2) John is sometimes tall.

Plausibly, these sound odd because their implicature-strengthened meaning clashes with our background knowledge: in the case of (1), that all Italians come from the same country; in the case of (2), that people's height is a stable property.
