Let's model a few situations in which the hearer does not assume that the speaker has full information about the topic of their utterance.
Goodman and Stuhlmüller (2013) consider a scenario in which a speaker wants to communicate how many of three apples are red. The hearer isn't sure whether the speaker has seen all the apples. Chapter 2 of problang.org gives two models of this scenario. The first makes very implausible predictions. The second is very complicated. Here's a simple model that gives the desired results.
var states = ['RRR','RRG','RGR','GRR','RGG','GRG','GGR','GGG'];
var meanings = {
  'all': function(state) { return !state.includes('G') },
  'some': function(state) { return state.includes('R') },
  'none': function(state) { return !state.includes('R') },
  '-': function(state) { return true } // the null utterance: saying nothing
}
// observation(state, access) returns the states compatible with having inspected
// the first `access` apples; e.g. observation('RRR', 2) is ['RRR', 'RRG'].
var observation = function(state, access) {
  return filter(function(s) {
    return s.slice(0,access) == state.slice(0,access);
  }, states);
}
// The literal hearer simply updates on the truth of the utterance.
var hearer0 = Agent({
  credence: Indifferent(states),
  kinematics: function(utterance) {
    return function(state) {
      return evaluate(meanings[utterance], state);
    }
  }
});
// The speaker has only partial access to the state: their credence is the uniform
// prior conditioned on their observation. Their utility for an utterance is the
// literal hearer's updated (log) credence in the actual state.
var speaker1 = function(obs) {
  return Agent({
    options: keys(meanings),
    credence: update(Indifferent(states), obs),
    utility: function(u,s){
      return learn(hearer0, u).score(s);
    }
  });
};
// Compare a speaker who has seen the first two apples and found them red
// with one who has seen the first two apples and found them green:
showChoices(speaker1, [observation('RRR', 2), observation('GGG', 2)]);
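Assuming that score returns log probabilities, as in stock WebPPL, the model behaves as advertised: with observation('RRR', 2), the speaker's credence is split between 'RRR' and 'RRG'; 'all' is false in 'RRG', so its expected utility is minus infinity, and the speaker prefers 'some'. With observation('GGG', 2), each of 'all', 'some', and 'none' is false in some state the speaker considers possible, so only the null utterance '-' survives. A hearer who knows that the speaker has inspected just two apples should therefore not read 'not all' into their 'some'.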
Bergen, Levy, and Goodman (2016) assert that "the rational speech acts model, and neo-Gricean models more generally, cannot derive distinct pragmatic interpretations for semantically equivalent expressions".
In the previous post, I gave a counterexample: an RSA model that explains why 'pockets' is interpreted as plural and 'a pocket' as singular, even though the two expressions are semantically equivalent.
In this post, we'll model different kinds of scalar implicature. I'll introduce several ideas and techniques that prove useful for other topics as well.
Let's begin with the textbook example, the inference from 'some' to 'not all' (for which Goodman and Stuhlmüller (2013) give an RSA-type explanation).
A speaker wants to communicate the results of an exam. The available utterances are 'all students passed', 'some students passed', and 'no students passed'; for short: 'all', 'some', and 'none'. We can represent their meaning as functions from states to truth values:
var states = ['∀', '∃¬∀', '¬∃'];
var meanings = {
  'all': function(state) { return state == '∀' },
  'some': function(state) { return state != '¬∃' },
  'none': function(state) { return state == '¬∃' }
};
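To see how the 'not all' inference arises, here is a minimal sketch of a literal hearer and a level-1 speaker for this scenario, reusing the Agent, Indifferent, update, learn, and showChoices helpers from the apple model earlier on this page:
// The literal hearer simply updates on the truth of the utterance.
var hearer0 = Agent({
  credence: Indifferent(states),
  kinematics: function(utterance) {
    return function(state) {
      return evaluate(meanings[utterance], state);
    }
  }
});
// The speaker knows the exam result; their utility for an utterance is the
// literal hearer's updated (log) credence in the actual state.
var speaker1 = function(state) {
  return Agent({
    options: keys(meanings),
    credence: update(Indifferent(states), [state]),
    utility: function(u,s) {
      return learn(hearer0, u).score(s);
    }
  });
};
showChoices(speaker1, states);
The speaker prefers 'some' only in the '∃¬∀' state; in '∀', 'all' leaves the hearer with more credence in the actual state. A pragmatic hearer who inverts this speaker will accordingly take 'some' to indicate '∃¬∀', which is the 'not all' implicature.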
I've been playing around with the Rational Speech Act framework lately, and I want to write a few blog posts clarifying my thoughts. In this post, I'll introduce the framework and go through a simple application.
The guiding idea behind the Rational Speech Act framework is to model speakers and hearers as rational (Bayesian) agents who think strategically about each other's behaviour. A hearer doesn't just update on the literal content of an utterance, but on the fact that the utterance has been made, by a speaker who anticipated that the hearer would update in some such way.
In its purest form, this kind of reasoning leads to an infinite regress. To interpret your utterance, I need to figure out why you made it. To do that, I need to figure out how you thought I would interpret the utterance, which depends on what you believe about what I believe about why you made it, and so on.
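The standard move, roughly following Goodman and Stuhlmüller (2013), is to cut off the regress at a 'literal' hearer who conditions only on the truth of the utterance; a speaker then chooses utterances by how much credence they leave this hearer in the actual state, and a pragmatic hearer inverts the speaker. Schematically, with \(P\) a prior over states, \([\![u]\!](s)\) the truth value of utterance \(u\) in state \(s\), and \(\alpha\) a rationality parameter:
\( P_{L_0}(s \mid u) \propto [\![u]\!](s)\, P(s) \)
\( P_{S_1}(u \mid s) \propto \exp(\alpha \log P_{L_0}(s \mid u)) \)
\( P_{L_1}(s \mid u) \propto P_{S_1}(u \mid s)\, P(s) \)
The hearer0 and speaker1 agents in the code above implement the first two levels of this hierarchy.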
Magri (2009) points out that the computation of scalar implicatures appears to be insensitive ("blind") to contextual knowledge. This is indicated by the oddness of sentences like (1) and (2):
(1) Some Italians come from a warm country.
(2) John is sometimes tall.
Plausibly, these sound odd because their implicature-strengthened meaning clashes with our background knowledge that – in the case of (1) – all Italians come from the same country and – in the case of (2) – people's height is a stable property.
Has it been noted that McGee conditionals seem to clash with the Simplification of Disjunctive Antecedents (SDA)?
Consider the following conditional, inspired by McGee (1985).
(1) If a Republican had won then if it hadn't been Reagan then it would have been Andersen.
For context, imagine a scenario in which there were exactly two Republican candidates for the office in question, called Reagan and Andersen. Neither won. In this kind of context, (1) seems fine. So does (2).
(2) If Reagan or Andersen had won then if Reagan hadn't won then Andersen would have won.
Now, SDA (in its strong form) is the hypothesis that a conditional of the form 'if A or B then C' is equivalent to the conjunction of 'if A then C' and 'if B then C'. Applying this to (2), we would predict that (2) is equivalent to the conjunction of (3) and (4).
(3) If Reagan had won then if Reagan hadn't won then Andersen would have won.
(4) If Andersen had won then if Reagan hadn't won then Andersen would have won.
It is well-known that disjunctive possibility and necessity statements appear to imply the possibility of the disjuncts:
(FC) \( \Diamond(p \lor q) \Rightarrow \Diamond p \land \Diamond q \).
(RP) \( \Box(p \lor q) \Rightarrow \Diamond p \land \Diamond q \).
The first kind of inference is known as a "free choice" inference, the second as "Ross's Paradox".
For example, (1a) seems to imply (1b) and (1c):
(1a) Alice might [or: must] have gone to the party or to the concert.
(1b) Alice might have gone to the party.
(1c) Alice might have gone to the concert.
In chapter 3 of his dissertation (Booth 2022), Richard Booth points out that (FC) and (RP) underdescribe the true effect.
In around 2009, I got interested in counterpart-theoretic interpretations of modal predicate logic. Lewis's original semantics, from Lewis (1968), has some undesirable features, due to his choice of giving the box a "strong" reading (in the sense of Kripke (1971)), but it's not hard to define a better-behaved form of counterpart semantics that gives the box its more familiar "weak" reading.
Wondering if anyone had figured out the logic determined by this semantics, I found an answer in Kutz (2000) and Kracht and Kutz (2002). I also learned that counterpart semantics seems to overcome some formal limitations of the more standard "Kripke semantics". For example, while all logics between quantified S4.3 and S5 are incomplete in Kripke semantics (as shown in Ghilardi (1991)), many are apparently complete in the "functor semantics" of Ghilardi (1992), which I do not understand but which is said to have a counterpart-theoretic flavour. Skvortsov and Shehtman (1993) present a somewhat more accessible "metaframe semantics", inspired by Ghilardi's approach, and claim that the quantified versions of all canonical extensions of S4 remain canonical (and hence complete) in metaframe semantics. Kracht and Kutz argue that their – much simpler – counterpart semantics inherits these properties of functor and metaframe semantics.
I've been using GitHub Copilot for a while now to write philosophy and logic texts. It's definitely useful for more technical writing. Here you can see how it fills in a clause in a proof by induction:
Champollion, Ciardelli, and Zhang (2016) argue that truth-conditionally equivalent sentences can make different contributions to the truth-conditions of larger sentences in which they embed. This seems obviously true. 'There are infinitely many primes' and Fermat's Last Theorem are truth-conditionally equivalent, but 'I can prove that there are infinitely many primes' is true, while 'I can prove that there are no positive integers a, b, c, and n > 2 for which \(a^n + b^n = c^n\)' is false. Champollion, Ciardelli, and Zhang (henceforth, CCZ) have a more interesting case in mind. They argue that substituting logically equivalent sentences in the antecedent of a subjunctive conditional can make a difference to the conditional's truth-value.