Baccelli and Mongin (and others) on redescribing the outcomes
There are many alleged counterexamples to expected utility theory: Allais's Paradox, Ellsberg's Paradox, Sen's (1993) polite agent who prefers the second-largest slice of cake, Machina's (1989) mother who prefers fairness when giving a treat to her children, and so on. In all these cases, the preferences of seemingly reasonable people appear not to rank the options by their expected utility.
Those who make these claims generally assume that utility is a function of material goods. In Allais's Paradox, for example, the possible "outcomes" (of which utility is a function) are assumed to be amounts of money. As has often been pointed out, the apparent violations of expected utility theory all go away if the outcomes are individuated more finely – if, for example, we distinguish between an outcome of getting $1000 as the result of a risky gamble and an outcome of getting a sure $1000. See, for example, Weirich (1986), or Dreier (1996).
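To see how the redescription works, consider the standard version of Allais's Paradox. Most people prefer a sure $1M (call it 1A) to a gamble 1B that gives an 89% chance of $1M, a 10% chance of $5M, and a 1% chance of nothing; yet they also prefer a 10% chance of $5M (2B) to an 11% chance of $1M (2A). If utility is a function of money alone, the first preference requires

    0.11·u($1M) > 0.10·u($5M) + 0.01·u($0),

while the second requires

    0.10·u($5M) + 0.01·u($0) > 0.11·u($1M).

No utility function over monetary amounts satisfies both. But if the $0 outcome in 1B is redescribed – as, say, "getting nothing after passing up a sure $1M" – then the two inequalities involve different outcomes, and the conflict disappears.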
The view that many apparent counterexamples to the expected utility model can be explained away by redescribing the outcomes is common in philosophy, but very uncommon everywhere else.
I think the disagreement is largely verbal. The two camps are interested in different topics, and they mean different things by 'utility'.
Economists would like to establish certain regularities in consumer behaviour that allow deriving laws of supply, demand, variable proportions, etc. One can derive such laws if one assumes that people assign a certain "utility" to material goods – irrespective of whether the goods are already owned, how they are acquired, and so on – and that their choices maximize the expectation of this "utility" function. (According to the revealed preference doctrine, this assumption is equivalent to assuming that people have preferences over lotteries (or gambles) involving material goods that conform to certain axioms.)
In fact, of course, people don't just care about material goods. We care about politeness, safety, fairness, predictability, feelings of excitement or regret, and so on. Hence the apparent counterexamples to "expected utility theory".
When we philosophers talk about expected utility, we are (for the most part) interested in a high-level, general theory about how one should act in order to promote some goals in light of some information, without putting substantive constraints on the relevant goals. When we talk about an agent's utility function, we assume that the function represents everything the agent ultimately cares about. Utility, on this usage, isn't a function of material goods.
Our expected utility model is much weaker than the economists' model. On its own, it doesn't make any predictions about consumer behaviour, or about any other kind of behaviour. Any behaviour whatsoever can be made to conform to the philosophical EU model. This is as it should be. For any behaviour, one can imagine an agent whose only goal is to display just that behaviour. Displaying the behaviour is surely a good way of promoting the goal.
Some, however, have argued that it is wrong to redescribe the outcomes. Lara Buchak argues against what she calls "global redescriptions" in chapter 4 of Buchak (2013). Jean Baccelli and Philippe Mongin argue against all kinds of redescription in Mongin and Baccelli (2020) and Baccelli and Mongin (2021).
Buchak, Baccelli, and Mongin all complain that unconstrained redescription would trivialise the expected utility model. In this context, they discuss some proposed limits on allowable redescriptions.
Broome (1991, 103), for example, suggests that "outcomes should be distinguished as different iff they differ in a way that makes it rational to have a preference between them". Gustafsson (2022, 14) similarly suggests that "outcomes x and y should be treated as the same if and only if it is rationally required to be indifferent between the sure prospects of x and y".
I disagree.
Suppose we judge that rationality requires indifference between x and y. We can nonetheless imagine an agent who (irrationally) prefers x to y. When given a straight choice between x and y, the agent would choose x. She would pay to get x rather than y. And so on. According to Broome and Gustafsson, this person's utility function assigns the same value to x and y, simply because utility functions are defined so that they can't distinguish between x and y. It follows, on their view, that our agent violates the expected utility norm when she chooses x over y.
I don't think this is a useful description of the scenario. Our agent isn't acting against her desires. She does exactly what she ought to do in light of her desire for x over y. What we should say is that she assigns greater utility to x than to y, that her choice of x is in line with the EU norm, but that her preference for x over y is irrational.
So I don't think we should put any non-trivial constraints on redescriptions.
Baccelli and Mongin complain that this makes the redescription strategy ad hoc and unsystematic. But this isn't true.
Every systematic EU model should have something to say about how the "outcomes" are individuated. One attractive idea, adopted for example in Lewis (1981), is to identify outcomes with "value-level propositions", assuming the theory of value developed in Jeffrey (1965). Another natural idea, popular in ethics, is that outcomes are complete worlds: the outcome of an act in a particular decision situation comprises everything that would be the case if the act were chosen.
Either way, it is guaranteed that an outcome specifies everything that matters to the agent, even if the agent has unreasonable goals. By contrast, it is not at all obvious why a mere specification of material goods should count as an adequate representation of an outcome.
We shouldn't assume that the material-goods individuation is somehow correct by default, and that everyone else is "redescribing".
Baccelli and Mongin also complain that the redescription strategy is "theoretically insulated" from non-EU models that are tailored to specific violations of standard EU theory, as understood outside philosophy. Some such models specifically allow an agent to care about politeness. Others allow agents to care about regret. Others allow them to care about "ambiguity". One might add Buchak's "risk-weighted EU theory" that allows agents to care about risk (in a highly constrained manner).
As a philosopher who is interested in the EU model as a general model of the connection between goals, beliefs, and behaviour, I really don't take much interest in these developments. All these models are far too restrictive in what kinds of goals they allow for.
But that's the point. I'm interested in a different topic. I don't want to derive the law of demand from a predictively powerful model of consumer behaviour.
If this were my topic, I think I would try to develop models that keep the simple EU rule and instead constrain the utility function. Pettigrew (2015) shows how Buchak's model can be rephrased in this manner. Mongin and Baccelli (2020) say that it would be "utterly unpractical" to develop models of regret along these lines. Perhaps they are right. Perhaps it really proves more convenient, for some applications, to stick with a "utility" function that's only defined over material goods and revise the EU rule.
It's unfortunate that the same labels – 'decision theory', 'expected utility theory' – are used for these different projects.