Morality is global

A strange aspect of the literature on metaethics is that most of it sees morality as a local phenomenon, located in specific acts or events (or people or outcomes). I guess this goes back to G.E. Moore, who asked what it means to call something 'good'.

That's not how I think of morality. The basic moral facts are global. They don't pertain to specific acts or events.

Here, morality contrasts with, say, phenomenal consciousness. Some creatures (in some states) are phenomenally conscious, others are not. Intuitively, this is a basic fact about the relevant creatures. Hence it makes sense to wonder whether one creature is conscious and another isn't, even if we know that they are alike in other respects. With moral properties, this doesn't make sense. If two events are alike descriptively, they must be alike morally.

Santorio on being neither able nor unable

Some ability statements sound wrong when affirmed but also when denied. Santorio (2024) proposes a new semantics that's built around this observation.

Suppose Ava is a mediocre dart player, and it's her turn. In this context, people often reject (1):

(1) Ava is able to hit the bullseye [on her next throw].

It's obviously possible that Ava gets lucky and hits the bullseye. But ability seems to require more than the mere possibility of success. A common idea, which Santorio endorses, is that ability comes with a no-luck condition.

Mental content and functional role

Propositional attitudes have an attitude type (belief, desire, etc.) and a content. A popular idea in the literature on intentionality is that attitude type is determined by functional role, while content is determined in some other way. One can find this view, for example, in Fodor (1987, 17), Dretske (1995, 83), or Loewer (2017, 716). I don't see how it could be correct.

Aggregating utility across time

Standard decision theory studies one-shot decisions, where an agent faces a single choice. Real decision problems, one might think, are more complex. To find the way out of a maze, or to win a game of chess, the agent needs to make a series of choices, each dependent on the others. Dynamic decision theory (aka sequential decision theory) studies such problems.

There are two ways to model a dynamic decision problem. On one approach, the agent realizes some utility at each stage of the problem. Think of the chess example. A chess player may get a large amount of utility at the point when she wins the game, but she plausibly also prefers some plays to others, even if they both lead to victory. Perhaps she enjoys a novel situation in move 23, or having surprised her opponent in move 38. We can model this by assuming that the agent receives some utility for each stage of the game. The total utility of a play is the sum of the utilities of its stages.
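The additive model can be sketched in a few lines of code. This is just a toy illustration: the stage utilities below are made-up numbers for two hypothetical chess plays, both ending in victory.

```python
# Toy illustration of the additive model: the agent receives some
# utility at each stage of a play, and the play's total utility is
# the sum of its stage utilities. All numbers are made up.

def total_utility(stage_utilities):
    """Total utility of a play under the additive model."""
    return sum(stage_utilities)

# Two plays with the same outcome (a win, worth 10 at the final stage),
# but one with enjoyable moments along the way (a novel position, a
# surprised opponent) that carry some stage utility of their own.
eventful_play = [0, 0, 2, 0, 1, 10]
dull_play     = [0, 0, 0, 0, 0, 10]

print(total_utility(eventful_play))  # 13
print(total_utility(dull_play))      # 10
```

On this model, the eventful play comes out better than the dull one even though both lead to victory, which matches the preference ascribed to the chess player above.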
