Functionalism and the nature of propositions
Let's assume that propositional attitudes are not metaphysically fundamental: if someone has such-and-such beliefs and desires, that is always due to other, more basic, and ultimately non-intentional facts. In terms of supervenience: once all non-intentional facts are settled, all intentional facts are settled as well.
Then how are propositional attitudes grounded in non-intentional facts? A promising approach is to identify a characteristic "functional role" of propositional attitudes and then explain facts about propositional attitudes in terms of facts about the realization of that role. (We could also identify the attitude with the realizer, or with the higher-order property of having a realizer, but that's optional.)
Here is a toy example. Let's say that a pair (P,V) of a probability measure P and a basic value measure V licenses choice C in a given decision problem iff C maximizes expected utility relative to P and V in that decision problem. We say that the functional role of the attitudes P,V is to cause behaviour that is licensed by (P,V). Informally, on that account, to have subjective degrees of belief P and values V is to be in a state that tends to cause choices that are sensible by the lights of P and V. The account is inadequate for several reasons, but the complications we'd have to add don't really matter for the topic I want to discuss.
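The toy account can be made concrete in a few lines of code. Everything below -- the function names, the decision problem, the numbers -- is invented for illustration; it is a sketch of expected utility maximization, not part of the philosophical account itself.

```python
def expected_utility(choice, states, P, V, outcome):
    """Expected utility of `choice`: sum over states of P(state) * V(outcome)."""
    return sum(P[s] * V[outcome(choice, s)] for s in states)

def licensed_choices(choices, states, P, V, outcome):
    """The choices licensed by (P, V): those that maximize expected utility."""
    eu = {c: expected_utility(c, states, P, V, outcome) for c in choices}
    best = max(eu.values())
    return {c for c in choices if eu[c] == best}

# A toy decision problem: take an umbrella or not, given uncertainty about rain.
states = ["rain", "sun"]
P = {"rain": 0.3, "sun": 0.7}                      # probability measure
V = {"dry_burdened": 5, "wet": -10, "dry_free": 10}  # basic value measure

def outcome(choice, state):
    if choice == "umbrella":
        return "dry_burdened"
    return "wet" if state == "rain" else "dry_free"

print(licensed_choices(["umbrella", "no_umbrella"], states, P, V, outcome))
# With these numbers, taking the umbrella maximizes expected utility (5 vs 4).
```

On the toy account, to have the attitudes (P,V) is to be in a state that tends to produce whatever `licensed_choices` returns.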
The topic concerns the nature of propositions -- that is, the nature of the entities in the domain of P and V. Our functionalist account imposes three main constraints on those entities.
First, (pre-)Boolean structure. Standard probability theory assumes that the domain of a probability measure has the structure of a Boolean sigma-algebra. This means that there are operations AND, OR, NOT such that, for example, the proposition A AND (B OR C) is identical to the proposition (A AND B) OR (A AND C), for any propositions A,B,C. In fact, it wouldn't really hurt if we allowed these propositions to be different, as long as they always have the same probability. Thus if we wanted, we could allow for a merely pre-Boolean algebra of propositions.
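If propositions are modelled as sets (of worlds, say), the Boolean structure comes for free: AND, OR, and NOT are intersection, union, and complement, and identities like the distributive law above hold automatically. A minimal sketch, with the worlds arbitrarily represented as numbers:

```python
# Propositions as sets of "worlds" (here just the numbers 0-7).
WORLDS = frozenset(range(8))

def AND(a, b): return a & b      # intersection
def OR(a, b):  return a | b      # union
def NOT(a):    return WORLDS - a # complement

A, B, C = frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({3, 4, 5})

# The distributive law holds as a set-theoretic identity:
assert AND(A, OR(B, C)) == OR(AND(A, B), AND(A, C))
# Double negation likewise:
assert NOT(NOT(A)) == A
```

In a merely pre-Boolean algebra, the asserted identities could fail, as long as the two sides always received the same probability.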
Second, plenitude. The smallest Boolean algebra has only two elements, 0 and 1, but clearly such an austere space of propositions won't suffice for our application. We need a lot more propositions. Moreover, we need not only numerical plenitude, but also qualitative plenitude: intuitively speaking, if all our propositions entail that it is raining, we will run into trouble modelling agents who aren't certain that it is raining.
This second constraint is closely related to the next one.
Third, actualization by choice. We want to say that an agent has subjective probabilities P and values V iff they are in a state that gives rise to choices licensed by P and V. In that context, choices are understood as physical (or at least, non-intentional) events, since we want to explain intentional facts in terms of non-intentional facts. So part of our model must specify, for each (actual and possible) choice, which propositions would be true if the agent made that choice.
To illustrate, suppose we identify the propositions with sets of Lewisian worlds. If an agent faces a choice between turning left and turning right, we could then say that by turning left the agent would actualize or 'make true' the set of Lewisian worlds containing (mereologically) a left-turning counterpart of the agent. This should not be understood as a substantive hypothesis about a deep and mysterious property of actualization or truth. Rather, it is the part of the functionalist model that connects actual and possible choice behaviour to the objects in the domain of the probability and value measures. Without that part, the model would be completely useless because hypotheses about probabilities and values would never make any predictions about behaviour.
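The Lewisian actualization clause can be sketched as follows. The worlds are here crudely represented by what the agent's counterpart does in them; all the labels are invented for illustration.

```python
# Worlds, crudely modelled as records of what the agent's counterpart does.
worlds = {"w1": "left", "w2": "right", "w3": "left", "w4": "right"}

def strongest_actualized(choice):
    """The set of worlds containing a counterpart who makes `choice`."""
    return {w for w, turn in worlds.items() if turn == choice}

def makes_true(choice, proposition):
    """A choice makes a proposition (a set of worlds) true iff the
    proposition is entailed by -- i.e. is a superset of -- the strongest
    proposition the choice actualizes."""
    return strongest_actualized(choice) <= proposition

left_prop = {"w1", "w3"}                       # "the agent turns left"
assert makes_true("left", left_prop)
assert makes_true("left", left_prop | {"w2"})  # weaker propositions too
assert not makes_true("left", {"w1"})          # but not stronger ones
```

This is the piece of the model that lets hypotheses about P and V issue in predictions about behaviour: a choice is predicted iff it is licensed by (P,V), and the clause above fixes what each choice would make true.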
Sets of Lewisian worlds easily satisfy the first constraint, Boolean structure. Given Lewis's modal realism, one might also hope that they satisfy plenitude. I am not sure they do. I'm worried about "island universes", about reasons for extending the space of doxastic possibilities beyond the space of genuine, "metaphysical" possibilities, and about self-locating objects of belief and desire. That last problem also casts doubt on the above account of actualization: as Lewis himself suggested, it is arguably better to model propositions as sets of centred worlds; a set of centred Lewisian worlds is then actualized by a choice to turn left iff all individuals at the centre turn left at the time of the centre.
What else could we try? Could we identify propositions with, say, sentences of English? We would certainly not get Boolean structure, but one might hope that we'd get a pre-Boolean structure -- although ambiguity and vagueness arguably spoil that hope. We'd get a fair amount of plentitude, and we'd get some kind of actualization by choice: we could say that by turning left, an agent makes true 'I turn left' (or 'N turns left', where N is a name for the agent), as well as every sentence entailed by that.
It is not a problem for the sentential account that the meaning of English sentences is a high-level, intentional phenomenon, arguably grounded in the very psychological facts we're presently trying to explain. That would be a problem if we had tried to analyze believing that p as, say, being disposed to assent to some sentence which means that p. But I'm still assuming the above decision-theoretic form of functionalism. On that account, having beliefs P and values V is entirely a matter of choice dispositions, even if P is now understood as a function from English sentences to numbers. The conventional meaning of sentences is not offered as part of what makes it true that an agent has beliefs P and values V.
We could try many other candidates -- for example, "structured propositions", i.e., lists of individuals, relations, and logical operators. Actualization would here be defined so that by turning left, the agent N would make true the list (N, Turning-Left), as well as all lists entailed by that. Nonetheless, I'm skeptical that we'd get the package of actualization and plenitude right. Another interesting candidate is "primitive" propositions, that is, accounts on which propositions are not identified with anything else. Those accounts struggle especially with actualization: if propositions are unstructured blobs, how come just those propositions become "true" if the agent turns left? My own favourites are broadly combinatorial accounts where propositions are identified with regions in a high-dimensional state space.
On any approach, once we have one candidate, we have infinitely many. For example, instead of identifying the propositions with sets of Lewisian worlds, we could just as well identify them with functions from worlds to truth-values. All that would require is a minor adjustment to the definition of actualization.
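The point is easy to see in code: a set of worlds and its characteristic function are trivially intertranslatable, and the "minor adjustment" to actualization is a one-line change. All the names and worlds below are invented for illustration.

```python
# Two intertranslatable candidates for "the proposition that it rains":
# a set of worlds, and the characteristic function of that set.
worlds = ["w1", "w2", "w3", "w4"]
rain_set = {"w1", "w3"}
rain_fn = lambda w: w in rain_set   # the characteristic function

# The truth clause for sets ...
def true_at_set(prop, w):
    return w in prop

# ... and the minor adjustment needed for functions:
def true_at_fn(prop, w):
    return prop(w)

# The two candidates classify every world the same way:
assert all(true_at_set(rain_set, w) == true_at_fn(rain_fn, w) for w in worlds)
```

Nothing in the functional role distinguishes the two candidates; they support exactly the same predictions about behaviour.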
Jeff King (2012, p.12) complains about this:
The first thing I want to ask about worlds accounts is: which is it? Are propositions sets of worlds or characteristic functions of such sets? These are different things and something must be said about which are the propositions (or perhaps they both are?). So right off, worlds accounts are saddled with a Benacerraf problem.
King directs his objection to "worlds accounts", but it arguably affects any functionalist account of propositional attitudes. Our simple, decision-theoretic functionalism certainly underdetermines the domain of P and V. It imposes some constraints, but if there is anything that satisfies those constraints then there will always be many other things that satisfy them as well. I don't see how complicating the decision-theoretic account might help.
On the functionalist account of attitudes, propositions (i.e., objects of belief and desire) are not assumed to be fundamental entities, somehow missed out by fundamental physics. Nor are they nodes in the causal structure of the world. They are more like the numbers in theories of mass or temperature. We can represent an object's temperature by the number 12 (in Celsius), or by 54 (in Fahrenheit), or by the complicated set-theoretic structure that coincides with 12 on von Neumann's construction of cardinals. It would be a sign of confusion to ask which of these entities really is the value of the object's temperature. Similarly, I think it is a sign of confusion to ask, in the context of a broadly functionalist account of attitudes, which of various candidates really are the objects of an agent's beliefs.
To be fair, King raises his objection in a different context. King assumes that we should countenance a unique domain of "propositions" that are (1) denoted by English that-clauses, (2) ultimate bearers of truth-value, (3) objects of entailment, (4) objects of belief, (5) objects of assertion, and many other things. I have never seen a good explanation of why a single kind of entity should play all these roles. Worse, King's assumption arguably makes it impossible to explain propositional attitudes in terms of non-intentional facts. If believing-that-p is not a primitive property, but rather determined by lower-level physical, chemical, and biological facts -- such as facts about behavioural dispositions -- I don't see how those facts could single out a unique entity as the proposition p.
"I have never seen a good explanation of why a single kind of entity should play all these roles."
How about just general theoretical virtues? If a whole bunch of phenomena can be explained by positing a single kind of entity, then that sounds like a reason to think that that kind of entity exists.