Access, safety, and sensitivity

A common worry about mathematical platonism is how we could know about an independent realm of mathematical facts. The same kind of worry arises for moral realism: if there are irreducible moral facts, how could we have access to them?

Benacerraf (1973) put the problem in terms of causation. Knowledge of maths, he suggested, would require some kind of causal connection between the mathematical facts and our mathematical beliefs, but modern platonists typically don't believe in such a connection.

Causal accounts of knowledge have fallen out of favour. Nowadays, the problem is more commonly put in terms of safety or sensitivity (see, for example, Clarke-Doane (2020, ch.5)). Sensitivity requires that if the relevant (moral or mathematical) facts had been different then our beliefs would have been different. Safety requires that our belief-forming methods could not easily have produced false beliefs.

I don't think safety or sensitivity, or even a causal connection, would be sufficient to solve the access problem. Nor do I think any of them is necessary.

Let's assume there are irreducible moral or mathematical facts. It follows that one can coherently imagine different ways the moral or mathematical realm might be. I assume that one can also coherently assign different probabilities to these possibilities.

In the absence of any relevant evidence, an agent might start out with an open-minded prior. Let's say you're a rational agent who initially gives equal prior probability to P and ¬P, where P is some non-trivial moral or mathematical proposition. Then you engage in moral or mathematical inquiry. The problem is to explain how this could make you rationally confident in P.

What might happen in the course of your inquiry? You might find that P is entailed by other propositions that you believe. But then your initial state was probabilistically incoherent: your credence in P must be at least as high as your credence in the conjunction of propositions that entail it, and since you believe that conjunction, this is more than 1/2. I've stipulated that your initial state was coherent.

Alternatively, you might find that believing P is part of the best "reflective equilibrium" based on your initial belief state. This, too, is hard to square with your initial coherence: a coherent belief state contains no inconsistencies, so there is nothing for the equilibrium process to resolve.

You might find that you have an inclination to judge that P is true. You might treat this inclination as evidence for P. Or you might treat it as the starting point for a different process of reflective equilibrium, one that begins not just with your prior belief state but with that state combined with your "intuitions" or inclinations to judge. Either way, the process is rational only if your intuition that P can be treated as evidence for P. So let's focus on this.

Let E be the proposition that you have an intuition that P. E is evidence for P, relative to your prior beliefs, iff the prior probability of E given P is greater than the prior probability of E given ¬P. This is (I think) what the access problem boils down to: You can rationally come to believe moral or mathematical facts iff your rational prior links those facts with ordinary facts that you may observe.
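
To spell out this biconditional (a standard Bayesian derivation, writing Pr for the rational prior): by Bayes' theorem,

\[
\Pr(P \mid E) = \frac{\Pr(E \mid P)\,\Pr(P)}{\Pr(E \mid P)\,\Pr(P) + \Pr(E \mid \neg P)\,\Pr(\neg P)},
\]

which with the open-minded prior Pr(P) = Pr(¬P) = 1/2 simplifies to

\[
\Pr(P \mid E) = \frac{\Pr(E \mid P)}{\Pr(E \mid P) + \Pr(E \mid \neg P)}.
\]

This exceeds 1/2 just in case Pr(E | P) > Pr(E | ¬P). If, say, Pr(E | P) = 0.8 and Pr(E | ¬P) = 0.2, your intuition lifts your credence in P from 0.5 to 0.8.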

The relevant ordinary facts may be facts about your own intuitions, or they may be mundane facts about other events in your environment. If, for example, you assign low prior probability to the hypothesis that people in your community engage in acts that are morally wrong, then observing what they do provides evidence about the moral facts (as McGrath (2019, ch.4) points out).

What we need is a conception of rationality that allows agents to have priors linking the observed facts E with the moral or mathematical hypothesis P. A causal connection between our beliefs and the moral or mathematical facts is not necessary. Nor would it help.

Suppose there were such a causal connection. Suppose that whenever we contemplate a moral or mathematical question, we are (unbeknownst to us) directly caused to have a belief in the true answer. Our beliefs would then be safe and sensitive. But they would not be rational. Nor would they be justified. Nor would they be knowledge. We would have no reason to trust them.

What if the causal link were more indirect and ordinary? Suppose the moral or mathematical facts directly cause an "intuition", which in turn causes a belief. It doesn't matter. The resulting belief is rational iff the rational prior probability of the intuition given the relevant facts is greater than the prior probability of the intuition given the negation of those facts. If it is not, we will have a belief "based on" evidence that we ourselves (rationally) regard as irrelevant to the belief's truth. Such a belief won't be rational, and it won't amount to knowledge.
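
To illustrate with made-up numbers: suppose the facts reliably cause the intuition, but your rational prior treats the intuition as equally likely either way, so that Pr(E | P) = Pr(E | ¬P) = 0.5. Then

\[
\Pr(P \mid E) = \frac{0.5 \times 0.5}{0.5 \times 0.5 + 0.5 \times 0.5} = 0.5 = \Pr(P).
\]

By your own lights, the intuition is evidentially inert, however reliable the causal chain behind it.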

Benacerraf, Paul. 1973. “Mathematical Truth.” Journal of Philosophy 70: 661–80.
Clarke-Doane, Justin. 2020. Morality and Mathematics. Oxford University Press.
McGrath, Sarah. 2019. Moral Knowledge. Oxford University Press.
