How much can you say about the world in purely logical terms? In
first-order logic with identity, one can construct formulas like
'(Ex)(Ey)~(x=y)'. But arguably, this doesn't yet mean anything. As we
learned in intro logic, formulas of first-order logic have no fixed
interpretation; they mean something only once we provide a domain of
quantification and an assignment of values to predicate and function
symbols. As it happens, '(Ex)(Ey)~(x=y)' doesn't contain any
non-logical predicate or function symbols, so to make it mean
anything we just need to specify a domain of quantification. For
example, if the domain is the class of Western black rhinos, then the
formula says that there are at least two Western black rhinos.
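In general, given any domain D, the formula simply says that D contains at least two members:

\[
(\exists x)(\exists y)\,\neg(x = y) \ \text{ is true relative to } D \iff |D| \geq 2.
\]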
You can't predict the stock market by looking at tea leaves. If an
episode of looking at tea leaves makes you believe that the stock
market will soon collapse, then -- assuming your previous beliefs did
not support the collapse hypothesis, nor the hypothesis that tea
leaves predict the stock market -- your new belief is unjustified and
irrational. So there are epistemic norms for how one's opinions may
change through perceptual experience.
Such norms are easily accounted for in the traditional Bayesian
picture where each perceptual experience is associated with an
evidence proposition E on which any rational agent should condition
when they have the experience. But what if perceptual experiences
don't confer absolute certainty on anything? Jeffrey pointed out that
if there is a partition of propositions { E_i } = E_1,...,E_n such
that (1) an experience changes their probabilities to some values {
p_i } = p_1,...,p_n, and (2) the experience does not affect the
probabilities conditional on any member of the partition, then the new
probability assigned to any proposition A is the weighted average of
the old probabilities of A conditional on the members of the partition,
weighted by the new probabilities of those members. This rule is often
called "Jeffrey conditioning" and sometimes "generalised
conditioning", but unlike standard conditioning it isn't a dynamical
rule at all: it is a simple consequence of the probability
calculus. To get genuine epistemic norms on the dynamics of belief
through perceptual experience, Jeffrey's rule must be supplemented
with a story about how a given experience, perhaps together with an
agent's previous belief state, may fix the partition { E_i } and
values { p_i } that determine a Jeffrey update. This is the "input
problem" for Jeffrey conditioning.
Suppose a rational agent makes an observation, which changes the
subjective probability she assigns to a hypothesis H. In this case,
the new probability of H is usually sensitive to both the observation
and the prior probability. Can we factor out the prior probability to
get a measure of how the experience bears on the probability of H,
independently of the prior probability?
A common answer, going back to Alan Turing and I. J. Good, is to use
Bayes factors. The Bayes factor B(H) for H is the ratio
(P'(H)/P'(not-H))/(P(H)/P(not-H)) of new odds on H to old odds. Thus
the new odds on H are the old odds multiplied by the Bayes factor. For
example, if the prior credence in H was 0.25 and the posterior is 0.5,
then the odds on H changed from 1:3 to 1:1, and so the Bayes factor of
the update is 3. The same Bayes factor would characterise an update
from probability 0.01 to about 0.03 (odds 1:99 to 1:33) or from 0.9 to
about 0.96 (odds 9:1 to 27:1).
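To make the arithmetic explicit for the second example: old odds of 1:99 multiplied by the Bayes factor 3 give new odds of 3:99 = 1:33, and converting odds back into probability,

\[
P'(H) \;=\; \frac{1}{1+33} \;=\; \frac{1}{34} \;\approx\; 0.03 .
\]

The third example works the same way: odds of 9:1 become 27:1, i.e. P'(H) = 27/28 ≈ 0.96.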
Dilip Ninan has also argued on a number of occasions that attitude
contents cannot in general be modelled by sets of qualitative centred
worlds; see especially his "Counterfactual
attitudes and multi-centered worlds" (2012). The argument is
based on an alleged problem for the centred-worlds account applied to what he
calls "counterfactual attitudes", the prime example being imagination.
Since the problem concerns the analysis of attitudes de re,
we first have to briefly review what the centred-worlds account might
say about this. Consider a de re belief report "x believes that y is
F". Whether this is true depends on what x believes about y, but if
belief contents are qualitative, we cannot simply check whether y is F
in x's belief worlds. We first have to locate y in these
qualitative scenarios. A standard idea, going back to Quine, Kaplan
and Lewis, is that the belief report is true iff there is some
"acquaintance relation" Q such that (i) x is Q-related uniquely to y
and (ii) in x's belief worlds, the individual at the centre is
Q-related to an individual that is F. For example, if Ralph sees
Ortcutt sneaking around the waterfront, and believes that the guy
sneaking around the waterfront is a spy, then Ralph believes de re of
Ortcutt that he is a spy.
If we want to model rational degrees of belief as probabilities,
the objects of belief should form a Boolean algebra. Let's call the
elements of this algebra propositions and its atoms (or
ultrafilters) worlds. Every proposition can be represented as a
set of worlds. But what are these worlds? For many applications, they
can't be qualitative possibilities about the universe as a whole, since
this would not allow us to model de se beliefs. A popular
response is to identify the worlds with triples of a possible universe,
a time and an individual. I prefer to say that they are maximally
specific properties, or ways a thing might be. David Chalmers (in
discussion, and in various papers, e.g. here and there) objects that
these accounts are not fine-grained enough, as revealed by David
Austin's "two tubes" scenario. Let's see.
Luc Bovens and Wlodek Rabinowicz (2010
and 2011)
present the following puzzle:
Three people are each given a hat to put on in the
dark. The hats' colours, either black or white, have been decided by
three independent tosses of a fair coin. Then the light goes on and
everyone can see the hats of the two others, but not their own. All of
this is common knowledge in the group.
Let's call the three players X, Y and Z. There are eight possible
distributions of hat colours, each with probability 1/8:
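Writing each possibility as the colours of X, Y and Z in that order (B for black, W for white), they are:

BBB, BBW, BWB, BWW, WBB, WBW, WWB, WWW.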
Allen Hazen (1979, pp. 328-330)
pointed out a problem for Lewis's counterpart-theoretic interpretation
of modal discourse: the fact that x is essentially R-related to y
should be compatible with the fact that both x and y have multiple
counterparts at some world, without all counterparts of x being
R-related to all counterparts of y. But the latter is what Lewis's
semantics requires for the truth of 'necessarily xRy'.
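Roughly, Lewis's translation scheme renders 'necessarily xRy' as the claim that, at every world, every counterpart of x bears R to every counterpart of y:

\[
\forall w\, \forall x'\, \forall y'\, \big( C(x',x,w) \wedge C(y',y,w) \rightarrow R(x',y') \big),
\]

where C(x',x,w) says that x' is a counterpart of x at world w (the notation is simplified, but this is the quantificational structure that Hazen's problem turns on).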
I'll begin with a strange consequence of the best system
account. Imagine that the basic laws of quantum physics are
stochastic: for each state of the universe, the laws assign
probabilities to possible future states. What do these probability
statements mean?
The best system account identifies chance with the probability
function that figures in whatever fundamental physical theory best
combines the virtues of simplicity, strength and fit, where fit is a
matter of assigning high probability to actual events. So when we say
that the chance of some radium atom decaying within the next 1600
years is 1/2, what we claim is true iff whatever fundamental theory
best combines the virtues of simplicity, strength and fit assigns
probability 1/2 to the mentioned outcome. As a piece of ordinary
language philosophy, this is not very plausible. For one thing, people
speak of chances even when it is assumed that the fundamental dynamics
is deterministic. Moreover, by ordinary usage, chances are logically
independent of actual frequencies, which is incompatible with the best
system account. Nevertheless, the account may be plausible as a
somewhat revisionary explication of one strand in the mess that is our
ordinary conception of chance.
It is well-known that humans don't conform to the model of rational
choice theory, as standardly conceived in economics. For example, the
minimal price at which people are willing to sell a good is often much
higher than the maximal price at which they would previously have been
willing to buy it. According to rational choice theory, the two prices
should coincide, since the outcome of selling the good is the same as
that of not buying it in the first place. What we philosophers call
'decision theory' (the kind of theory you find in Jeffrey's Logic
of Decision or Joyce's Foundations of Causal Decision
Theory) makes no such prediction. It does not assume that the
value of an act in a given state of the world is a simple function of
the agent's wealth after carrying out the act. Among other things, the
value of an act can depend on historical aspects of the relevant
state. A state in which you are giving up a good is not at all
the same as a state in which you aren't buying it in the first place,
and decision theory does not tell you that you must assign equal
value to the two results.