Suppose we are testing statistical models of some physical process
-- a certain type of coin toss, say. One of the models in question
holds that the probability of heads on each toss is 1/2; another holds
that the probability is 1/4. We set up a long run of trials and
observe about 50 percent heads. One would hope that this confirms the
model according to which the probability of heads is 1/2 over the
alternative.
(Subjective) Bayesian confirmation theory says that some evidence E
supports some hypothesis H for some agent to the extent that the
agent's rational credence C in the hypothesis is increased by the
evidence, so that C(H/E) > C(H). We can now verify that observing
about 500 heads in 1000 tosses strongly confirms the hypothesis that
the coin is fair, as follows.
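Here is a minimal sketch of that verification in Python. The concrete
numbers (500 heads in 1000 tosses, a 50/50 prior over the two models)
are illustrative assumptions:

    from math import comb

    def likelihood(p, heads=500, tosses=1000):
        # probability of exactly `heads` heads in `tosses` independent
        # tosses, if the per-toss chance of heads is p
        return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

    prior_fair, prior_biased = 0.5, 0.5   # prior credence in each model
    posterior_fair = prior_fair * likelihood(0.5) / (
        prior_fair * likelihood(0.5) + prior_biased * likelihood(0.25))
    print(posterior_fair)  # ~1.0: credence in the fair-coin model

The posterior credence in the 1/2 model is practically 1, so
C(H/E) > C(H) holds by a wide margin.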
Most programming languages have conditional operators that combine a
(boolean) condition and two singular terms into a singular term. For
example, in Python the expression
'hi' if 2 < 7 else 'hello'
is a singular term whose value is the string 'hi' (because 2 < 7). In
general, the expression
x if p else y
denotes x in case p is true and otherwise y. So, for example, the
expression 'hi' if 7 < 2 else 'hello' denotes the string 'hello'
(because 7 < 2 is false).
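In runnable form (the function name describe below is mine, purely for
illustration):

    def describe(n):
        # the conditional expression serves as the returned singular term
        return 'even' if n % 2 == 0 else 'odd'

    print(describe(4))                  # prints: even
    print('hi' if 2 < 7 else 'hello')   # prints: hi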
Time-slice epistemology is the idea that epistemic norms are
history-independent: whether an agent at a time satisfies an epistemic
norm is always determined by the agent's state at that time,
irrespective of the agent's earlier states.
One motivation for time-slice epistemology is a kind of
internalism, the intuition that agents should not be epistemically
constrained by things that are not "accessible" at the relevant
time. Plausibly, an agent's earlier beliefs are not always accessible
in the relevant sense. If yesterday you learned that yew berries are
poisonous but have since forgotten that piece of information, it seems
odd to demand that your current beliefs and actions should
nevertheless be constrained by the lost information.
Fred has bought a duplication machine at a discount from a series
in which 50 percent of all machines are broken. If Fred's machine
works, it will turn Fred into two identical copies of himself, one
emerging on the left, the other on the right. If Fred's machine is
broken, he will emerge unchanged and unduplicated either on the left
or on the right, but he can't predict where. Fred enters his machine,
briefly loses consciousness and then finds himself emerging on the
left. In fact, his machine is broken and no duplication event has
occurred, but Fred's experiences do not reveal this to him.
An evil scientist might have built a brain in a vat that has all the
experiences you currently have. On the basis of your experiences, you
cannot rule out being that brain in a vat. But you can rule out
being that scientist. In fact, being that scientist is
not a skeptical scenario at all. For example, if the scientist in
question suspects that she is a scientist building a brain in a vat,
that suspicion would not constitute a skeptical attitude.
Decision theoretic representation theorems show that one can read
off an agent's probability and utility functions from their
preferences, provided the latter satisfy certain minimal rationality
constraints. More substantive rationality constraints should therefore
translate into further constraints on preference. What do these
constraints look like?
Here are a few steps towards an answer for one particular
constraint: a simple form of the Principal Principle. The Principle
states that if cr is a rational credence function and ch=p is the
hypothesis that p is the chance function, then for any E in the domain
of p, cr(E / ch=p) = p(E).
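To see what the Principle requires in a concrete case, here is a small
Python sketch; the setup (two chance hypotheses for a coin toss, a
uniform prior over them) is my own illustrative assumption:

    # credences over pairs of a chance hypothesis and a toss outcome;
    # prior credence is split evenly between chance-of-heads 1/2 and 1/4
    priors = {0.5: 0.5, 0.25: 0.5}                        # cr(ch=p)
    cr = {(p, 'H'): priors[p] * p for p in priors}
    cr.update({(p, 'T'): priors[p] * (1 - p) for p in priors})

    def cr_given_chance(outcome, p):
        # credence in an outcome conditional on the hypothesis ch=p
        return cr[(p, outcome)] / (cr[(p, 'H')] + cr[(p, 'T')])

    assert cr_given_chance('H', 0.5) == 0.5     # cr(E / ch=p) = p(E)
    assert cr_given_chance('H', 0.25) == 0.25

Conditionalizing on a chance hypothesis brings the credence in heads
into line with the hypothesized chance, exactly as the Principle
demands.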
In On the Plurality of Worlds, Lewis argues that any account
of what possible worlds are should explain why possible worlds
represent what they represent. I am never quite sure what to make of
this point. On the one hand, I have sympathy for the response that
possible worlds are ways things might be; they are not things
that somehow need to encode or represent how things might be. On the
other hand, I can (dimly) see Lewis's point: if we have in our
ontology an entity called 'the possibility that there are talking
donkeys', surely the entity must have certain features that make it
deserve that name. In other words, there should be an answer to the
question why this particular entity X, rather than that other entity
Y, is the possibility that there are talking donkeys.
Noam Chomsky's New Horizons in the Study of Language and Mind
contains a famous passage about London.
Referring to London, we can be talking about a location or
area, people who sometimes live there, the air above it (but not too
high), buildings, institutions, etc., in various combinations (as in
'London is so unhappy, ugly, and polluted that it should be destroyed
and rebuilt 100 miles away', still being the same city). Such terms as
'London' are used to talk about the actual world, but there neither
are nor are believed to be things-in-the-world with the properties of
the intricate modes of reference that a city name
encapsulates. (p.37)
I don't know what Chomsky is trying to say here, but there is
something in the vicinity of his remark that strikes me as true and
important. The point is that the reference of 'London' is a complex
and subtle matter that is completely obscured when we say that
'London' refers to London.
Every now and then, I come across a link to a paper on academia.edu that looks interesting. I
myself don't have an account on academia.edu, and I don't want
one. This means that in order to look at the paper, I have to go
through the following process.
- I click "Download (pdf)".
- I get confronted with the message: "You must be logged in to
download". I can choose to "connect" with Facebook or Google
or create an account manually.
- I choose the third option, since I don't want academia.edu to
access my Google profile (and I don't have a Facebook account).
- Now I have to fill in a form asking for "First Name", "Last Name",
"Email" and "Password". I enter random expletives in all the fields
because I don't want an academia account, I just want to see the
bloody paper.
- After submitting that form, I get asked whether I have coauthored
a paper in a peer-reviewed journal. I choose "No", fearing that
otherwise I'll have to answer more questions about those papers.
- Next I'm asked to upload my papers. I don't want to upload any
papers, so I click "Skip this step".
- Next I have to fill in my university affiliation: "University",
"URL", "Department", "Position". I enter random expletives.
- Next comes a form where I have to enter my "Research Interests". I
enter some expletives. (Turns out my expletives are a popular research
interest, shared with 32 others.)
- Next I'm told again to "connect" with Facebook, even though I
already chose not to at the start. I click "I don't have a Facebook
account".
- Now, finally, I am presented with a link to the paper I
wanted to have a look at.
As you can imagine, I rarely go through all that hassle. Usually I
look around to see if I can find the paper somewhere else, and give up
if I can't.
Given some evidence E and some proposition P, we can ask to what
extent E supports P, and thus to what extent an agent should believe P
if their only relevant evidence is E. The question may not always have
a precise answer, but there are both intuitive and theoretical reasons
to assume that the question is meaningful -- that there is a kind
of (imprecise) "evidential probability" conferred by evidence on
propositions. That's why it makes sense to say, for example, that one
should proportion one's beliefs to one's evidence.