An evil scientist might have built a brain in a vat that has all the
experiences you currently have. On the basis of your experiences, you
cannot rule out being that brain in a vat. But you can rule out
being that scientist. In fact, being that scientist is
not a skeptical scenario at all. For example, if the scientist in question
suspects that she is a scientist building a brain in a vat, that
suspicion would not constitute a skeptical attitude.
Decision theoretic representation theorems show that one can read
off an agent's probability and utility functions from their
preferences, provided the latter satisfy certain minimal rationality
constraints. More substantive rationality constraints should therefore
translate into further constraints on preference. What do these
constraints look like?
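(To get a feel for the "reading off", here is a toy illustration; the numbers and the assumption of utility linear in money are mine, purely for illustration. Suppose an agent is indifferent between receiving $5 for sure and receiving $10 if it rains and $0 otherwise. If her utility is linear in money, with U($0) = 0, her indifference requires P(rain)U($10) = U($5), and so P(rain) = 0.5. Representation theorems generalise this kind of inference to full probability and utility functions.)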
Here are a few steps towards an answer for one particular
constraint: a simple form of the Principal Principle. The Principle
states that if cr is a rational credence function and ch=p is the
hypothesis that p is the chance function, then for any E in the domain
of p,

cr(E / ch=p) = p(E).
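For instance (a toy case of mine): if ch=p says that a certain coin toss has a chance of 1/2 of landing heads, then a rational agent's credence in heads, conditional on ch=p, should be 1/2: cr(heads / ch=p) = p(heads) = 1/2.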
In On the Plurality of Worlds, Lewis argues that any account
of what possible worlds are should explain why possible worlds
represent what they represent. I am never quite sure what to make of
this point. On the one hand, I have sympathy for the response that
possible worlds are ways things might be; they are not things
that somehow need to encode or represent how things might be. On the
other hand, I can (dimly) see Lewis's point: if we have in our
ontology an entity called 'the possibility that there are talking
donkeys', surely the entity must have certain features that make it
deserve that name. In other words, there should be an answer to the
question why this particular entity X, rather than that other entity
Y, is the possibility that there are talking donkeys.
Noam Chomsky's New Horizons in the Study of Language and Mind
contains a famous passage about London.
Referring to London, we can be talking about a location or
area, people who sometimes live there, the air above it (but not too
high), buildings, institutions, etc., in various combinations (as in
'London is so unhappy, ugly, and polluted that it should be destroyed
and rebuilt 100 miles away', still being the same city). Such terms as
'London' are used to talk about the actual world, but there neither
are nor are believed to be things-in-the-world with the properties of
the intricate modes of reference that a city name
encapsulates. (p.37)
I don't know what Chomsky is trying to say here, but there is
something in the vicinity of his remark that strikes me as true and
important. The point is that the reference of 'London' is a complex
and subtle matter that is completely obscured when we say that
'London' refers to London.
Every now and then, I come across a link to a paper on academia.edu that looks interesting. I
myself don't have an account on academia.edu, and I don't want
one. This means that in order to look at the paper, I have to go
through the following process.
- I click "Download (pdf)".
- I get confronted with the message: "You must be logged in to
download". I can choose to "connect" with Facebook or Google
or create an account manually.
- I choose the third option, since I don't want academia.edu to
access my Google profile (and I don't have a Facebook account).
- Now I have to fill in a form asking for "First Name", "Last Name",
"Email" and "Password". I enter random expletives in all the fields
because I don't want an academia account; I just want to see the
bloody paper.
- After submitting that form, I get asked whether I have coauthored
a paper in a peer-reviewed journal. I choose "No", fearing that
otherwise I'll have to answer more questions about those papers.
- Next I'm asked to upload my papers. I don't want to upload any
papers, so I click "Skip this step".
- Next I have to fill in my university affiliation: "University",
"URL", "Department", "Position". I enter random expletives.
- Next comes a form where I have to enter my "Research Interests". I
enter some expletives. (Turns out my expletives are a popular research
interest, shared with 32 others.)
- Next I'm told again to "connect" with Facebook, even though I
already chose not to at the start. I click "I don't have a Facebook
account".
- Now, finally, I am presented with a link to the paper I
wanted to have a look at.
As you can imagine, I rarely go through all that hassle. Usually I
look around to see if I can find the paper somewhere else, and give up if I
can't.
Given some evidence E and some proposition P, we can ask to what
extent E supports P, and thus to what extent an agent should believe P
if their only relevant evidence is E. The question may not always have
a precise answer, but there are both intuitive and theoretical reasons
to assume that the question is meaningful – that there is a kind
of (imprecise) "evidential probability" conferred by evidence on
propositions. That's why it makes sense to say, for example, that one
should proportion one's beliefs to one's evidence.
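A toy illustration (mine, not tied to any particular account of evidential probability): if your only relevant evidence E is that a ball has been drawn from an urn containing 90 red and 10 blue balls, then E plausibly confers a probability of around 90/100 = 0.9 on the proposition P that the drawn ball is red, and proportioning your beliefs to your evidence means having a credence of about 0.9 in P.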
In 2008, I wrote a post on Stalnaker on self-location,
in which I attributed a certain position to Stalnaker and raised some
objections. But the position isn't actually Stalnaker's. (It might be
closer to Chisholm's.) So here is another attempt at figuring out
Stalnaker's view. (I'm mostly drawing on chapter 3 of Our Knowledge
of the Internal World (2008), chapter 5 of Context (2014),
and a forthcoming paper called "Modeling a perspective on the world"
(2015).)
In "Ramseyan
Humility", Lewis argues for a thesis he calls "Humility". He never
quite says what that thesis is, but its core seems to be the claim
that our evidence can never rule out worlds that differ from actuality
merely by swapping around fundamental properties. Lewis's argument, on
pp.205-207, is perhaps the most puzzling argument he ever gave.
Lewis begins with some terminology.
In The Logic of Decision, Richard Jeffrey pointed out that
the desirability (or "news value") of a proposition can be usefully
understood as a weighted average of the desirability of different ways
in which the proposition can be true, weighted by their respective
probabilities. That is, if A and B are incompatible propositions,
then
(1) Des(AvB) = Des(A)P(A/AvB) + Des(B)P(B/AvB).
So desirabilities are affected by probabilities. If you prefer A
over B and have just found out that, conditional on their disjunction, A is
more likely than B, then the desirability of the disjunction goes
up. That seems right.
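To make the effect concrete, here is a minimal numerical sketch of (1) in Python; the function name and the particular values (Des(A)=10, Des(B)=2) are made up for illustration.

```python
# Toy check of Jeffrey's equation (1), with made-up numbers.
# For incompatible A and B, Des(AvB) is the probability-weighted
# average of Des(A) and Des(B).

def des_disjunction(des_a, des_b, p_a_given_avb):
    """Return Des(AvB) = Des(A)P(A/AvB) + Des(B)P(B/AvB)."""
    p_b_given_avb = 1 - p_a_given_avb  # A and B are incompatible
    return des_a * p_a_given_avb + des_b * p_b_given_avb

# Suppose you prefer A to B: Des(A) = 10, Des(B) = 2.
print(des_disjunction(10, 2, 0.5))  # 6.0
# If A turns out to be more likely than B given their disjunction,
# the desirability of the disjunction goes up:
print(des_disjunction(10, 2, 0.8))  # 8.4
```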
Superficially, modal auxiliaries such as 'must', 'may', 'might', or
'can' seem to be predicate operators. So it is tempting to interpret
them as functions from properties to properties: just as 'Alice jumps'
attributes to Alice the property of jumping, 'Alice can jump'
attributes to her the property of being able to jump, 'Alice may jump'
attributes to her the property of being allowed to jump, and so on.
Perhaps the biggest obstacle to this approach comes from quantified
constructions. If 'Alice may jump' attributes to Alice the property of
being allowed to jump, then 'one of us may jump' should say that one
of us has the property of being allowed to jump. But while this is one
possible reading of the sentence, 'one of us may jump' also has a
reading on which it states that it is permissible that one of us
jumps. There is a kind of de re/de dicto ambiguity here, which
suggests that 'may' can apply not only to properties but also to
propositions.
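To make the ambiguity explicit, the two readings can be regimented roughly as follows (the regimentation is mine, just for illustration):

(de re) For some x among us: May(jump)(x), i.e. someone among us has the property of being allowed to jump.
(de dicto) May(for some x among us: x jumps), i.e. it is permissible that someone among us jumps.

On the first reading 'may' operates on a property; on the second it operates on a proposition.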