Another nice
problem from Brian Weatherson's weblog: Farrington is 50% confident
that it's after 4:30, and 50% confident that a certain coin
landed tails. Now he comes to know that iff the coin landed tails, some
researchers create a brain-in-a-vat duplicate of himself at exactly 4:30
today. What are the probabilities he should assign to the 5 open
possibilities?
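Not an answer to the puzzle, but a sketch of the hypothesis space, assuming (my assumption, not the post's) that the five possibilities are the centred combinations below. The credence rule in the code is just one illustrative option: keep 1/4 on each (time, coin) cell and split the tails-and-after-4:30 cell between original and vat duplicate.

    # Illustrative only: the enumeration and the splitting rule are
    # assumptions, not the intended solution to the puzzle.
    possibilities = [
        ("before 4:30", "heads", "original"),
        ("before 4:30", "tails", "original"),   # the duplicate doesn't exist yet
        ("after 4:30",  "heads", "original"),
        ("after 4:30",  "tails", "original"),
        ("after 4:30",  "tails", "duplicate"),
    ]

    def credences(split=0.5):
        """Spread 1/4 over each (time, coin) cell; the one cell that contains
        two centred possibilities gets divided according to `split`."""
        cred = {}
        for time, coin, who in possibilities:
            p = 0.25
            if (time, coin) == ("after 4:30", "tails"):
                p *= split if who == "original" else 1 - split
            cred[(time, coin, who)] = p
        return cred

    for poss, p in sorted(credences().items()):
        print(poss, p)   # five credences summing to 1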
This appears to be a problem for Lewis' theories of causation:
Let A, B, C, D be any events such that B depends counterfactually on A, and D
on C. Now consider the conjunction (fusion) B+C of B and C. If A had not
occurred, B+C would not have occurred. For then B would not have occurred,
and presumably B+C can't happen without B. And if B+C had not occurred, C
would not have occurred either, so (unless the absence of B has some
surprising effects on D), D would not have occurred. Hence there is a
chain of counterfactual dependence between A and D. But since A, B, C, D were
arbitrary, this means that every cause causes every effect.
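Schematically, the steps are these (a restatement of the argument above, writing $\neg X$ for "X does not occur" and $\Box\!\!\rightarrow$ for the counterfactual conditional):

\[
\begin{aligned}
&(\mathrm{i})\ \ \neg A \mathrel{\Box\!\!\rightarrow} \neg B && \text{(B depends counterfactually on A)}\\
&(\mathrm{ii})\ \ \neg C \mathrel{\Box\!\!\rightarrow} \neg D && \text{(D depends counterfactually on C)}\\
&(\mathrm{iii})\ \ \neg A \mathrel{\Box\!\!\rightarrow} \neg(B{+}C) && \text{(from (i), since B+C can't occur without B)}\\
&(\mathrm{iv})\ \ \neg(B{+}C) \mathrel{\Box\!\!\rightarrow} \neg C && \text{(the claim above)}\\
&(\mathrm{v})\ \ \neg(B{+}C) \mathrel{\Box\!\!\rightarrow} \neg D && \text{(from (iv) and (ii), barring surprising effects of B's absence on D)}
\end{aligned}
\]

(iii) and (v) give a two-step chain of counterfactual dependence from A via B+C to D, which on Lewis's account is enough for A to cause D.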
Today I found Montague's paper, and it turns out that I was
wrong. Well, Field's presentation was not entirely correct: We
shouldn't take Robinson arithmetic itself as R, but some extension of it
that contains an additional primitive predicate "True" (T, for short). The extension need
not say anything about this predicate. This is why T needn't represent
truth in R. (If R says nothing about T, T either represents nothing at
all or the inconsistent property, depending on how precisely we define
representation.) Montague then shows, very much like Field, that any
theory that contains R -- whether or not it is axiomatizable -- as well
as every instance of
So I've started to actually read Field's papers. Unfortunately I already
got stuck on page 4 of "The Semantic Paradoxes and the Paradoxes
of Truth". Field there discusses the following restriction of the
naive truth schema:
T**) If True(p) then p.
He notes that this is rather weak, since it doesn't even imply that there
are any truths at all (every instance holds vacuously if "True" applies to
nothing). Hence, he says, one would presumably add principles like
I've been busy working on the logic book, playing with music software,
meeting friends, lazing around, looking after Magdalena (who was ill again), protesting
against the war, and thinking that it was a good
idea to have voted for Livingstone (as I did when I lived in London).
I hope to get back to philosophy soon.
I'm thinking about how to introduce the semantics of predicate logic to
beginning philosophy students. In particular, I'm interested in the
interpretation of predicates and quantifiers. Last year in logic class, it
seemed that most students were rather unhappy with the formal recursive
definition of truth we were teaching them.
So I've just picked 15 random logic textbooks to see how they are doing
it.
Group 1 (functions and sets): Interpretations are introduced as
entities that assign to each n-ary predicate symbol a class of n-tuples
of elements of the domain. (Machover, Beckermann, Bostock, Newton-Smith,
Mendelson, Kutschera, Allen/Hand, Bühler)
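For what it's worth, here is a toy version of the Group 1 approach (my own illustration in Python, not taken from any of these textbooks): an interpretation is just a domain together with an assignment of sets of tuples to predicate letters, and truth is computed by the usual recursion.

    # Toy "Group 1" semantics (illustration only; the names and the formula
    # encoding are mine, not from any of the textbooks above).
    # An interpretation assigns each n-ary predicate letter a set of
    # n-tuples of domain elements.

    domain = {"a", "b", "c"}

    interpretation = {
        "F": {("a",), ("b",)},             # unary: the F's
        "R": {("a", "b"), ("b", "c")},     # binary: the pairs standing in R
    }

    def satisfies(formula, assignment):
        """Truth by recursion on formulas. Formulas are tuples:
        ("atom", P, v1, ...), ("not", f), ("and", f, g),
        ("all", v, f), ("some", v, f)."""
        kind = formula[0]
        if kind == "atom":
            pred, args = formula[1], formula[2:]
            return tuple(assignment[v] for v in args) in interpretation[pred]
        if kind == "not":
            return not satisfies(formula[1], assignment)
        if kind == "and":
            return satisfies(formula[1], assignment) and satisfies(formula[2], assignment)
        if kind in ("all", "some"):
            var, body = formula[1], formula[2]
            test = all if kind == "all" else any
            return test(satisfies(body, {**assignment, var: d}) for d in domain)
        raise ValueError("unknown formula: %r" % (formula,))

    # "Every F bears R to something", i.e. Ax(Fx -> Ey Rxy), with "->"
    # spelled out via "not" and "and":
    sentence = ("all", "x",
                ("not", ("and", ("atom", "F", "x"),
                         ("not", ("some", "y", ("atom", "R", "x", "y"))))))
    print(satisfies(sentence, {}))         # prints: True

The quantifier clauses are the same recursion on truth the students were unhappy with, just run on a three-element domain.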
Hereby I stipulate that "fb13" is to denote the first human born in the
13th century. Hence it might seem that "fb13 was born in the 13th
century" is analytically true, true by definition. But if analytic truths
are closed under logical implication, "somebody was born in the 13th
century" would also be analytically true. Which it is not.
I don't think tinkering with closure under logical implication will help.
Hereby I stipulate that "fb23" is to denote the first human born in the
23rd century. However, if recent progress in civilization continues, there
might well be no humans in the 23rd century. And if no humans are born in
the 23rd century, "fb23 is a human born in the 23rd century" is false. So
it cannot be true by definition.
In my last posting, I argued that to escape the cardinality problem
for thoughts Frege perhaps has to give up
1) For any things there is at least one concept under which all and only
those things fall.
Now (1) is clearly false if, as I think, all there is are objects --
that is, if it makes sense to quantify over absolutely everything. But if
not, as Frege thinks, denying (1) is not an option. A concept is a
function from things to truth values. Given that functions are not
themselves things, how could there fail to be such functions?
A while ago, I was discussing Adam Rieger's alleged paradox in Frege's
ontology (here, here, and here). I'm still confident that the Russellian
version of the paradox can be blocked. But on second thought, the
cardinality version of the paradox appears to be much more difficult. Here
it is again.
1) For any things there is at least one concept under which all and only
those things fall.
2) For each of these concepts, there exists the thought that Ben Lomond
falls under it.
3) All these thoughts are different.
4) All thoughts are objects.
From (1)-(3) it follows that there are more thoughts than objects (at least
2^k thoughts if k is the number of objects), contradicting (4).
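To make the counting step explicit (my reconstruction of the Cantorian arithmetic behind (1)-(3), not a quotation):

\[
X \;\longmapsto\; C_X \;\longmapsto\; T_X \qquad \text{(collection of objects $\to$ concept $\to$ thought)}
\]

By (1) every collection $X$ of objects gets a concept $C_X$ under which exactly the members of $X$ fall; by (2) there is the thought $T_X$ that Ben Lomond falls under $C_X$; and by (3) distinct collections yield distinct thoughts, so the map $X \mapsto T_X$ is injective. With $k$ objects there are $2^k$ such collections, hence at least $2^k$ thoughts, and Cantor's theorem gives $2^k > k$. But if, as (4) says, every thought is an object, there can be at most $k$ thoughts.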
Ulrich Blau is professor of logic in Munich. For the last 30 years or so he's been working on an enormous book in which he solves all known and several unknown problems in logic, the foundations of mathematics, and philosophy in general. If you ever come across that book (it's not published yet), I'd strongly recommend you just skip the non-technical introduction (and conclusion). It gets much better once the formulas begin. Anyway, in the introductory chapter I found a silly question that I had discovered myself back in school:
Does this question have an answer?
(In Blau's version, it goes "Can you answer the question you are now reading either affirmatively or negatively?")