
From Chance to Credence

Lewis argues that any theory of chance must explain the Principal Principle, which says that if you know that the objective chance for a certain proposition is x, then you should give that proposition a credence close to x. Anyone who proposes to reduce chance to some feature X, say primitive propensities, must explain why knowledge of X constrains rational expectations in this particular way.
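
Spelled out a little (the formulation below is a close paraphrase of Lewis's, not a quotation): let C be a reasonable initial credence function, A any proposition, H the proposition that the chance of A (at the relevant time) is x, and E any further evidence that is admissible. Then

    C(A | H & E) = x.

Admissible evidence, roughly, is evidence whose bearing on A runs entirely through its bearing on A's chance.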

How does Lewis's own theory explain that?

On Lewis's theory, the chance of an event (or proposition) is the probability-value assigned to the event by the best theory. Those 'probability-values' are just numerical values: they are not hypothetical values for some fundamental property; they need not even deserve the name "probability". However, one requirement for good theories is that they assign high probability-values to true propositions. Other requirements for good theories are simplicity and strength. The best theory is the one that strikes the best compromise between all three requirements. So the question becomes: why should information that the best theory assigns probability-value x to a proposition constrain rational expectations in the way the Principal Principle says?

Knowledge of laws and knowledge of naturalness

Some accounts of laws of nature make it mysterious how we can empirically discover that something is a law.

The accounts I have in mind agree that if P is (or expresses) a law of nature, then P is true, but not conversely: not all truths are laws of nature. Something X distinguishes the laws from other truths; P is a law of nature iff P is both true and X. The accounts disagree about what to put in for X.

Many laws are general, and thus face the problem of induction. Limited empirical evidence can never prove that an unlimited generalization is true. But Bayesian confirmation theory tells us how and why observing evidence can at least raise the generalization's (ideal subjective) probability. The problem is that for any generalization there are infinitely many incompatible alternatives equally confirmed by any finite amount of evidence: whatever confirms "all emeralds are green" also confirms "all emeralds are grue"; for any finite number of points there are infinitely many curves fitting them all, etc. When we do science, we assign low prior probability to gerrymandered laws. We believe that our world obeys regularities that appear simple to us, that are simple to state in our language (including our mathematical language). Let's call those regularities "apparently simple", and the assumption that our world obeys apparently simple regularities "the induction assumption".
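
To make the tie vivid, here is a minimal sketch in Python (mine, not the entry's; the hypotheses, priors and likelihoods are made up for illustration). Both "all emeralds are green" and "all emeralds are grue" assign probability 1 to every pre-t observation of a green emerald, so conditionalizing on such observations moves their probabilities in lockstep; only the priors can separate them.

    # Bayes' theorem over a toy hypothesis space (all names are illustrative).
    def posterior(priors, likelihoods):
        joint = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(joint.values())
        return {h: p / total for h, p in joint.items()}

    # GREEN = "all emeralds are green", GRUE = "all emeralds are grue",
    # OTHER = a catch-all alternative. The numbers are made up.
    priors = {"GREEN": 0.25, "GRUE": 0.25, "OTHER": 0.5}

    # Evidence: an emerald observed before the critical time t looks green.
    # GREEN and GRUE both predict this with certainty; OTHER only half the time.
    likelihoods = {"GREEN": 1.0, "GRUE": 1.0, "OTHER": 0.5}

    print(posterior(priors, likelihoods))
    # {'GREEN': 0.333..., 'GRUE': 0.333..., 'OTHER': 0.333...}
    # Both generalizations are confirmed (0.25 -> 1/3), and by exactly the same amount.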

Time Traveler Convention

A time traveler convention will be held at MIT on May 7. Apparently the organizers have in mind a branching universe model of time travel, otherwise this makes no sense:

Can't the time travelers just hear about it from the attendees, and travel back in time to attend?

Yes, they can! In fact, we think this will happen, and the small number of adventurous time travelers who do attend will go back to their "home times" and tell all their friends to come, causing the convention to become a Woodstock-like event that defines humanity forever.

Anyway, suppose no time travelers from the future show up at the convention. Does that decrease your credence in the physical possibility of time travel? If so, would your credence decrease by the same amount if the convention were (now) set to take place in the past, say on May 7, 2004? After all, there's little point in announcing a time traveler convention in advance of the event.
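
For what it's worth, the update in question is just conditionalization. Writing T for "time travel is physically possible" and S for "some time traveler shows up at the convention" (the letters are mine), the updated credence is

    C(T | ~S) = C(~S | T) * C(T) / C(~S),

which is lower than C(T) just in case C(~S | T) < C(~S), that is, just in case you took a show-up to be more likely on the supposition that time travel is possible.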

Why we need more intensions

Suppose we want a theory that tells us, for every sentence of our language, in which possible contexts its utterance is true. Call these functions from contexts to truth values "A-intensions". A systematic theory should tell us how the A-intensions of complex sentences depend on their constituents. Here are some theories which are not very satisfactory in this respect.

Theory 1. Each sentence consists of a sentence radical and a fullstop. (The sentence radical is the entire sentence without the fullstop.) All sentence radicals have the same semantic value: God. The semantic value of the fullstop maps this semantic value to a truth-value. But whether it maps God to true or false depends on the context of utterance. For instance, in a context in which it doesn't rain and the utterance of "." is preceded by an utterance of "it rains", the value of "." maps God to false; in a context where "." is preceded by "2+2=4", it maps God to true; and so on.
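
A toy implementation (mine, not part of the entry; all names are made up) brings out the complaint: Theory 1 does deliver a function from contexts to truth values for each sentence, but the fullstop and the context do all the work, and nothing systematic is said about how that A-intension depends on the sentence's constituents.

    GOD = object()  # the single semantic value shared by every sentence radical

    def radical_value(radical):
        """Semantic value of any sentence radical: always God."""
        return GOD

    def fullstop_value(context):
        """Semantic value of '.' in a context: a function from God to a truth value.
        What it maps God to depends on the facts and on which radical preceded '.'."""
        def apply(value):
            assert value is GOD
            return context["facts"].get(context["preceding_radical"], False)
        return apply

    def a_intension(sentence):
        """The A-intension of 'radical + fullstop': a function from contexts to truth values."""
        radical = sentence.rstrip(".")
        return lambda context: fullstop_value(
            {**context, "preceding_radical": radical})(radical_value(radical))

    # A context in which it doesn't rain (and 2+2=4):
    context = {"facts": {"it rains": False, "2+2=4": True}}
    print(a_intension("it rains.")(context))  # False
    print(a_intension("2+2=4.")(context))     # True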

Positions on intrinsic properties and causal/nomic roles

Warning: another pointless exercise in conceptual geography.


Can intrinsic properties have their causal/nomic role essentially? It seems not. Suppose something x is P. If P essentially occupies a certain causal role, say being such that all its instances attract one another, we can infer from x's being P that either there are no other P-things in x's surroundings or x and the other things will (ceteris paribus) move towards one another. But if we can infer from x's being P what happens in x's surroundings, P cannot be intrinsic. Being intrinsic means belonging to things independently of what goes on in their neighbourhood.

CD Cases and Xmodmap

Two unrelated notes that will not interest any of my readers.

First, my LaTeX Paper CD Case Generator can now be run on the server. The Redhat LaTeX packages seem to be rather old, so new stuff doesn't work well. Maybe I'll manually install a newer tetex version sometime.

Second, I've installed the Hoary Hedgehog on the little Powerbook. The only problem, as usual, was my keyboard layout. (My keyboard is German.) Here is the .Xmodmap file I wrote to fix this, just in case anyone -- my future self, in particular -- runs into the same problem.

Suppose ZFC proves its own inconsistency

Suppose we find a proof, in ZFC, that ZFC is inconsistent. Does it follow that ZFC is inconsistent?

On the one hand, if we could infer from ZFC ⊢ ~Con(ZFC) that ZFC is inconsistent, we could contrapositively infer the consistency of ZFC & Con(ZFC) from Con(ZFC); and since ZFC & Con(ZFC) obviously entails Con(ZFC), ZFC & Con(ZFC) would thereby entail its own consistency, which it can do only if it is inconsistent (Gödel's second incompleteness theorem). So it seems that we can only infer that ZFC is inconsistent from the observation that ZFC entails its own inconsistency if we presuppose that ZFC & Con(ZFC) is inconsistent.
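
Laid out step by step (the numbering is mine):

    1. Suppose the inference is good: if ZFC ⊢ ~Con(ZFC), then ZFC is inconsistent.
    2. Contrapositively: if ZFC is consistent, then ZFC does not prove ~Con(ZFC).
    3. ZFC does not prove ~Con(ZFC) iff ZFC & Con(ZFC) is consistent.
    4. So: if Con(ZFC), then Con(ZFC & Con(ZFC)).
    5. If the reasoning up to step 4 is available within ZFC & Con(ZFC) (as the worry assumes),
       then, since ZFC & Con(ZFC) proves Con(ZFC), it proves Con(ZFC & Con(ZFC)), i.e. its own consistency.
    6. By Gödel's second incompleteness theorem, it can do that only if it is inconsistent.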

Some Thoughts on Fundamental Structural Properties

Fundamental (or 'perfectly natural') properties are properties on whose distribution in a world all qualitative truths about that world supervene. That is, whenever two worlds are not perfect qualitative duplicates, they differ in the distribution of fundamental properties.
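
Put as an explicit supervenience conditional (the regimentation is mine): for any worlds w1 and w2,

    if w1 and w2 agree in the distribution of fundamental properties,
    then w1 and w2 are perfect qualitative duplicates.

The second sentence above is just the contrapositive of this.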

This is not the only job description for fundamental properties. If it were, far too many classes of properties could play that role. For instance, all qualitative truths trivially supervene on the distribution of all properties, or on the distribution of all intrinsic properties, or (for what it's worth) on the distribution of all extrinsic properties. (That's because no two things, whether duplicates or not, ever agree in all extrinsic properties.)

Structures all the way down

A structural property is a property that belongs to things in virtue of their constituents' properties and interrelations. For instance, the property being a methane molecule necessarily belongs to all and only things consisting of suitably connected carbon and hydrogen atoms.

There is two-way dependence: Necessarily, if something instantiates a structural property, then it has proper parts that instantiate certain other properties; conversely, if the proper parts of a thing instantiate those other properties then, necessarily, the thing itself instantiates the structural property.
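
With the modal operators made explicit (the regimentation is mine, using the methane example), the two directions are:

    (1) Necessarily: whatever instantiates being a methane molecule has proper parts
        (one carbon and four hydrogen atoms) that are suitably connected.
    (2) Necessarily: whatever has proper parts of that kind, suitably connected,
        instantiates being a methane molecule.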

Mistaken Intuitions

Some people intuit that

  • the subject in a Gettier case has knowledge;
  • Saul Kripke has his parents essentially;
  • "Necessarily, P and Q" entails "Necessarily, P";
  • whenever all Fs are Gs and all Gs are Fs, the set of Fs equals the set of Gs;
  • the liar sentence is both true and not true;
  • the conditional probability P(A|B) is the probability of the conditional "if B then A";
  • it is rational to open only one box in Newcomb's problem;
  • switching the door makes no difference in the Monty Hall problem (see the simulation below);
  • propositions are not classes;
  • people are not swarms of little particles;
  • a closed box containing a duck weighs less when the duck inside the box flies;
  • spacetime is Euclidean;
  • there is a God constantly interfering with our world.

They are wrong. All that is false.
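
At least one item on the list can be checked by brute force: the Monty Hall one, flagged above. A quick simulation (mine, not part of the entry) shows that switching does matter:

    import random

    def monty_hall_trial(switch):
        """One round of the Monty Hall game; True iff the player wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that hides no car and wasn't picked.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    for switch in (False, True):
        wins = sum(monty_hall_trial(switch) for _ in range(trials))
        print("switch" if switch else "stay", round(wins / trials, 3))
    # Typical output: staying wins about 0.333 of the time, switching about 0.667.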
