The OPP feed has been a bit bumpy recently because I made a few changes to the code. Things should be running smoothly again from today. The biggest changes are a bit of OCR to parse scanned documents (about 25% of all PDFs) and an improved algorithm to detect authors and co-authors. If you have contacted me about anything else: that has also been fixed.
I've finally finished rewriting my paper on self-locating belief dynamics. Here is the new version. The presentation is quite different from before. In particular, I now use transition probabilities in modeling Cartesian agents, which allows me to be more specific about what these probabilities are. There's also a new section where I try to show that traditional arguments in support of conditioning turn into arguments for my revised rule when self-locating beliefs are taken into account.
Speaking of chapter six, Williamson here argues that the sentence
1) if an animal escaped from the zoo, it would be a monkey
is not adequately formalized as
1') ∀x((Ax ∧ Ex) □→ Mx)

(Here Ax: x is an animal, Ex: x escaped from the zoo, Mx: x is a monkey; □→ is the counterfactual conditional.)
on the grounds that according to (1'), even the elephants are such that they would be monkeys if they escaped from the zoo. Williamson suggests that an adequate formalization might rather go like this:

1'') ∃x(Ax ∧ Ex) □→ ∀x((Ax ∧ Ex) → Mx)

That is: if some animal escaped from the zoo, then every animal that escaped would be a monkey.
Okay. Here are some thoughts on a talk Frank Jackson gave last week on Williamson on thought experiments.
The question is what Gettier discovered in his famous article. According to Frank, he revealed a fact about our concept 'knowledge': that it is not the same as our concept of justified true belief. According to Williamson, Gettier has revealed a fact about knowledge itself: that it is not justified true belief. A discovery merely about our concepts, Williamson says, "would show little of philosophical interest"; it would be "of significance primarily to theorists of concepts, not to epistemologists". For "the primary concern of epistemology is with the nature of knowledge, not with the nature of our concept of knowledge". (All of these are from p.206 of The Philosophy of Philosophy.) Frank disagrees. He thinks that results about the key concepts of a discipline are quite important to that discipline.
Long ago, I wrote a little script to automate Brian Weatherson's (at the time) Online Papers in Philosophy blog. The script crawls the home pages of various philosophers and extracts the author, title, and abstract of every paper posted there. It then visits the pages again every other day or so to check for updates. This way, I'm currently tracking about 15000 papers from about 2000 pages.
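In case you're curious how something like this works, here is a highly simplified sketch in Python (not the actual script: the URL and the link pattern are placeholders, and the real thing does a lot more guessing to find authors, titles and abstracts):

```python
import re
import time
import urllib.request

# Placeholder list of philosophers' home pages (the real list has ~2000 entries).
PAGES = ["https://example.edu/~some-philosopher/papers.html"]

# Crude pattern for links to papers; the actual script uses many more heuristics.
PAPER_LINK = re.compile(r'href="([^"]+\.pdf)"', re.IGNORECASE)

def find_papers(url):
    """Fetch a page and return the paper links found on it."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return PAPER_LINK.findall(html)

seen = set()
while True:
    for page in PAGES:
        for link in find_papers(page):
            if link not in seen:
                seen.add(link)
                print("new paper:", link)  # extract author/title/abstract here
    time.sleep(2 * 24 * 3600)  # revisit every other day
```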
Since the real OPP blog, maintained by Jonathan Ichikawa for the last few years, has caught a virus and is therefore not doing well right now, I've decided to dust off my script and make it available to the public. Then along came David Chalmers, who talked me into not making it public after all, but rather merging it into something even bigger that will hopefully go live very soonish. In the meantime, here is at least an RSS feed of my script, with daily updates of new papers it finds: OPP RSS.
A time traveler offers you a game. You can toss a fair coin. If it lands heads, you win $2; if it lands tails, you lose $1. The time traveler informs you that all fair coins tossed today will land tails. (He knows, because he's seen all the results before traveling back in time.) Do you play?
Suppose you decide to toss. Trusting the time traveler, you can then be confident that you will lose $1. You would not have lost anything if you hadn't tossed, so the alternative option would have been better. It seems that you've made the wrong decision.
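To put the arithmetic on the table (my gloss, not part of the original puzzle statement): trusting the time traveler, conditional on tossing the coin is certain to land tails, so the expected payoff of tossing is -$1, while staying out pays $0. Set the report aside and treat the coin as an ordinary fair coin, by contrast, and tossing looks good: 0.5 x $2 + 0.5 x (-$1) = +$0.50.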
Hey there. I've been a bit busy moving house, sitting in the garden, watching the falling leaves, etc. I've also thought some more about the absentminded driver. Here's something odd: on a certain interpretation of this case, we get an unstable decision problem that remains interestingly unstable even when mixing (randomization) is allowed.
Some background. A decision problem is unstable if a decision to do one thing inevitably makes another thing preferable. In a classic example, Death, who is very good at predicting people's whereabouts, has predicted where you will be tomorrow and awaits you there. Should you stay where you are (in Damascus) or flee to Aleppo?
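You can see the instability in a toy calculation; here's a sketch in Python (the accuracy figure and the utilities are made up for illustration and aren't part of the original story):

```python
# Toy Death in Damascus: Death has predicted your choice with high accuracy,
# so his whereabouts probabilistically track whatever you decide to do.
# The 0.9 accuracy and the 0/1 utilities are made-up illustration values.
ACCURACY = 0.9
ALIVE, DEAD = 1.0, 0.0

def expected_utility(action, decided):
    """EU of performing `action`, given that you have decided on `decided`
    (so Death has probably predicted `decided` and waits there)."""
    p_death_there = ACCURACY if action == decided else 1 - ACCURACY
    return p_death_there * DEAD + (1 - p_death_there) * ALIVE

for decided in ("stay", "flee"):
    eu = {a: expected_utility(a, decided) for a in ("stay", "flee")}
    better = max(eu, key=eu.get)
    print(f"decided to {decided}: EU(stay)={eu['stay']:.2f}, "
          f"EU(flee)={eu['flee']:.2f} -> {better} looks better")
```

Whichever option you settle on, the calculation favors the other one. In the classic case, mixing arguably restores stability: if you let a fair coin choose, Death can at best predict the mixture, both pure options come out on a par, and the 50/50 mix is a stable resting point.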
A curious aspect of the Sleeping Beauty debate is the role of Dutch Books. At first sight, it looks as if Dutch Book considerations support thirding (see e.g. Hitchcock 2004). However, as Halpern 2006 shows, Beauty can also be Dutch Booked if she is a thirder. Some have argued that these arguments might fail because in Sleeping Beauty type cases, credences and betting odds can come apart (see e.g. Bradley and Leitgeb 2006). I disagree. Instead, I will argue that her vulnerability to Dutch Books doesn't show that Beauty is irrational -- at least not if she is a halfer.
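To illustrate the kind of bookie strategy at issue, here is my reconstruction of a book against the halfer, in the spirit of Hitchcock 2004 (the stakes are arbitrary, and none of this is quoted from the papers). On Sunday the halfer regards a bet costing $15 that pays $30 on tails as fair; at each awakening she regards a bet costing $10 that pays $20 on heads as fair. Accepting all of them, she loses $5 however the coin lands:

```python
# Book against a halfer (my reconstruction, in the spirit of Hitchcock 2004).
# Sunday: P(heads) = 1/2, so paying $15 for $30-if-tails is fair.
# Each awakening: the halfer still has P(heads) = 1/2, so paying $10
# for $20-if-heads is fair. Tails means Beauty is awakened twice.
for outcome, awakenings in (("heads", 1), ("tails", 2)):
    sunday = (30 if outcome == "tails" else 0) - 15
    per_awakening = (20 if outcome == "heads" else 0) - 10
    print(f"{outcome}: net = {sunday + awakenings * per_awakening}")
# heads: net = -5
# tails: net = -5
```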
Bas van Fraassen's Reflection Principle says that your current beliefs should be in line with your current beliefs about your future beliefs. More precisely,
PRB: P_1(A | P_2(A)=x) = x.
P_1 is your credence at time 1, P_2 your credence at time 2. PRB says that conditional on the assumption that at time 2 you will believe A to degree x, you should already believe A to degree x at time 1. For agents who believe that they will (or might) change their beliefs in irrational ways between the two times, PRB is not a reasonable demand: if you know that you will be hit on the head tomorrow and will consequently come to believe that the Earth is flat, you shouldn't believe that the Earth is flat now. On the other hand, if you're certain that you will not change your beliefs in any such irrational way between now and tomorrow, then PRB is reasonable: suppose tomorrow you will believe that the Earth is flat because you have rationally responded to some very surprising new information; then you can already infer that there is some such information strongly supporting the hypothesis that the Earth is flat. But the fact that there is evidence for P is of course itself evidence for P. Hence you should already believe today that the Earth is probably flat.
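For an agent who is certain she will revise her beliefs only by conditioning on new evidence, PRB in effect falls out of the probability calculus. Here is a quick numerical check on a made-up example (the numbers are mine, not van Fraassen's):

```python
from itertools import product

# A coin's bias is 0.2 or 0.8, each with prior 1/2. A = "toss 2 lands heads".
# At time 2 you will have rationally conditioned on the outcome of toss 1.
# Check: P_1(A | P_2(A) = x) = x for each value x that P_2(A) can take.
worlds = list(product((0.2, 0.8), "HT", "HT"))  # (bias, toss1, toss2)

def prob(world):
    bias, toss1, toss2 = world
    p = 0.5  # prior probability of this bias
    p *= bias if toss1 == "H" else 1 - bias
    p *= bias if toss2 == "H" else 1 - bias
    return p

def p2_of_A(toss1):
    """Time-2 credence in A: credence in A conditional on toss 1's outcome."""
    ev = [w for w in worlds if w[1] == toss1]
    return sum(prob(w) for w in ev if w[2] == "H") / sum(prob(w) for w in ev)

def p1_A_given_p2(x):
    """P_1(A | P_2(A) = x), computed directly from the prior."""
    ev = [w for w in worlds if p2_of_A(w[1]) == x]
    return sum(prob(w) for w in ev if w[2] == "H") / sum(prob(w) for w in ev)

for toss1 in "HT":
    x = p2_of_A(toss1)
    print(f"P_2(A) = {x:.2f}:  P_1(A | P_2(A)={x:.2f}) = {p1_A_given_p2(x):.2f}")
# Both sides match, as PRB requires.
```

Conditioning on the value of your future credence is here just conditioning on the evidence that produces it, which is why the two sides come out equal.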
The decision theory puzzle that I posted recently turns out to appear in a series of papers by Reed Richter from the mid-1980s. I'm not convinced by Richter's treatment, though, and I'm still somewhat puzzled.
Here is Richter's version:
Button: You and another person, X, are put in separate rooms where each of you faces a button. If you both push the button within the next 10 minutes, you will (both) receive 10 Euros. If neither of you pushes the button, you (both) lose 1000 Euros. If one of you pushes and the other one doesn't, you (both) get 100 Euros.
What would you do? Most people, I guess, would push the button. After all, if you don't push it, there is a high risk of losing 1000 Euros. For how can you be certain that X won't do the same? On the other hand, if you push the button, the worst possible outcome is a gain of 10 Euros.
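Tabulating the payoffs makes the reasoning explicit (the payoffs are Richter's; the quick check is mine):

```python
# Your payoff in Button, as a function of (your choice, X's choice).
PAYOFF = {
    ("push", "wait"): 100,
    ("push", "push"): 10,
    ("wait", "push"): 100,
    ("wait", "wait"): -1000,
}

for x_choice in ("push", "wait"):
    pays = {me: PAYOFF[(me, x_choice)] for me in ("push", "wait")}
    better = max(pays, key=pays.get)
    print(f"if X will {x_choice}: push -> {pays['push']}, "
          f"wait -> {pays['wait']}; better to {better}")
# if X will push: push -> 10, wait -> 100; better to wait
# if X will wait: push -> 100, wait -> -1000; better to push
```

Pushing guarantees at least 10 Euros, while not pushing risks the 1000 Euro loss; but as the best responses show, if you could be certain that X will push, not pushing would be better.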