
Sharing narrow content

Since narrow content is not determined by external factors, it depends much more on other propositional states than wide content does. For example, if you believe that Aristotle was human whereas I believe he was a poached egg, the narrow content of all our beliefs about Aristotle will differ. When I believe that Aristotle was Alexander's teacher, you can't have a belief with exactly the same narrow content unless you also come to believe that Aristotle was a poached egg. Likewise for imaginings: when we both imagine Aristotle teaching Alexander, our imaginings cannot have the same narrow content.

Similarly, I think, if Ted believes that for any atoms there is a fusion, whereas Cian disbelieves this, they cannot share any imagining about atoms.

Restricted deducibility and deferential understanding

Dave Chalmers kindly explained his views on deducibility to me. He thinks that anything one could reasonably call non-deferential understanding of the fundamental truths would suffice for being able in principle to deduce macrophysical facts, provided that these fundamental truths, unlike my P, contain phenomenal facts and laws of nature. He also notes that I shouldn't have called these restrictions (to non-deferential understanding and the rich content of fundamental truths) assumptions, since they are really just restrictions. I'm still not sure if any kind of non-deferential understanding would suffice, but with the restrictions in place it's not as easy to come up with counterexamples as I thought.

A priority, deducibility and understanding

Back to the question of deducibility.

According to the deducibility thesis, the fundamental truths (plus indexicals, plus a 'that's all' statement) a priori entail every truth. More precisely, where P is a complete description of the fundamental truths and M any other truth, the deducibility thesis says that the material conditional 'P → M' is a priori.
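In symbols (my own shorthand, not anyone's official notation: write PTI for the conjunction of P with the indexical truths and the 'that's all' statement, and $\mathcal{A}$ for 'it is a priori that'), the thesis is roughly:

\[ \text{for every truth } M: \quad \mathcal{A}(PTI \rightarrow M). \]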

Infinite analyses?

Dave Chalmers agrees that any concept can be explicitly analyzed by an infinite conjunction of application-conditionals. But he wants to restrict 'explicit analysis' to finite analyses. That certainly makes sense, but I doubt that there are any concepts for which the application-conditionals cannot be determined by finite means. For example, I think it will usually suffice to partition the epistemic possibilities into, say, 50 zillion cases and specify the extension in each of these cases. Admittedly, I can't prove that, but the facts that concepts can be learned and that our cognitive capacities are limited seem suggestive.
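As a rough sketch of what such a finite analysis could look like (the $D_i$ and $E_i$ are schematic placeholders of my own, not anything Chalmers is committed to): if $D_1, \dots, D_n$ describe the cases that partition the epistemic possibilities and $E_i$ is the extension of $C$ in case $D_i$, the analysis would be

\[ Cx \;\leftrightarrow\; (D_1 \wedge x \in E_1) \vee \dots \vee (D_n \wedge x \in E_n), \]

with $n$ somewhere in the region of 50 zillion.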

On the very idea of non-explicit analysis

Dave Chalmers told me to read some of his papers. I have, and I'll probably say more on the deducibility problem soon. Here is just a little thought on conceptual analysis.

Chalmers suggests that we don't need explicit necessary and sufficient conditions to analyze a concept. Rather, we can analyze it just by considering its extension in hypothetical scenarios. What is it to consider a hypothetical scenario? The result seems to depend on how the scenario is presented. For example, 'the actual scenario' denotes the same scenario as 'the closest scenario to the actual one in which water is H2O'. But the difference in description could make a difference for judgements about extensions. Chalmers avoids such problems by explaining (§3.2, §3.5) that to consider a scenario is to pretend that a certain canonical description is true. Hence to analyze a concept, we evaluate material conditionals of the form 'if D then the extension of C is E', where D is a canonical description. (Are there only denumerably many epistemic possibilities or can D be infinite?) Now fix on a particular concept C and let K be the (possibly infinite) conjunction of all those 'application conditionals' (§3) that get evaluated as true. Replace every occurrence of 'C' in K by a variable x. Then 'something x is C iff K' is an explicit analysis giving necessary and sufficient conditions for being C.
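Schematically (my rendering, glossing over how 'the extension of C is E_i' turns into a condition on the variable $x$): if the true application-conditionals are 'if $D_i$ then the extension of $C$ is $E_i$' for $i$ in some index set $I$, the resulting explicit analysis is

\[ \forall x\, \Bigl( Cx \;\leftrightarrow\; \bigwedge_{i \in I} \bigl( D_i \rightarrow x \in E_i \bigr) \Bigr), \]

where the conjunction may well be infinite.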

There may not always be a simple, obvious, or finite explicit analysis, but at least there is always some explicit analysis. If, moreover, satisficing is allowed, it is very likely that we can settle for something much less than infinite.

A priori deducibility and fundamental facts

When I tried to spell out the 'modus tollens' I mentioned on Monday, I came across something that may be interesting.

Frank Jackson argues that facts about water are a priori deducible from facts about H2O:

1. H2O covers most of the earth.
2. H2O is the watery stuff.
3. The watery stuff (if it exists) is water.
C. Therefore, water covers most of the earth.

1 and 2 are a posteriori physical truths, 3 is an a priori conceptual truth.
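Regimented in first-order terms (my own formalization; the existence qualification in 3 is suppressed): write $Covers(x)$ for '$x$ covers most of the earth' and $w$ for 'the watery stuff'. The argument is then

\[ Covers(\mathrm{H_2O}), \quad \mathrm{H_2O} = w, \quad \mathrm{water} = w \;\;\vdash\;\; Covers(\mathrm{water}), \]

which is valid by the laws of identity (symmetry, transitivity and substitution).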

More on privacy, apriority, and two-dimensionalism

Here are, very quickly, some more thoughts on the matters I talked about here and there, inspired by another discussion with Christian.

You don't have to know much about plutonium to be a competent member of our linguistic community. One thing you have to know is that plutonium is the stuff called 'plutonium' in our community. Maybe that alone suffices. Of course, if no one knew more about plutonium than this, the meaning of 'plutonium' would be quite undetermined. To fix the meaning, it would suffice if a few persons, the 'plutonium experts', knew in addition that this element (where each of the experts points at some heap of plutonium) is plutonium.

New hope for linguistic ersatzism?

Are all truths a priori entailed by the fundamental truths upon which everything else supervenes? If 'entailed' means 'strictly implied', this is trivially true. The more interesting question is: Are all truths deducible from the fundamental truths (deducible, say, in first-order logic) with the help of a priori principles?

If yes, then it seems that Lewis' 'primitive modality' argument against linguistic ersatzism (On the Plurality of Worlds, pp.150-157) fails. Recall: Lewis argues that if you take a very impoverished worldmaking language then even though it will be feasible to specify (syntactically) what it is for a set of sentences to be maximally consistent, it will be infeasible to specify exactly when such a set represents that, e.g., there are talking donkeys. Now if all truths are a priori deducible from fundamental truths, and -- as seems plausible -- fundamental truths are specifiable in a very impoverished language, then we can simply say that a maximal set of such sentences represents that p iff p is a priori deducible from it.
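In my notation: where $A$ is the set of a priori principles and $S$ a maximal consistent set of sentences of the impoverished worldmaking language, the proposal is

\[ S \text{ represents that } p \;\;\text{iff}\;\; S \cup A \vdash_{\mathrm{FOL}} p. \]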

Unfortunately, I find the 'primitive modality' argument quite compelling. So, by modus tollens, I have to conclude that not all truths can be a priori deducible from fundamental truths. Does anyone know whether Lewis himself believes the deducibility claim he attributes to Jackson in 'Tharp's Third Theorem' (Analysis 62/2, 2002)?

Moved

After two weeks of homelessness, I've moved into my new flat today.

Everything but the beetles cancels out

This is a continuation of my last post and also partly a reply to concerns raised by my tutor Brian Weatherson.

Imagine a small community consisting of three elm experts A, B, and C.

First case: Each of A, B, and C knows enough to determine the reference of 'elm', but their reference-fixing knowledge differs. However, they believe that their different notions of 'elm' necessarily corefer. This is the case Lewis discusses in 'Naming the Colours'.
