Against abstract identifications

Some philosophers believe that the Second World War is a triple of a thing, a property and a time. Others have argued that my age is a pair of an equivalence class of possible individuals and a total ordering on such classes. It is also often assumed that the number 2 is the set {{{}},{}}; that the meaning of "red" is a function from contexts to functions from possible individuals to functions from possible worlds to truth values; that possible worlds are sets of ... sets of properties; and that truth values are the numbers 0 and 1 (aka the sets {} and {{}}).

These kinds of identifications go against common sense and are to a certain degree arbitrary. The two features are related, I think.

The arbitrariness is due to the fact that in each case, many other candidates would have served equally well: there's no good reason to prefer 0 and 1 as truth values over 2 and 3, or to prefer the von Neumann numbers over the Zermelo numbers, or to prefer triples of things, properties and times over unit sets of such triples. Some of the candidates are formally easier to handle than others, but I don't think this carries much weight: it is no part of the job description for events or ages that they have a simple set-theoretic structure.
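To make the point concrete, here is a minimal sketch (not from the post) of the two rival encodings of the natural numbers just mentioned, using Python frozensets to stand in for pure sets. The function names `von_neumann` and `zermelo` are my own labels for illustration.

```python
def von_neumann(n):
    """von Neumann numeral: 0 = {}, and n+1 = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}  # successor: adjoin the set to itself
    return s

def zermelo(n):
    """Zermelo numeral: 0 = {}, and n+1 = {n}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})  # successor: wrap in a singleton
    return s

# Either series supports a zero, a successor and an ordering, so either
# can play the number role; yet the two encodings diverge from 2 onward.
assert von_neumann(1) == zermelo(1)          # both are {{}}
assert von_neumann(2) != zermelo(2)          # {{},{{}}} vs {{{}}}
```

Nothing internal to arithmetic favours one series over the other, which is just the arbitrariness the post complains about.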

If several candidates satisfy a job description equally well, the corresponding term is best regarded as indeterminate between those candidates. This means that there are no true statements identifying events, ages, numbers, meanings, worlds or truth values with anything else. If all candidates are sets, at best we can say that events etc. are sets of some kind or other. But often, and maybe always, there are equally good non-set candidates: instead of a triple, why not take the mereological fusion of the triple and the empty set (which isn't a set even on Lewis's mereological theory of sets)? Instead of the von Neumann series, why not take that series with 7 replaced by Julius Caesar? (I'd also like to believe that sets themselves can be reduced to other things, so identification shouldn't stop there.)

In philosophical practice, it is awkward to always keep in mind that abstract roles have many realisers. It is easier, and mostly harmless, to choose one particular realiser -- preferably a simple one -- and pretend that it is the unique one. This is harmless as long as the conclusions we draw don't depend on idiosyncratic facts about the chosen realiser. Unfortunately, this happens every now and then. This is especially bad if the chosen candidate isn't a good realiser in the first place. For instance, some philosophers have argued that logically equivalent propositions are identical because propositions are sets of possibilities; or that giving minimal credence (credence 'zero') isn't the same as ruling out for certain because credences are real numbers (and thus behave in a certain way when there are infinitely many possibilities to consider).

Comments

# on 04 February 2007, 11:52

What about reduction and explanation? If you do not know what "red" does, or are puzzled about what numbers are, the above statements might help you, or might show you how to connect this to other theories you have. E.g. Quine, I think, would take identification to be reduction, and he wouldn't be too troubled that there is not a single way to reduce. Or are you more troubled about this being "abstract"?

M.

# on 17 February 2007, 11:23

Is it at all arbitrary that 2 is (in some people's eyes) {{}, {{}}}, when those curly brackets indicate ZFC sets, or that collections are (in some people's eyes) ZFC sets? Mathematicians find reasons for each choice that leads to such identifications. The end product may go against common sense; but so does the distinction between numbers and numerals, not to mention how some people regard numbers as intrinsically coloured, whence, had there happened to be more such people (e.g. because the gene for that was a fitter gene in other respects), that would be common sense!

# on 18 February 2007, 06:23

Although it's tempting to be irked by strange ontological postulations like these, I think it's important to look at them in the context of the theories as a part of which they are postulated.

A semantic theory, for example, may postulate any number of abstract entities (models, propositions, etc.), but only as a means to the end of accurately predicting competent speakers' intuitions about the meanings of the sentences of the object language. If one offers better empirical predictions than another, then we have more reason to accept the postulations, at least qua theoretic devices. Furthermore, if two theories offer identical or at least equally materially adequate empirical predictions, but postulate different entities (or abstract identifications), I still don't think that 'arbitrary' is the right word. Rather, I would say that the evidence underdetermines the correct theory, and therefore the more useful postulation.

As for the idea that abstract identifications are against common sense, I have two responses. First, I have serious doubts about whether or not common sense (i.e. the kind even non-philosophers have) has anything to say about ontology, especially the ontologies of the kinds of things you've mentioned. Secondly, I am willing to accept ontologies which seem counterintuitive at first, as long as they come as part of a theory which makes accurate, insightful, and useful predictions about its subject matter.

# on 20 February 2007, 11:26

I agree with Daniel that 'underdetermined' would be the more accurate description. Just as we speak of "lies, damned lies and statistics", we might speak of "dreams, mirages and scientific models". Scientists may have a feel for the range of validity of a model, just as statisticians understand statistics, but the rest of us need to be unusually careful, especially if we are doing ontology. Identifying matter with sets of superstrings, for example, is surely unlike identifying water with H2O, in some important ontological respects.

But I find it tricky to pinpoint the difference (and would welcome advice on that). We might, for example, find ourselves in a situation where as many experts in the former case (particle physicists) as in the latter case (inorganic chemists) are working within such assumptions, for apparently similar kinds of reasons. And of course, all realistic hypotheses are underdetermined by the primary evidence, to some extent; and in many ways that only matters within the context of some particular application.

Of course, the former model is over-represented in the literature, from the point of view of ontology, as even the scientists would recognise. The reasons for its popularity are, to a greater extent than in the latter case, social or methodological, rather than scientific or ontological (although that is also a difficult line to draw). And for a similar reason I am sympathetic to Wo's point about 2 and {{}, {{}}}. The reasons for their identification that were collated by Steinhart (2003, Synthese), for example, whilst showing that that identification is not arbitrary, do not seem to show that they really are the same, or even to be providing much relevant evidence for that ontological identity; they seem only to be reasons for using that particular model of 2 within mathematics.

So, are the numbers really von Neumann ordinals within ZFC, because that is what the scientific experts have reasons to regard them as being, while our common sense intuitions are no more valid than they are in the case of pure water appearing to be perfectly smooth? We don't seem to have good reasons for such a sweeping claim (and perhaps that seeming is no more than common sense).
