
First bunch of questions: properties and semantic values

This is a follow-up to yesterday's entry.

Andy Egan argues that functions from worlds and times to sets of things are ideally suited as semantic values of predicates, even better than mere sets of things.

I agree, and so would Lewis. In fact, Lewis would say that functions from worlds and times are still too simple to do the job of semantic values. There are more intensional operators in our language than temporal and modal operators. Among others, there are also spatial operators and precision operators ("strictly speaking"). So our semantic values for predicates should be functions from a world, a time, a place, a precision standard and various other 'index coordinates' to sets of objects. This is more or less what Lewis assigns to common nouns in "General Semantics" (see in particular §III). Other predicates like "is green" that do not belong to any basic syntactic category get assigned more complicated semantic values: functions from functions from indices to things to functions from indices to truth values. In later papers, Lewis argues that we may need several of the world and time coordinates and, more importantly, a further mapping that accounts for context-dependence (and delivers the kind of truth-conditions needed in his theory of linguistic conventions). Thus for predicates, we get something like a function from centered worlds to functions from functions from possibly several worlds, times, places, precision standards, etc. to things to functions from such worlds, times etc. to truth values. (Alternatively, if we go for the 'moderate external strategy' (Plurality) and reserve "semantic value" for 'simple, but variable semantic values' ("Index, Context and Content"), we can say that the semantic value of a predicate in a given context is the value of the function just mentioned for that context.)
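
Since these nested function types are hard to keep track of in prose, here is a minimal Haskell sketch of the layering. The coordinate types and all the names are placeholders of my own for illustration, not Lewis's notation:

    -- Placeholder coordinate types; any types would do here.
    type World     = Int
    type Time      = Int
    type Place     = Int
    type Precision = Int
    data Thing     = Thing String

    -- An index bundles every coordinate that some intensional operator can shift.
    type Index = (World, Time, Place, Precision)

    -- Common nouns ("General Semantics", §III):
    -- functions from indices to sets of things.
    type NounSV = Index -> [Thing]

    -- Predicates outside the basic categories, like "is green":
    -- functions from (functions from indices to things)
    -- to (functions from indices to truth values).
    type PredicateSV = (Index -> Thing) -> (Index -> Bool)

    -- The later context mapping: from contexts (here crudely modelled
    -- as centered worlds) to the variable semantic values above.
    type Context = (World, Thing)
    type ContextualPredicateSV = Context -> PredicateSV

On the 'moderate external strategy', the simple-but-variable semantic value in a context is then the result of applying a ContextualPredicateSV to that context.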

Egan and Lewis on Properties

Andy Egan, in "Second-Order Predication and the Metaphysics of Properties", argues that there is a bug in Lewis' theory of properties which can be fixed by identifying properties not just with sets but with functions from worlds (and times) to sets. I disagree: there is no bug. But there are some interesting questions about Lewisian properties nearby.

Here's the alleged bug. Consider the second-order property being somebody's favourite property. This property belongs to Green. So on Lewis' account, Green is a member of the set being somebody's favourite property. But at another possible world, Green is nobody's favourite property. So it is not a member of that set. Contradiction. In the parallel case of accidental properties of individuals, Lewis resorts to counterpart theory: If Graham Greene is a writer in our world and not in another world, that's not because Greene both is and isn't a member of the set writer, but because Greene is a member while one of his counterparts isn't. However, this solution doesn't work for Green because properties don't have counterparts.
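
For concreteness, the difference between the two accounts comes down to a difference in type, as in this small Haskell sketch (placeholder types, my names):

    type World = Int
    type Time  = Int
    data Thing = Thing String

    -- Lewis: a property is simply a set of (possible) things.
    type LewisProperty = [Thing]

    -- Egan's proposed fix: a property is a function from worlds (and times)
    -- to sets of things, so "being somebody's favourite property" can have
    -- Green in its value at one world but not at another, without contradiction.
    type EganProperty = World -> Time -> [Thing]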

Strong necessities and reductive theories of modality

I would like to believe that all necessary truths fall into the following two kinds.

1. Analytic truths. By processing the semantic content of such a sentence we can find out that its truth conditions are universally satisfied, no matter how the world turns out and no matter what other world we talk about.

2. Truths whose evaluation at other worlds depends on contingent features of the actual situation. What we can know by linguistic processing is that if these features are such as to make such a sentence true, then it remains true even when we talk about other worlds, that is, when the sentence is embedded in "at world such-and-such" or "necessarily". For example, if we know that there are sheep, we can figure out that "actually, there are sheep" is necessary, because it is a rule of our language that (roughly) "actually p" is true at a world w iff p is true at the actual world. Knowledge about ordinary, contingent features of the current situation together with linguistic competence always suffices to learn that these a posteriori necessary sentences are true.
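
Stated as a rule, in my formalization of the rough clause above:

\[
\text{``actually } p \text{'' is true at a world } w \text{ (relative to actual world } @\,) \;\Longleftrightarrow\; p \text{ is true at } @ .
\]

Since the right-hand side does not mention w, once p is true at @, "actually p" is true at every world, and hence necessary.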

Descriptive knowledge and shared reference

Some forms of descriptivism say that when I utter a sentence with a proper name in it, communication only succeeds if there is a description, a set of properties, you and I both associate with that name. But often such descriptions are hard to find, so some conclude that instead it suffices if you and I refer to the same object with that name, no matter what properties mediate our reference, or whether it is mediated by associated properties at all.

In fact, shared reference doesn't quite suffice for successful communication. We should also require that the shared reference is common knowledge. If I tell you that Ljubljana is pretty but you have no idea whether by "Ljubljana" I refer to the town you call "Ljubljana" or whether instead I refer to my neighbour or the moon, you don't understand what I'm trying to tell you.

Temporary Membership

OK, back. The bike trip was cool.

[Photo: Passo dello Stelvio]

Meanwhile, in the comments, David Sanford raised the question whether sets gain and lose members. One might say yes, for arguably

(*) If y is the set of all Fs, then x is a member of y iff x is F.

Since the wall in front of me is white and there is a set of white things, by (*), the wall is a member of that set. But last year, that wall was green, and surely it was never the case that something green was a member of the set of white things; so the wall was not a member of that set last year. It follows that the set of white things gained a member when I painted the wall.
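
Read with an explicit time parameter, (*) becomes (my formalization):

\[
\text{If } y = \{\, x : x \text{ is } F \,\}, \text{ then for every time } t\colon\; x \text{ is a member of } y \text{ at } t \;\Longleftrightarrow\; x \text{ is } F \text{ at } t .
\]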

Absence Note

Not much happening here this summer. I've been busy organizing my near future, traveling around, and writing too many other things (as I can feel in my arms). But the regular schedule should resume sometime soon, probably in August. Perhaps I'll even manage to work through all my emails. Anyway, this is just a note that for the next 12 days or so I will be even more absent, because I'm in the Alps again (without internet access).

What are debates about mereology about?

If meaning is largely determined by use and inferential connections, then a word that is used very differently by two groups of people, who accept very different inferential connections for it, does not mean the same thing in those groups.

On this account, mereological nihilists don't mean the same by mereological vocabulary as I (a universalist) do: they reject all ordinary examples of parthood, overlap etc.; they reject some of the most central theoretical principles governing these notions; and they ask unintelligible questions, like:

Gettier Cases in Mathematics and Metaphysics

I once believed that in non-contingent matters, knowledge is true, justified belief. I guess my reasoning went like this:

How do we come to know, say, metaphysical truths? Usually not by direct insight, and sometimes not even by simple reflection on meanings. Rather, we evaluate arguments for and against the available options, and we opt for the least costly position. If that's how we arrive at a metaphysical belief, the belief is clearly justified -- we have arguments to back it up. But it may not be knowledge: it may still be false. Metaphysical arguments are hardly ever conclusive. But suppose we're lucky and our belief is true. Then it's knowledge: what more could we ask for? Surely not any causal connection to the non-contingent matters.

But now that Antimeta has asked for Gettier cases in mathematics, it seems to me that there are perfectly clear examples (I've posted a comment over there, but it seems to have been lost):

From Chance to Credence

Lewis argues that any theory of chance must explain the Principal Principle, which says that if you know that the objective chance for a certain proposition is x, then you should give that proposition a credence close to x. Anyone who proposes to reduce chance to some feature X, say primitive propensities, must explain why knowledge of X constrains rational expectations in this particular way.
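
In the formulation of Lewis's "A Subjectivist's Guide to Objective Chance", roughly:

\[
C(A \mid X \wedge E) = x ,
\]

where C is any reasonable initial credence function, X is the proposition that the chance of A (at the relevant time) equals x, and E is any further evidence that is admissible at that time.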

How does Lewis's own theory explain that?

On Lewis's theory, the chance of an event (or proposition) is the probability-value assigned to the event by the best theory. Those 'probability-values' are just numerical values: they are not hypothetical values for some fundamental property; they need not even deserve the name "probability". However, one requirement for good theories is that they assign high probability-values to true propositions. Other requirements for good theories are simplicity and strength. The best theory is the one that strikes the best compromise between all three requirements. So the question becomes: why should information that the best theory assigns probability-value x to a proposition constrain rational expectations in the way the Principal Principle says?
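
Schematically, and only as my gloss (Lewis gives no such formula, and the additive tradeoff is just an illustration):

\[
T^{*} = \arg\max_{T} \bigl[ \mathrm{fit}(T) + \mathrm{simplicity}(T) + \mathrm{strength}(T) \bigr] , \qquad ch(A) = \Pr_{T^{*}}(A) ,
\]

where fit rewards assigning high probability-values to truths, and the chance of A is whatever number the winning theory assigns to A.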

Knowledge of laws and knowledge of naturalness

Some accounts of laws of nature make it mysterious how we can empirically discover that something is a law.

The accounts I have in mind agree that if P is (or expresses) a law of nature, then P is true, but not conversely: not all truths are laws of nature. Something X distinguishes the laws from other truths; P is a law of nature iff P is both true and X. The accounts disagree about what to put in for X.

Many laws are general, and thus face the problem of induction. Limited empirical evidence can never prove that an unlimited generalization is true. But Bayesian confirmation theory tells us how and why observing evidence can at least raise the generalization's (ideal subjective) probability. The problem is that for any generalization there are infinitely many incompatible alternatives equally confirmed by any finite amount of evidence: whatever confirms "all emeralds are green" also confirms "all emeralds are grue"; for any finite number of points there are infinitely many curves fitting them all, etc. When we do science, we assign low prior probability to gerrymandered laws. We believe that our world obeys regularities that appear simple to us, that are simple to state in our language (including our mathematical language). Let's call those regularities "apparently simple", and the assumption that our world obeys apparently simple regularities "the induction assumption".
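
The Bayesian bookkeeping behind this is standard: when two hypotheses both entail the evidence, conditionalizing preserves the ratio of their priors. If P(E | H1) = P(E | H2) = 1, then

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)\, P(H_1)}{P(E \mid H_2)\, P(H_2)} \;=\; \frac{P(H_1)}{P(H_2)} .
\]

So if "all emeralds are green" ends up ahead of "all emeralds are grue" on the same observations, that is entirely the work of the priors -- that is, of the induction assumption.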
