Knowledge of laws and knowledge of naturalness
Some accounts of laws of nature make it mysterious how we can empirically discover that something is a law.
The accounts I have in mind agree that if P is (or expresses) a law of nature, then P is true, but not conversely: not all truths are laws of nature. Something X distinguishes the laws from other truths; P is a law of nature iff P is both true and X. The accounts disagree about what to put in for X.
Many laws are general, and thus face the problem of induction. Limited empirical evidence can never prove that an unlimited generalization is true. But Bayesian confirmation theory tells us how and why observing evidence can at least raise the generalization's (ideal subjective) probability. The problem is that for any generalization there are infinitely many incompatible alternatives equally confirmed by any finite amount of evidence: whatever confirms "all emeralds are green" also confirms "all emeralds are grue"; for any finite number of points there are infinitely many curves fitting them all; and so on. When we do science, we assign low prior probability to gerrymandered hypotheses. We believe that our world obeys regularities that appear simple to us, that are simple to state in our language (including our mathematical language). Let's call those regularities "apparently simple", and the assumption that our world obeys apparently simple regularities "the induction assumption".
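To make the Bayesian point a little more concrete, here is a minimal sketch (my own toy illustration, not anything from confirmation theory proper): as long as the gerrymandered hypothesis predicts all the evidence gathered so far just as well as the simple one, conditionalizing leaves their odds exactly where the priors put them.

```python
# Toy sketch: observing n green emeralds (all examined before the critical
# time T) has probability 1 under "all emeralds are green" and probability 1
# under "all emeralds are grue". The likelihood ratio is therefore 1, and
# Bayesian updating cannot move the odds away from the priors.

def posterior_odds(prior_simple, prior_gerrymandered, n_observations):
    likelihood_ratio = 1.0 ** n_observations  # both hypotheses fit the data perfectly
    return (prior_simple / prior_gerrymandered) * likelihood_ratio

print(posterior_odds(0.9, 0.1, 1000))  # 9.0: the evidence did not move the odds
print(posterior_odds(0.5, 0.5, 1000))  # 1.0: without biased priors, no verdict
```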
It is hard to justify the induction assumption, given that it is false at so many worlds. Notice that it wouldn't help to know that our predicates trace objective, non-contingent boundaries, that they 'carve nature at its joints'. There will still be all those irregular worlds where emeralds are grue and E=mc^2 except for m=187.2N on Tuesdays, in which case E=mc^3. Even if we knew that our words express perfectly natural properties, we'd still lack any reason to believe that we are not in one of those worlds.
Anyway, I don't want to talk about induction. (I mentioned all this only because I will need it later.) Let's take it for granted that apparently simple laws are more likely to be true than apparently gerrymandered alternatives. Then we can see how and why empirical evidence makes certain (assumptions about) laws probable, and that's all we can reasonably ask for. But what the evidence makes probable is only that the laws are true. For the laws to be laws, they also need to be X. How can we find that out?
Well, it depends on X. If X says that for P to be (or express) a law, it must satisfy such-and-such syntactic constraints, that's easy to check. However, no syntactic criteria will do for X. (Maybe some syntactic conditions are necessary for lawhood: that laws must be universal quantifications (unlikely), or that laws must not be theorems of first-order logic. Then let X be the remaining features that distinguish the laws of nature from other truths meeting those syntactic conditions.) If X says that P must play a certain role in our theorizing, that's also relatively easy to find out. But again, this can't work: it rules out the possibility of unknown laws.
A better candidate for X is that P must be a law of nature. This makes the definition of laws circular, but who said that "law of nature" has a reductive analysis? Maybe it does not. (We can hide the circularity by demanding e.g. that P be nomologically or physically necessary, but that doesn't really make a difference.)
But then how can we find out whether P has this primitive property of lawhood? Does it help to discover, inductively, that P is true? It seems not. If lawhood is primitive, it is logically independent of being a true regularity. Could the discovery at least make it probable that P has lawhood? I don't see how. Perhaps whatever confirms that P is true also confirms that P has lawhood, because -- mysteriously -- the primitive property lawhood only belongs to propositions when they are true. But this will not make it probable that P has lawhood unless the prior probability of P having lawhood (1) exceeds the prior probability of P's alternatives having lawhood, and (2) exceeds the prior probability of P being a lawless truth. We could of course strengthen our induction assumption to also cover (1) and (2). But this just triples the embarrassment of relying on the assumption.
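Here, again, is a minimal Bayesian sketch of why (1) and (2) matter (my own illustration, with made-up numbers): even if the evidence is certain under the hypothesis that P is a law, the posterior probability of lawhood can never rise above the share the priors give it relative to the lawless-truth and rival-law hypotheses.

```python
# Three exhaustive hypotheses (illustrative only):
#   "law":      P is true and has primitive lawhood
#   "accident": P is a lawless truth
#   "rival":    a gerrymandered alternative to P is the law instead
# The observed instances of P are certain under "law" and "accident", and
# (so far) equally compatible with "rival", whose likelihood can be varied.

def posterior_lawhood(prior_law, prior_accident, prior_rival, lik_rival=1.0):
    total = prior_law * 1.0 + prior_accident * 1.0 + prior_rival * lik_rival
    return prior_law * 1.0 / total

print(posterior_lawhood(0.01, 0.50, 0.49))       # 0.01: the evidence does not help
print(posterior_lawhood(0.01, 0.50, 0.49, 0.0))  # ~0.02: even with rivals ruled out,
                                                 # bounded by prior_law / (prior_law + prior_accident)
```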
Can we directly observe whether P has lawhood, or do we have some other basic cognitive faculty for finding that out? One might say so because lawhood is intimately connected with counterfactual conditionals, objective chance and causality. And some philosophers have claimed that we can see or otherwise directly recognize counterfactuals or causal relations. Nevertheless, I will ignore this possibility. (It seems incredible to me that Einstein made use of any super-sensory empirical faculties when he developed the GTR. It seems even more incredible that we make use of any such faculties when we judge that E=mc^2 is a law of nature.)
For the same reason, I also believe it will make no difference to the present problem if one uses primitive counterfactuals, primitive chance or primitive causation instead of primitive lawhood in the characterization of X. Nor will it make a difference to employ primitive, contingent relations between universals: we don't have a basic epistemic faculty for discovering these relations.
For a while, I thought the Mill-Ramsey-Lewis account of laws faced the same problem, but now I'm not sure any more.
On the Mill-Ramsey-Lewis account, X says that P should be a consequence of the best overall theory, the one that strikes the best balance between simplicity, strength and fit. At first sight, this looks good, because we can easily check whether P is a consequence of our current theories. If it is, then to the degree that we can confidently assume that the best theory is not too different from our current theories -- in particular with respect to the parts that entail P --, we can also confidently assume that P is a law.
The problem is whether we can know about the simplicity of theories. We can certainly check the logical and mathematical complexity of our theories as expressed in our own language. But to make simplicity objective, Lewis demands that a theory's simplicity is determined by its logical complexity when expressed in a language whose predicates exclusively express perfectly natural properties. So in order to know whether our current theories are reasonably simple, we must know whether our predicates express reasonably natural properties, that is, whether the apparently natural properties are objectively natural. But there are reasons to doubt that we can know that.
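To see why the relativization to a language matters, here is a toy illustration (mine, not Lewis's): the very same regularity that is short in a "green"/"blue" vocabulary becomes long and disjunctive when translated into a "grue"/"bleen" vocabulary, so a syntactic complexity measure only becomes objective once the vocabulary is fixed, e.g. to perfectly natural predicates.

```python
# Toy illustration: measure "simplicity" as statement length, and note how
# the measure depends on the vocabulary the statement is couched in.
# Translation key assumed: Green(x) iff (Grue(x) & Examined(x)) or (Bleen(x) & ~Examined(x)).

IN_GREEN_LANGUAGE = "forall x: Emerald(x) -> Green(x)"
IN_GRUE_LANGUAGE = ("forall x: Emerald(x) -> "
                    "((Grue(x) & Examined(x)) | (Bleen(x) & ~Examined(x)))")

# The same regularity gets very different scores in the two languages.
print(len(IN_GREEN_LANGUAGE), len(IN_GRUE_LANGUAGE))
```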
First, Lewis claims that it is up to science, not philosophy, to discover the objectively natural properties. So if we wonder which of our predicates express natural properties (or properties easily definable in terms of natural ones), we should look at the predicates occurring in our basic scientific theories. But this is clearly circular if we want to find out whether the predicates in those theories are natural.
Second, there are people in logical space whose language only contains relatively unnatural predicates. Maybe some kind of reference magnetism excludes languages with extremely unnatural predicates, but no credible theory of reference can exclude predicates as unnatural as "grue", and that much unnaturalness will suffice. Those people with their gruesome predicates (and concepts, we may assume) might state theories and hypotheses about their world. They might find theories that are completely true and strong and apparently simple (apparently simple to them, that is). But those theories will not contain the laws of nature. There seems to be no way for those people to find out that they are wrong. So how can we be confident that we do not share their fate?
Third, suppose Lewis is right and there are worlds that differ from our world by a permutation of perfectly natural properties (over all of spacetime), e.g. worlds where one of the quark colours and one of the quark flavours have traded places. Then there arguably also exist worlds where only half of the quark colour instances have been permuted with only half of the flavour instances, and so on. That way, we arrive at worlds where things (occupants of the proton role, say) seem to be perfect duplicates even though they really are a gerrymandered mess. The inhabitants of those worlds will probably be misled into postulating fundamental properties for all occupants of the proton role, just like we postulate fundamental properties for our protons. But their predicates will not express perfectly natural properties; they will express complicated, unnatural disjunctions. How can we be confident that we are not like them? Aren't some of them actually in situations epistemically indistinguishable from our own situation?
I used to believe that Lewis should treat these possibilities as sceptical scenarios. Just as we assume that we do not hallucinate all the time and that our world obeys apparently simple regularities, so we assume that the apparently simple properties are objectively simple. Then Lewis's account of laws of nature would do just as badly as its alternatives at explaining how we can discover the laws.
(Barry Loewer somewhere pointed out that ignorance of naturalness is in a way much stranger than ignorance in ordinary sceptical scenarios: there, we are usually wrong about simple, local facts like whether we have hands or whether the sun will rise tomorrow. By contrast, our ignorance of naturalness would not go away even if we knew all truths in our language that express simple, local facts. (It might go away if we knew truths about which predicates express natural properties, or about the laws of nature, or about intrinsic similarities, or about counterfactuals.))
But I'm not sure any more if scepticism about naturalness is really coherent. Didn't we introduce the concept of naturalness in part by pointing out that, say, the round things are objectively more similar to one another than the round-or-red things? Does it then make sense to wonder whether the round things really comprise a more natural class than the round-or-red things, rather than the other way round?
We could have introduced some primitive distinction among properties, without specifying what it is. Then we could have used it to define "objective similarity", "intrinsic" etc. But all that would be pointless if the so-defined "objective similarity" had nothing to do with what we usually call "similarity", and "intrinsic" with our usual concept of intrinsicness.
If on the other hand "natural" is introduced as roughly corresponding to apparent naturalness, then we can be certain a priori that the natural properties roughly correspond to the apparently natural properties.
If that makes sense, then even though the sceptical scenarios mentioned above exist -- there are all those people getting the laws of nature wrong because they speak unnatural languages -- it is a priori that they are not actual, that we are not among those people.
Do you want to say that the people who speak unnatural languages cannot introduce the concept of naturalness?