Overlapping acts
I'm currently teaching a course on decision theory. Today we discussed chapter 2 of Jim Joyce's Foundations of Causal Decision Theory, which is excellent. But there's one part I don't really get.
Joyce mentions that Savage identifies acts with functions from states to outcomes, and that Jeffrey once suggested representing such functions as conjunctions of material conditionals: for example, if an act maps S1 to O1 and S2 to O2, the corresponding proposition would be (S1 → O1) & (S2 → O2). According to Joyce, this conception of acts "cannot be correct" (p.62). That's the part I don't really get.
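Jeffrey's proposed representation can be made concrete with a toy encoding, where the state and outcome labels and the act mapping are all illustrative placeholders:

```python
# Toy encoding of a Savage-style act as a function from states to outcomes.
# A "world" settles which state obtains and which outcome occurs.
act = {"S1": "O1", "S2": "O2"}  # the act mapping S1 to O1 and S2 to O2

def jeffrey_proposition(act, world):
    """Truth value of (S1 -> O1) & (S2 -> O2) at a world, reading '->' materially."""
    state, outcome = world
    # each conjunct (Si -> Oi) is vacuously true at worlds where Si doesn't obtain
    return all(outcome == o for s, o in act.items() if s == state)

print(jeffrey_proposition(act, ("S1", "O1")))  # True
print(jeffrey_proposition(act, ("S1", "O2")))  # False
```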
Philosophical models and ordinary language
A lot of what I do in philosophy is develop models: models of rational choice, of belief update, of semantics, of communication, etc. Such models are supposed to shed light on real-world phenomena, but the connection between model and reality is not completely straightforward.
For example, consider decision theory as a descriptive model of real people's choices. It may seem straightforward what this model predicts and therefore how it can be tested: it predicts that people always maximize expected utility. But what are the probabilities and utilities that define expected utility? It is no part of standard decision theory that an agent's probabilities and utilities conform in a certain way to their publicly stated goals and opinions. Assuming such a link is one way of connecting the decision-theoretic model with real agents and their choices, but it is not the only (and in my view not the most fruitful) way. A similar question arises for the agent's options. Decision theory simply assumes that a range of "acts" are available to the agent. But what should count as an act in a real-world situation: a type of overt behaviour, or a type of intention? And what makes an act available? Decision theory doesn't answer these questions.
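The formal core of the model is just expected-utility maximization over whatever acts, probabilities, and utilities one plugs in. A minimal sketch, with made-up numbers for a toy umbrella problem:

```python
def expected_utility(act, prob, util):
    """Sum of probability-weighted utilities of the act across states."""
    return sum(prob[s] * util[(act, s)] for s in prob)

# Hypothetical toy decision: take or skip an umbrella (numbers are invented).
prob = {"rain": 0.3, "dry": 0.7}
util = {("umbrella", "rain"): 5, ("umbrella", "dry"): 3,
        ("no umbrella", "rain"): 0, ("no umbrella", "dry"): 10}

best = max(["umbrella", "no umbrella"],
           key=lambda a: expected_utility(a, prob, util))
print(best)  # 'no umbrella': 0.3*0 + 0.7*10 = 7 beats 0.3*5 + 0.7*3 = 3.6
```

Notice that the model itself is silent on where `prob` and `util` come from, and on what makes the two acts in the list "available" — which is exactly the interpretive gap the paragraph describes.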
Beliefs, degrees of belief, and earthquakes
There has been a lively debate in recent years about the relationship between graded belief and ungraded belief. The debate presupposes something we should regard with suspicion: that there is such a thing as ungraded belief.
Compare earthquakes. I'm not an expert on earthquakes, but I know that they vary in strength. How exactly to measure an earthquake's strength is to some extent a matter of convention: we could have used a non-logarithmic scale; we could have counted duration as an aspect of strength, and so on. So when we say that an earthquake has magnitude 6.4, we characterize a central aspect of an earthquake's strength by locating it on a conventional scale.
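The logarithmic convention can be illustrated with a simplified Richter-style calculation (the reference amplitude here is a stand-in, not the seismological standard):

```python
import math

def magnitude(amplitude, reference=1.0):
    # Richter-style logarithmic scale: each whole magnitude step corresponds
    # to a tenfold increase in measured amplitude (simplified; the reference
    # amplitude is a placeholder)
    return math.log10(amplitude / reference)

print(magnitude(10 ** 6.4))                          # ≈ 6.4
print(magnitude(10 ** 6.4) - magnitude(10 ** 5.4))   # ≈ 1.0: ten times the amplitude
```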
Validity judgments
Philosophers (and linguists) often appeal to judgments about the validity of general principles or arguments. For example, they judge that if C entails D, then 'if A then C' entails 'if A then D'; that 'it is not the case that it will be that P' is equivalent to 'it will be the case that not P'; that the principles of S5 are valid for metaphysical modality; that 'there could have been some person x such that actually x sits and actually x doesn't sit' is an unsatisfiable contradiction; and so on. In my view, such judgments are almost worthless: they carry very little evidential weight.
Reduction and coordination
The following principles have something in common.
Conditional Coordination Principle.
A rational person's credence in a conditional A->B should equal her conditional credence in B given A; that is, Cr(A->B) = Cr(B/A) = Cr(A&B)/Cr(A).
Normative Coordination Principle.
On the supposition that A is what should be done, a rational agent should be motivated to do A; that is, very roughly, Des(A/Ought(A)) > 0.5.
Probability Coordination Principle.
On the supposition that the chance of A is x, a rational agent should assign credence x to A; that is, roughly, Cr(A/Ch(A)=x) = x.
Nomic Coordination Principle.
On the supposition that it is a law of nature that A, a rational agent should assign credence 1 to A; that is, Cr(A/L(A)) = 1.
All these principles claim that an agent's attitudes towards a certain kind of proposition rationally constrain their attitudes towards other propositions.
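The probability coordination principle, for instance, fixes unconditional credences via the law of total probability: if Cr(A/Ch(A)=x) = x for each x, then Cr(A) is the expectation of the chance of A. A toy computation with invented numbers:

```python
# Credence distribution over hypotheses about the chance of A (made-up values):
# maps each hypothesized chance x to Cr(Ch(A)=x).
cr_chance = {0.2: 0.5, 0.8: 0.5}

# By the coordination principle and total probability:
# Cr(A) = sum over x of Cr(Ch(A)=x) * x
cr_A = sum(p * x for x, p in cr_chance.items())
print(cr_A)  # 0.5
```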
Do laws explain regularities?
Humeans about laws of nature hold that the laws are nothing over and above the history of occurrent events in the world. Many anti-Humeans, by contrast, hold that the laws somehow "produce" or "govern" the occurrent events and thus must be metaphysically prior to those events. On this picture, the regularities we find in the world are explained by underlying facts about laws. A common argument against Humeanism is that Humeans can't account for the explanatory role of laws: if laws are just regularities, then (so the charge goes) laws can't really explain the regularities, since nothing can explain itself.
Confirmation and singular propositions
In discussions of the raven paradox, it is generally assumed that the (relevant) information gathered from an observation of a black raven can be regimented into a statement of the form Ra & Ba ('a is a raven and a is black'). This is in line with what a lot of "anti-individualist" or "externalist" philosophers say about the information we acquire through experience: when we see a black raven, they claim, what we learn is not a descriptive or general proposition to the effect that whatever object satisfies such-and-such conditions is a black raven, but rather a "singular" proposition about a particular object -- we learn that this very object is black and a raven. It seems to me that this singularist doctrine makes it hard to account for many aspects of confirmation.
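One way to see the standard picture concretely: in a toy two-object universe with a uniform prior over possible worlds, the singular evidence Ra & Ba confirms 'all ravens are black' by raising its probability. A brute-force Bayesian check (all modelling choices here are illustrative):

```python
from itertools import product

# A world assigns each of two objects, a and b, a pair (is_raven, is_black).
worlds = list(product(product([False, True], repeat=2), repeat=2))  # 16 worlds

def H(w):
    """Hypothesis: all ravens are black (no object is a non-black raven)."""
    return all(black or not raven for raven, black in w)

def E(w):
    """Singular evidence Ra & Ba: object a is a raven and black."""
    return w[0] == (True, True)

prior = sum(H(w) for w in worlds) / len(worlds)
posterior = sum(H(w) and E(w) for w in worlds) / sum(E(w) for w in worlds)
print(prior, posterior)  # 0.5625 0.75
```

Conditionalizing on the singular proposition raises the credence in the generalization from 9/16 to 3/4; the question raised above is whether this singularist regimentation can capture everything that observation teaches us.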
Small formulas with large models
Take the usual language of first-order logic from introductory textbooks, without identity and function symbols. The vast majority of sentences in this language are satisfied in models with very few individuals. You even have to make an effort to come up with a sentence that requires three or four individuals. The task is harder if you want to come up with a fairly short sentence. So I wonder: for any given number n, what is the shortest sentence that requires n individuals?
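For n = 3, two monadic predicates suffice even without identity: ∃x(Px ∧ ¬Qx) ∧ ∃x(¬Px ∧ Qx) ∧ ∃x(Px ∧ Qx) demands three individuals realizing three distinct P/Q combinations. A brute-force check that no smaller model works (the encoding is my own, just for illustration):

```python
from itertools import product

def satisfiable_in(n):
    """Can Ex(Px & ~Qx) & Ex(~Px & Qx) & Ex(Px & Qx) hold in an n-element model?"""
    # A model assigns each individual a (P, Q) pair of truth values.
    for model in product(product([False, True], repeat=2), repeat=n):
        # the sentence holds iff each of the three required P/Q combinations
        # is realized by some individual
        if all(c in model for c in [(True, False), (False, True), (True, True)]):
            return True
    return False

print([satisfiable_in(n) for n in (1, 2, 3)])  # [False, False, True]
```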
Belief update: shifting, pushing, and pulling
It is widely agreed that conditionalization is not an adequate norm for the dynamics of self-locating beliefs. There is no agreement on what the right norms should look like. Many hold that there are no dynamic norms on self-locating beliefs at all. On that view, an agent's self-locating beliefs at any time are determined on the basis of the agent's evidence at that time, irrespective of the agent's earlier self-locating beliefs. I want to talk about an alternative approach that assumes a non-trivial dynamics for self-locating beliefs. The rough idea is that as time goes by, a belief that it is Sunday should somehow turn into a belief that it is Monday.
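The shifting idea can be sketched as an operation on a credence distribution over days. How to handle the probability mass that runs off the end of the temporal sequence is one of the substantive questions such proposals must answer; the treatment below is just a placeholder:

```python
def shift(credence, days):
    """Move each day's probability one step forward along the ordered list `days`.

    Mass on the last day is simply dropped here -- a placeholder choice, not
    a claim about what the correct update rule does with it.
    """
    shifted = {d: 0.0 for d in days}
    for i, d in enumerate(days[:-1]):
        shifted[days[i + 1]] += credence.get(d, 0.0)
    return shifted

# On Sunday night the agent is fairly confident it is Sunday...
cr = {"Sunday": 0.8, "Monday": 0.2, "Tuesday": 0.0}
# ...and after one night the shifted credence favours Monday.
print(shift(cr, ["Sunday", "Monday", "Tuesday"]))
# {'Sunday': 0.0, 'Monday': 0.8, 'Tuesday': 0.2}
```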