Teaching for this semester is finally over.
Last week I gave a talk in Umeå at a workshop on singular thought. I was pleased
to be invited because I don't really understand singular thought. Giving
a talk, I hoped, would force me to have a closer look at the literature. But then I was
too busy teaching.
People seem to mean different things by 'singular thought'. The target of my
talk was the view that one can usefully understand the representational content
of beliefs and other intentional states as attributing properties to
individuals, without any intervening modes of presentation. This view is often
associated with a certain interpretation of attitude reports: whenever we can
truly say 'S believes (or knows, etc.) that A is F', where A is a name, the
subject S supposedly stands in an interesting relation of belief (or
knowledge, etc.) to a proposition that directly involves the bearer of that name.
I've spent some time this summer upgrading my tree prover. The
new version is here. What's new:
- support for some (normal) modal logics
- better detection of invalid formulas
- faster proof search
- nicer user interface and nicer trees
- cleaner source
I hope there aren't too many new bugs. Let me know if you find one!
Suppose we want our decision theory not to impose strong constraints on
people's ultimate desires. You may value personal wealth, or you may value being
benevolent and wise. You may value being practically rational: that is, you may
value maximizing expected utility. Or you may value not maximizing expected
utility.
This last possibility causes trouble.
If not maximizing expected utility is your only basic desire, and you
have perfect and certain information about your desires, then arguably (although
the argument isn't trivial) every choice in every decision situation you can
face has equal expected utility; so you are bound to maximize expected utility
no matter what. Your desire can't be fulfilled.
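Here is a sketch of the argument for the simple two-option case, in my own
reconstruction (the general case needs more care):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let the utility of a world track only the desire not to maximize
expected utility (EU):
\[
  u(w) =
  \begin{cases}
    1 & \text{if the act chosen in $w$ does not maximize EU,}\\
    0 & \text{otherwise.}
  \end{cases}
\]
Suppose for reductio that $EU(A) > EU(B)$. Then choosing $A$ maximizes
EU and choosing $B$ does not. Since you are certain of your desires,
you can tell which act maximizes, so every $A$-world gets utility $0$
and every $B$-world gets utility $1$:
\[
  EU(A) = 0, \qquad EU(B) = 1,
\]
contradicting $EU(A) > EU(B)$. The case $EU(B) > EU(A)$ is symmetric.
Hence $EU(A) = EU(B)$: whatever you do, you maximize expected utility.
\end{document}
```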
I'm generally happy with Causal Decision Theory. I think two-boxing is
clearly the right answer in Newcomb's problem, and I'm not impressed by any of
the alleged counterexamples to Causal Decision Theory that have been put
forward. But there's one thing I worry about: what exactly the theory
should say, and how it should be spelled out.
Suppose you face a choice between two acts A and B. Loosely speaking, to
evaluate these options, we need to check whether the A-worlds are on average
better than the B-worlds, where the "average" is weighted by your credence on
the subjunctive supposition that you make the relevant choice. Even more
loosely, we want to know how good the world would be if you were to choose A,
and how good it would be if you were to choose B. So we need to know what else
would be the case if you were to choose, say, A.
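One familiar way of making this precise is Gibbard and Harper's formulation in
terms of counterfactuals. I give it here only as an illustration, since which
formulation is right is exactly what's at issue:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\boxright}{\mathrel{\Box\mkern-6mu\rightarrow}} % "if ... were, would"
\begin{document}
\[
  U(A) \;=\; \sum_{w} Cr(A \boxright w)\, V(w),
\]
where $Cr(A \boxright w)$ is your credence that world $w$ would obtain
if you were to choose $A$, and $V(w)$ says how good $w$ is. Lewis's own
version instead partitions worlds by ``dependency hypotheses'' $K$ and
sets $U(A) = \sum_{K} Cr(K)\, V(A \wedge K)$.
\end{document}
```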
A common assumption in economics is that utilities are reducible to choice
dispositions. The story goes something like this. Suppose we know what an agent
would choose if she were asked to pick one from a range of goods. If the agent
is disposed to choose X, and Y is an available alternative, we say that the
agent prefers X over Y. One can show that if the agent's choice
dispositions satisfy certain formal constraints, then they are "representable"
by a utility function in the sense that whenever the agent prefers X over Y,
the function assigns greater value to X than to Y. This utility function is
assumed to be the agent's true utility function, telling us how much the agent
values the relevant goods.
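Here is a toy ordinal version of this reduction (my own sketch; the real
representation theorems, such as von Neumann-Morgenstern's, concern preferences
over lotteries and deliver cardinal utilities):

```python
from itertools import permutations

def represent(goods, prefers):
    """Given choice dispositions via prefers(x, y) -- True iff the agent
    is disposed to choose x when y is the available alternative -- return
    a utility function u with u[x] > u[y] whenever the agent prefers x
    to y. Assumes strict preferences; raises ValueError on intransitivity."""
    # The formal constraint checked here is transitivity: x over y and
    # y over z must give x over z, else no utility function can represent
    # the dispositions.
    for x, y, z in permutations(goods, 3):
        if prefers(x, y) and prefers(y, z) and not prefers(x, z):
            raise ValueError(f"not representable: {x} > {y} > {z} but not {x} > {z}")
    # For a transitive strict preference, counting the alternatives each
    # good is chosen over yields a representing utility function.
    return {x: sum(1 for y in goods if y != x and prefers(x, y)) for x in goods}

# Example: an agent disposed to pick the larger of two amounts of money.
u = represent([10, 5, 20], lambda x, y: x > y)
assert u[20] > u[10] > u[5]
```

Note that any order-preserving transformation of the returned function would
represent the same dispositions, which is one reason to doubt that the agent's
"true" utility function can simply be read off her choices.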
In my 2014 paper "Against Magnetism", I
argued that the meta-semantics Lewis defended in "Putnam's Paradox" and pp. 45-49
of "New Work" is (a) unattractive, (b) does not fit what Lewis wrote about
meta-semantics elsewhere, and (c) was never Lewis's considered view.
In a
paper forthcoming in the AJP, Frédérique Janssen-Lauret and Fraser MacBride
(henceforth, JL&M) disagree with my point (b), and present what they call
"decisive evidence" against (c). Here's my response. In short, I'm not
convinced.
There should be a website (or app) that helps with the following kinds of issues.
- I recently wrote a paper on ability modals in which I sketch some ideas for
how a certain linguistic phenomenon might be compositionally derived. I'm really
unsure about that part of the paper, because I'm not an expert in the relevant
areas of formal semantics. I'd like to get advice from an expert, but none of
my friends is one, and I don't want to bother people I don't know.
- I once wrote a paper on decision-theoretic methods in non-consequentialist
ethics. But I don't know much about ethics. I'd need someone to tell me how non-consequentialists typically think about decisions under uncertainty, who has already tried to sell decision-theoretic methods for that purpose,
and what key papers I need to read.
- When I submit papers to journals, I often get rejections pointing out
problems that are easy to fix. It would have been good if someone had pointed
out these problems to me before I submitted the paper.
- I think many of my drafts and papers are a little hard to understand, but
I'm not sure why. I'd like someone to give me feedback on which passages are
confusing, where a reader might get lost, etc.
Basically, I'd like to hire (different kinds of) referees to look over my drafts
and give me constructive feedback.
Last week, I gave a talk in Manchester at a
(very nice) workshop on "David Lewis and His Place in the History of Analytic
Philosophy". My talk was on "Lewis's empiricism". I've now written it up as a
paper, since it got too long for a blog post.
The paper is really about hyperintensional epistemology. The question is how we
can make sense of the kind of metaphysical enquiry Lewis was engaged in if we
accept his models of knowledge and belief, which leave no room for substantive
investigations into non-contingent matters.
I wrote this short
piece for a special issue of the Journal of Consciousness Studies on
Chalmers's "The Meta-Problem
of Consciousness" (2018). Much of my paper rehashes ideas from section 5 of
my "Imaginary
Foundations" paper, but here I try to present these ideas more simply and
directly, without the Bayesian background.
In this
2018 paper, J. Dmitri Gallow shows that it is difficult to combine
multiple deference principles. The argument is a little complicated,
but the basic idea is surprisingly simple.
Suppose A and B are two weather forecasters. Let r be the
proposition that it will rain tomorrow, let A=x be the proposition
that A assigns probability x to r; similarly for B=x. Here are two
deference principles you might like to follow:
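Presumably something like the following, in the usual expert-deference form
(my statement of them; Gallow's wording may differ):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \text{(Defer to A)}\quad & Cr(r \mid A{=}x) = x
      &&\text{for all $x$ with $Cr(A{=}x) > 0$;}\\
  \text{(Defer to B)}\quad & Cr(r \mid B{=}x) = x
      &&\text{for all $x$ with $Cr(B{=}x) > 0$.}
\end{align*}
\end{document}
```

Gallow's result, very roughly, is that no coherent credence function can
satisfy both principles unless the two forecasters are related to one another
in a quite special way.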