Practical irrationality or epistemic irrationality?
It is well known that humans don't conform to the model of rational choice theory, as standardly conceived in economics. For example, the minimum price at which people are willing to sell a good is often much higher than the maximum price at which they would previously have been willing to pay for it. According to rational choice theory, the two prices should coincide, since the outcome of selling the good is the same as that of not buying it in the first place. What we philosophers call 'decision theory' (the kind of theory you find in Jeffrey's Logic of Decision or Joyce's Foundations of Causal Decision Theory) makes no such prediction. It does not assume that the value of an act in a given state of the world is a simple function of the agent's wealth after carrying out the act. Among other things, the value of an act can depend on historical aspects of the relevant state. A state in which you are giving up a good is not at all the same as a state in which you aren't buying it in the first place, and decision theory does not tell you that you must assign equal value to the two results.
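To make the conflict explicit, here is a small sketch, setting wealth effects aside and treating the value of a final position as given by a utility function u over holdings, which is exactly the extra assumption rational choice theory makes. Let p be any price above your maximum buying price but below your minimum selling price (such a p exists whenever the two come apart). Refusing to sell the good you own at p says

  u(good, w) > u(no good, w + p),

while refusing to buy it at p, starting from the endowment (no good, w + p), says

  u(no good, w + p) > u(good, w),

and these cannot both hold. Decision theory escapes the contradiction because it does not require that the value of giving up a good you own equals the value of never acquiring it: the two acts realize different propositions, even if they leave you with the same wealth and the same possessions.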
Other failures of conformity to the rational choice model are not as easy to explain away. Consider the alleged phenomenon that people overweight small probabilities: people buy lottery tickets at unfavourable odds; they prefer a settlement in which they get $90,000 over a trial in which they have a 99% chance of getting $100,000; they pay a high price to avoid flying with an airline that has a slightly higher, but still minuscule, frequency of crashes. Many of these behaviours could be explained as entirely reasonable. Why shouldn't people value the elimination of risk, or enjoy the mere possibility of winning a lottery, even if the chances are low? Again, these values are not a function of the "outcomes" as understood in the narrow sense of rational choice theory, but they are easily represented in decision theory. And it is independently plausible that we do value the elimination of risk and enjoy mere possibilities of great events.
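To make the arithmetic in the settlement case explicit, and assuming for the sketch that utility is roughly linear in money over this range, the expected payoff of going to trial is

  0.99 × $100,000 + 0.01 × $0 = $99,000,

which exceeds the sure $90,000; so preferring the settlement looks like overweighting the 1% chance of walking away with nothing. But if the certainty itself contributes value, as just suggested, decision theory can accommodate the preference: the desirability of settling and bearing no risk may legitimately exceed the credence-weighted average of the desirabilities of winning and losing at trial.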
But I don't think this covers all the cases. People who buy lottery tickets often really seem to overestimate the probability of winning. People who prefer the more expensive airline often display other signs that they really do attach a significant degree of belief to dying in a plane crash: they're not just paying extra for peace of mind; they pay extra to reduce the risk. But of course in other contexts they would never pay a similar surcharge to avoid a similarly minuscule risk. So it seems right that our behaviour doesn't always reflect what we know about the chances.
But is this a practical failure of rationality, a mismatch with the norms of decision theory? I don't think so. Here another difference between decision theory and rational choice theory becomes important. In rational choice theory, expected utility is usually computed relative to objective probabilities. For example, if you consider a bet on a fair coin toss which would give you $1 on heads and $0 on tails, the expected payoff is calculated as 1/2 times $1 plus 1/2 times $0, because both outcomes have objective probability 1/2. In decision theory, the probabilities that enter into the calculation of expected utilities are degrees of belief. (This is why decision theory, unlike rational choice theory, doesn't distinguish "decisions under risk" from "decisions under uncertainty".) So we must ask: when people irrationally pay a lot to reduce the risk of dying in a plane crash, do they fail to act in accordance with their degrees of belief and desires, or is there rather something wrong with those beliefs and desires? I think it is very plausible that the main problem lies in the beliefs and desires, not in the way they are turned into actions. As I said above, those who overweight the probability of plane crashes attach an unreasonably high degree of belief to these events. They do so even when they are informed about the objective probabilities. Their irrationality is epistemic: it is a failure to conform to the Principal Principle. It is not a failure to act in accordance with their (misguided) beliefs and desires.
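To spell out the contrast with the coin example in symbols, where u is whatever utility function the agent has and Cr is her credence function:

  rational choice theory:  EU(bet) = 1/2 × u($1) + 1/2 × u($0)
  decision theory:         EU(bet) = Cr(heads) × u($1) + Cr(tails) × u($0).

The Principal Principle is what links the two: roughly, an agent who knows that the chance of heads is 1/2 should set Cr(heads) = 1/2. Someone who knows the chance but has Cr(heads) = 0.7 violates this epistemic norm, yet may still maximize expected utility relative to her credences; the failure lies in the beliefs, not in the step from beliefs to action.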
There is a lot of evidence that humans are generally quite bad at processing information about probabilities. We have no clear grasp of the difference between a chance of 0.6 and a chance of 0.7, or between 0.01 and 0.0001. We find it difficult and unnatural to follow Bayes's Theorem. We ignore base rates and stick with the wrong door in the Monty Hall problem. At root, these are epistemic failures, although they often lead to inadequate choices.
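A standard illustration of base-rate neglect, with made-up numbers: suppose a condition affects 1 in 1000 people, and a test for it has a 5% false-positive rate and no false negatives. By Bayes's Theorem, the probability of the condition given a positive test is

  (0.001 × 1) / (0.001 × 1 + 0.999 × 0.05) ≈ 0.02,

yet the intuitive answer tends to be somewhere near 95%, as if the base rate made no difference. Note that the mistake occurs before any decision is made; it is a failure of belief formation, which then feeds into inadequate choices.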
Perhaps the same can even be said about framing effects. People often make different choices depending on how one and the same decision problem is described. They are more likely to choose an option with a "90% chance of success" than the very same option described as having a "10% chance of failure"; they are more willing to put 1000 lives at risk when the sure alternative is described as "500 will die" rather than "500 will survive". Clearly something is going wrong. But is it a failure in decision making, so that we should make room for it in a descriptively adequate theory of decisions? Perhaps it is again better understood as a failure in processing the relevant information.
This time, the issue is more subtle, because the crucial attitudes are desires, not beliefs. When given the positive description of a scenario (success, lives saved), you arguably assign it higher desirability than when given the negative description. One might say that the computation of desirability is an aspect of practical rationality, since practical rationality ultimately consists in choosing the most desirable option. On the other hand, isn't the reason you assign an overly high desirability to the scenario that you didn't fully understand what it involves? It's not as if you were perfectly aware of the relevant proposition but made a mistake when calculating its desirability. The presentation-dependence rather shows that you weren't clear about which proposition was at issue in the first place. Arguably, the bug lies in the "module" that processes the content of the descriptions, not in the module that turns your attitudes into actions.
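In Jeffrey's framework, for instance, the desirability of a proposition A is, roughly, a credence-weighted average of the desirabilities of the ways A could be true:

  V(A) = Σ_w Cr(w | A) × V(w).

If "90% chance of success" and "10% chance of failure" describe the same proposition A, then assigning them different values is not naturally modelled as a slip in computing this sum. It looks more like a failure to see that the same A is at issue under both descriptions, which again points to the module that interprets the descriptions rather than the one that acts on the resulting attitudes.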
The distinction between practical and epistemic rationality is not always clear. But it is a useful distinction. When presented with a case in which people allegedly violate the norms of decision theory, it is often worth asking whether the problem really lies in the output of our attitudes, the way they are turned into choices, or rather in their input, the processing by which those attitudes are formed.