A desire that thwarts decision theory
Suppose we want our decision theory not to impose strong constraints on people's ultimate desires. You may value personal wealth, or you may value being benevolent and wise. You may value being practically rational: that is, you may value maximizing expected utility. Or you may value not maximizing expected utility.
This last possibility causes trouble.
If not maximizing expected utility is your only basic desire, and you have perfect and certain information about your desires, then arguably (although the argument isn't trivial) every choice in every decision situation you can face has equal expected utility; so you are bound to maximize expected utility no matter what. Your desire can't be fulfilled.
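Here is one way the argument might go, on a toy rendering of the desire (the rendering is mine, and other renderings may behave differently): say your utility is 1 at worlds where you don't maximize expected utility and 0 at worlds where you do, and there is no further uncertainty; write $EU(\cdot)$ for expected utility. Suppose, for reductio, that some available option $X$ fails to maximize expected utility, and let $Y$ be an option that does maximize (one exists if there are finitely many options, each with a well-defined expected utility). Choosing $X$ then guarantees that your desire is fulfilled, while choosing $Y$ guarantees that it isn't, so

$$EU(X) = 1 > 0 = EU(Y),$$

contradicting the assumption that $Y$ maximizes expected utility. So every option maximizes expected utility, each with the same expected utility of 0.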
Let's look at the more realistic case where you also care about other things. Let's say you value not maximizing expected utility (in general, or perhaps only on a particular occasion), but you also like apples. To simplify the following argument, let's assume your desire not to maximize expected utility is stronger than, and independent of, your desire to have an apple.
Now, facing a choice between an apple and a banana, what should you do?
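To make the case analysis easier to follow, here is a rough formalization of the assumptions above (the notation and the particular numbers are mine; only their ordering matters). Write $M_A$ for the claim that choosing the apple maximizes expected utility, $M_B$ for the corresponding claim about the banana, and $EU(\cdot)$ for expected utility. Using the independence assumption, treat the utility of each choice as the sum of an apple component and a maximizing component:

$$EU(\text{apple}) = a - m\,[M_A], \qquad EU(\text{banana}) = -m\,[M_B],$$

where $a > 0$ is the value of getting an apple, $m > a$ reflects that the desire not to maximize expected utility is stronger, and $[\,\cdot\,]$ is 1 if the bracketed claim is true and 0 otherwise. (I encode the disvalue of maximizing as a penalty rather than the value of not maximizing as a bonus; only the differences matter. I also pretend there is no further uncertainty, so that expected utility simply adds up these components.)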
Suppose that choosing the apple uniquely maximizes expected utility. Then choosing the apple has a good feature and a bad feature, and the bad feature weighs more heavily. Choosing the banana has neither the good feature nor the bad feature. So choosing the banana is better: it, not the apple, maximizes expected utility. Contradiction.
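In the toy formalization: if $M_A$ and not $M_B$, then

$$EU(\text{apple}) = a - m < 0 = EU(\text{banana}),$$

since $m > a$; so the banana, not the apple, maximizes expected utility.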
Suppose choosing the banana uniquely maximizes expected utility. Then choosing the apple has two good features and choosing the banana has none. So choosing the apple is better. Contradiction.
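In the toy formalization: if $M_B$ and not $M_A$, then

$$EU(\text{apple}) = a > -m = EU(\text{banana}),$$

so the apple, not the banana, maximizes expected utility.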
Suppose the two options both maximize expected utility. Then both options have a bad feature, and one of them has a good feature. By assumption, the good feature is independent of ("separable from") the bad feature. It follows that the option with the good feature is better. Contradiction.
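In the toy formalization: if $M_A$ and $M_B$, then

$$EU(\text{apple}) = a - m > -m = EU(\text{banana}),$$

so the banana doesn't maximize expected utility after all.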
It seems that at least one of your options can't have a well-defined expected utility.
(Most of the assumptions I've made could be weakened, I think: we don't need full separability; your desire not to maximize expected utility doesn't need to be stronger than your other desires; you don't need to be certain of your desires.)