Indeterminacy of representation and representation of indeterminacy
Often the factors that determine a phenomenon don't determine it uniquely. Sometimes this changes the phenomenon itself.
Take language. Plausibly, the meanings of our words are somehow determined by patterns of use, but these patterns aren't specific enough to fix, say, a unique extension or intension for our language. There is a range of precise meaning assignments all of which fit our use equally well. One might leave it at that and say that it is indeterminate which of these precise languages we speak. But this misses something. It misses the fact that we don't speak a precise language. For example, in a precise language, "Mount Everest has sharp boundaries" would be true, but in English it is false. The logic of a precise language would (arguably) be classical, but the logic of English is not.
Instead of saying that the semantic value of a word is indeterminate between V1 and V2, it is better to say that the semantic value is something like the set { V1, V2 }. We can then have compositional rules, norms of assertion and other pragmatic rules that operate directly on these sets. After all, it is common knowledge (except among some British philosophers) that our linguistic conventions don't settle precise extensions; so it makes sense to have linguistic conventions about the use of words with indeterminate meanings.
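To make the proposal concrete, here is a minimal sketch in which the semantic value of a vague predicate is a set of precise candidate extensions ("sharpenings"), and evaluation operates directly on that set. The toy domain, heights, and cutoffs are all hypothetical, chosen only for illustration.

```python
# Illustrative sketch: the semantic value of a vague predicate is a SET
# of precise candidate extensions, one per admissible sharpening.
# The domain and numbers are hypothetical.

heights = {"Ann": 178, "Bob": 155}

# Candidate precise extensions for "tall": each sharpening draws the
# cutoff at a different (equally admissible) height.
tall_sharpenings = [
    {person for person, h in heights.items() if h >= cutoff}
    for cutoff in (160, 170, 180)
]

def truth_values(name, sharpenings):
    """Truth values of '<name> is tall' across all sharpenings."""
    return {name in extension for extension in sharpenings}

# Ann (178cm) counts as tall on two sharpenings but not the third,
# so 'Ann is tall' has no determinate truth value:
print(truth_values("Ann", tall_sharpenings))  # {True, False}

# Bob (155cm) falls below every cutoff, so 'Bob is tall' is
# determinately false:
print(truth_values("Bob", tall_sharpenings))  # {False}
```

The point is that rules (here, evaluation of a predication) can take the whole set as input, rather than first picking one precise extension and pretending the others don't exist.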
Or take credence. Whatever grounds one's degrees of belief plausibly doesn't fix a unique probability measure for people like us. A whole range of probability measures fit equally well. We might conclude that it is simply indeterminate which of these measures represents our beliefs. But here, too, the indeterminacy changes the phenomenon. If beliefs are indeterminate, every determinate measure misrepresents important aspects of our belief state. It is better to represent the belief state by a set of probability functions. Among other things, this might help explain why people are sometimes hesitant and inconsistent in their decisions.
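To see how a set of probability functions can explain decision hesitancy, consider this sketch with made-up numbers: a bet whose expected value is positive on some credences in the set and negative on others, so the set as a whole neither endorses nor rejects it.

```python
# Sketch (hypothetical numbers): a belief state as a set of probability
# measures, and why this can leave a decision unsettled.

# Candidate credences in rain, all fitting the evidence equally well:
credences_in_rain = [0.3, 0.4, 0.5, 0.6, 0.7]

def expected_value(p_rain):
    """Expected value of a bet that pays +1 if rain, -1 otherwise."""
    return p_rain * 1 + (1 - p_rain) * (-1)

values = [expected_value(p) for p in credences_in_rain]

# The bet looks good on some members of the set and bad on others,
# so the set as a whole issues no verdict:
print(min(values), max(values))
```

On a single precise credence the bet would be straightforwardly acceptable or not; on the set, some members recommend it and others advise against it, which fits the observed hesitancy.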
In the case of language, I think it is actually advantageous to speak an imprecise language. A language in which every term in every context had a precise extension would be hard to learn and inconvenient to use.
Some hold that imprecise credences are also advantageous, especially in cases where the evidence is sparse or ambiguous. Any precise credence would then be somewhat arbitrary. Maybe. But what's wrong with arbitrariness? By analogy, I think the moral and rational norms on desires (utilities) are even more unconstrained than the norms for what to believe in response to evidence; but it doesn't follow that people should have radically "imprecise utilities". In fact, it would be quite bad to have radically imprecise utilities: it would make it hard to act in a sensible, consistent manner. The truth is that you're allowed to have any arbitrary utility function that meets the moral and rational constraints.
Anyway.
In cases where indeterminacy changes a phenomenon, one should resist the temptation to impose determinacy. When supervaluationists about meaning define truth as truth-on-all-sharpenings, they try to dress our unsharp language in sharp clothing. Similarly, it is tempting to apply linear averaging to a set of credence functions, thereby turning it into a precise function. In either case, I think that's a mistake. The supervaluationist truth-conditions don't fit our practice; nor do the averaged credences. For example, they make nonsense of the precautions we might have taken to act on imprecise credences, and they evolve in mysterious ways if the underlying set of probabilities updates by conditionalization.
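The last point can be demonstrated: linear averaging does not commute with conditionalization, so the averaged credence moves in ways that no rational update rule for a single probability function would predict. A toy example, with hypothetical numbers over a three-world space:

```python
# Sketch (toy numbers): averaging then conditionalizing differs from
# conditionalizing then averaging -- so the averaged credence "evolves
# in mysterious ways" when the underlying set updates.

def condition(p, event):
    """Conditionalize a distribution (dict: world -> prob) on an event (set)."""
    total = sum(p[w] for w in event)
    return {w: p[w] / total for w in event}

def average(ps):
    """Pointwise linear average of a list of distributions."""
    worlds = ps[0].keys()
    return {w: sum(p[w] for p in ps) / len(ps) for w in worlds}

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}
E = {"a", "b"}  # the evidence: world c is ruled out

avg_then_cond = condition(average([P, Q]), E)
cond_then_avg = average([condition(P, E), condition(Q, E)])

print(avg_then_cond["a"])  # 0.35 / 0.65, about 0.538
print(cond_then_avg["a"])  # (0.625 + 0.4) / 2 = 0.5125 -- not the same
```

If the set of credence functions is what updates by conditionalization, the average of the set after the update is not the conditionalization of the earlier average; someone tracking only the average would see it jump around without any corresponding evidence.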