When experts disagree on probabilities
A coin is to be tossed. Expert A tells you that it will land heads with probability 0.9; expert B says the probability is 0.1. What should you make of that?
Answer: if you trust expert A to degree a and expert B to degree b and have no other relevant information, your new credence in heads should be a*0.9 + b*0.1 (with a+b=1, since there is nothing else to go on). So if you give equal trust to both of them, your credence in heads should be 0.5. You should be neither confident that the coin will land heads, nor that it will land tails. -- Obviously, you shouldn't take the objective chance of heads to be 0.5, contradicting both experts. Your credence of 0.5 is compatible with being certain that the chance is either 0.1 or 0.9. Credences are not opinions about objective chances.
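The weighted-sum rule is simple enough to sketch directly (the function name and signature below are mine, not the post's):

```python
def weighted_credence(reports, weights):
    """Combine expert probability reports by trust-weighted averaging.

    reports: the probabilities the experts announce
    weights: your degrees of trust in them, assumed to sum to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "trust weights should sum to 1"
    return sum(p * w for p, w in zip(reports, weights))

# Expert A says 0.9, expert B says 0.1; you trust them equally.
print(weighted_credence([0.9, 0.1], [0.5, 0.5]))  # 0.5
```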
What if the two experts didn't mean objective chance, but subjective probability? That is, what if you learned that expert A is pretty confident that the coin will land heads, and expert B that it will land tails? Your response should be the same. If you trust them equally, your credence should be 0.5.
What if the two experts weren't talking about their credence, given their own evidence and priors, but about what your credence should be, given your evidence and your priors?
If you are ideally rational and know that you are, then you should dismiss their claims. For suppose that beforehand, you assigned to heads credence x, taking into account all your evidence. If some alleged expert now tells you that your credence, given that evidence, ought to be some other value, you know for certain that they are wrong. You should stick to whatever your old credence was. (I assume that the expert's claim about what is supported by your evidence doesn't affect the coin toss by your lights.) Notice that when dismissing the experts' claims, you are still applying the "weighted sum" rule, with a = b = 0 and the remaining weight on your own prior credence x.
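One way to spell out the dismissal case is to read the weighted-sum rule as keeping any residual weight 1 - a - b on one's own prior credence x. This formulation is mine, a sketch rather than anything the post commits to:

```python
def combined_credence(x, a, b, report_a=0.9, report_b=0.1):
    """Weighted sum of two expert reports, with the residual
    weight 1 - a - b staying on your own prior credence x."""
    return a * report_a + b * report_b + (1 - a - b) * x

# An ideal agent who knows it dismisses both experts (a = b = 0)
# and keeps whatever credence x the evidence already supported.
print(combined_credence(x=0.7, a=0.0, b=0.0))  # 0.7
```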
What if you're not an ideal agent, and know it? Then you can't rule out that you may have misinterpreted your own evidence, or otherwise leaped to an irrational conclusion. One expert says your evidence strongly supports heads, the other says it supports tails, but you can't tell which. Then, too, you should apply the "weighted sum" rule: you should neither be very confident in heads nor in tails. Expert A's claim is evidence that you have strong evidence for heads, and expert B's claim is evidence that you have strong evidence for tails. Since evidence for strong evidence for heads is just evidence for heads, you end up with some evidence for heads, and some for tails, and you should balance the two by their strength. Of course, if you believe that one of the experts is right, then you know that you should have a different credence, 0.9 or 0.1; you would have this other credence if you were ideal. But since you're not ideal, you should at least properly respond to your evidence this time, rather than make another irrational leap of credence.
So the weighted sum rule seems correct in all cases.
Your averaging rule has a peculiar consequence. Suppose expert A thinks the flips of a coin are independent, each landing heads with probability .9, and expert B thinks they are independent, each landing heads with probability .1. So they agree that the flips are independent. The coin is to be flipped twice. A assigns P(h1&h2)=.81 and B assigns P(h1&h2)=.01. You trust them equally, so you assign P(h1)=.5, P(h2)=.5, and P(h1&h2)=.41. That is, although both experts think the flips are independent, if you follow the averaging rule you end up disagreeing with them: independence would require P(h1&h2)=.5*.5=.25, not .41. Lehrer and Wagner suggested a more complicated averaging rule years ago with the same unwelcome consequence.
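The arithmetic of the counterexample can be checked directly (variable names are mine; the equal trust weights of .5 are as stated above):

```python
# Each expert treats the two flips as independent; the averaged
# credences are not.
p_a, p_b = 0.9, 0.1   # each expert's per-flip probability of heads

p_h1 = 0.5 * p_a + 0.5 * p_b            # marginal for flip 1: 0.5
p_h2 = 0.5 * p_a + 0.5 * p_b            # marginal for flip 2: 0.5
p_h1h2 = 0.5 * p_a**2 + 0.5 * p_b**2    # averaged joint: 0.41

print(p_h1h2)        # 0.41, up to rounding
print(p_h1 * p_h2)   # 0.25, what independence would require
```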