As we saw earlier in this chapter, Bayesianism offers at least two important advantages over previous theories of confirmation. First, it provides a way to quantify the degree of confirmation new evidence confers on a hypothesis: the Bayesian multiplier quantifies how your credence in a hypothesis should change when you encounter new evidence, and in the next chapter we’ll see how a related quantity, the Bayes factor, provides a measure of evidential strength. Second, it explains why successful predictions that are *surprising* or *unexpected* confirm a hypothesis more strongly than unsurprising ones do (see this page for the explanation). Interestingly, however, Bayesianism yields a few surprises of its own. In particular, it calls into question the central intuition of hypothetico-deductivism: that hypotheses are always confirmed by their successful predictions and disconfirmed by failed predictions. As it turns out, Bayesianism implies that the opposite can happen: sometimes, a hypothesis is confirmed by a *failed* prediction or disconfirmed by a *successful* prediction!

These surprising implications arise because evidence may be relevant to multiple hypotheses at once, as we saw on the previous page. Even if some hypothesis H_{1} predicts event E, the occurrence of E might nonetheless *disconfirm* H_{1} if there is some rival hypothesis H_{2} that predicts E with greater confidence. For example:

Suppose H_{1} predicts E with 80% confidence. That is, the conditional probability of E given H_{1} is 80%. And suppose there is a rival hypothesis H_{2}, which predicts E with 90% confidence:

pr(E|H_{1}) = .8

pr(E|H_{2}) = .9

When we learn that E has occurred, our credences should shift in favor of H_{2}, thereby disconfirming H_{1}. Although *both* hypotheses *predicted* E, only the hypothesis that predicted E more confidently is confirmed; the other hypothesis is disconfirmed. We can use Bayes bars to illustrate this result visually. Suppose your credence in each of the two hypotheses, prior to learning E, was 50%. Your prior Bayes bar looks like this:

prior Bayes bar:

| (H_{1}•E) 40% | (H_{1}•~E) 10% | (H_{2}•E) 45% | (H_{2}•~E) 5% |

The segments representing (H_{1}•~E) and (H_{2}•~E) are eliminated when you conditionalize on E. Only the segments where E is true remain:

| (H_{1}•E) 40% | (H_{2}•E) 45% |

After renormalizing (stretching the bar back to full length), your posterior credences are as follows:

posterior Bayes bar:

| (H_{1}•E) 47.1% | (H_{2}•E) 52.9% |

Your credence in H_{1} has *decreased* from 50% to about 47% when you learned E, even though H_{1} *predicted* E with 80% confidence. (The truncated bar has length 85% or .85, and H_{1} occupies 40% of the original bar, so the posterior probability of H_{1} is .4/.85, which is approximately .471 or 47.1%.) Thus, hypothesis H_{1} has been disconfirmed by its successful prediction!
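The conditionalization above can be sketched in a few lines of Python. The numbers are the segment sizes from the prior Bayes bar; conditionalizing on E amounts to discarding the ~E segments and renormalizing what remains.

```python
# Prior Bayes bar: each key is a conjunction, each value is its segment size.
prior = {"H1&E": 0.40, "H1&~E": 0.10, "H2&E": 0.45, "H2&~E": 0.05}

# Conditionalize on E: eliminate the segments where E is false.
surviving = {k: v for k, v in prior.items() if "~E" not in k}
total = sum(surviving.values())  # 0.85, the length of the truncated bar

# Renormalize: stretch the truncated bar back to full length.
posterior = {k: v / total for k, v in surviving.items()}
print(posterior["H1&E"])  # ≈ 0.471: H1 drops below 50% despite predicting E
print(posterior["H2&E"])  # ≈ 0.529
```

The same two steps (eliminate, renormalize) implement conditionalization for any partition of hypotheses.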

Conversely, a hypothesis can be *confirmed* by a *failed* prediction. That is, even if a hypothesis makes some event unlikely, the event might nonetheless provide evidence *supporting* the hypothesis in question! To see how, simply conditionalize on ~E instead of on E in the preceding example. H_{1} predicts that E probably will occur, so if E doesn’t occur, that will be a failed prediction. Nevertheless, the probability of H_{1} will increase, because ~E strongly disconfirms the rival hypothesis H_{2}. (Your posterior credences in H_{1} and H_{2} will be ⅔ and ⅓, respectively, or approximately 66.7% and 33.3%.) I’ll leave the Bayes bars as an exercise for the reader. Here’s a hint: start by eliminating the segments of the prior Bayes bar where E is true, and you’ll notice that the remaining (H_{1}•~E) segment is twice as long as the (H_{2}•~E) segment.
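The arithmetic behind that footnote can be checked with the same two-step recipe, this time keeping the ~E segments of the prior bar:

```python
# Same prior Bayes bar as before.
prior = {"H1&E": 0.40, "H1&~E": 0.10, "H2&E": 0.45, "H2&~E": 0.05}

# Conditionalize on ~E: now the segments where E is true are eliminated.
surviving = {k: v for k, v in prior.items() if "~E" in k}
total = sum(surviving.values())  # 0.15

posterior = {k: v / total for k, v in surviving.items()}
print(posterior["H1&~E"])  # 2/3 ≈ 0.667: H1 is confirmed by its failed prediction
print(posterior["H2&~E"])  # 1/3 ≈ 0.333
```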

Thus, Bayesianism reveals that evaluating the predictions of a single hypothesis or theory, in isolation, is the wrong way to think about confirmation. Even if a hypothesis makes some observation unlikely, it does not follow that the observation disconfirms the hypothesis. It might, in fact, support the hypothesis over its rivals. Conversely, even if a theory makes a successful prediction, it doesn’t follow that this success confirms the theory. It may even disconfirm the theory by supporting alternative theories.

Moreover, a hypothesis can be confirmed or disconfirmed by evidence, even if the hypothesis yields no specific predictions at all. In cases where some hypothesis H gives us no reason to expect some event E, nor any reason to think that E won’t occur, the occurrence (or non-occurrence) of E might still provide relevant evidence about H by confirming or disconfirming rival hypotheses. For example, returning to the police detective scenario discussed on the previous page, suppose you learn the following fact:

W: Shortly before his death, the businessman signed a will naming his niece sole heir to his estate.

The accident hypothesis gives you no reason to expect that the businessman would or wouldn’t have done such a thing. Your conditional credence in W, assuming the accident hypothesis, might be close to 50%. In other words, the accident hypothesis doesn’t predict W, nor does it predict ~W. Nonetheless, W seems to provide strong evidence against the accident hypothesis. It disconfirms the accident hypothesis by supporting the alternatives: perhaps the man planned to commit suicide and wanted to ensure that his niece would be well cared for, or perhaps someone—maybe the niece herself—murdered him so that she would inherit his money. For these reasons, your credences in the suicide hypothesis and the murder hypothesis might increase significantly upon learning W. Correspondingly, your credence in the accident hypothesis must decrease, even though the accident hypothesis itself made no predictions about the truth or falsity of W.
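A numerical sketch can make this concrete. The priors and likelihoods below are illustrative assumptions, not values given in the text: equal-ish credences with the accident hypothesis ahead, a likelihood of 0.5 for W given accident (it neither predicts nor rules out W), and higher likelihoods given suicide or murder.

```python
# Hypothetical numbers for the detective scenario (assumed for illustration).
priors = {"accident": 0.50, "suicide": 0.25, "murder": 0.25}
# pr(W | each hypothesis): accident is indifferent to W; the rivals make
# the new will fairly likely.
likelihoods = {"accident": 0.5, "suicide": 0.9, "murder": 0.9}

# Bayes' theorem: posterior ∝ prior × likelihood, then renormalize.
joint = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(joint.values())
posterior = {h: joint[h] / total for h in priors}
print(posterior["accident"])  # ≈ 0.357: down from 0.50, even though
                              # pr(W | accident) = 0.5
```

On these (assumed) numbers, the accident hypothesis loses ground precisely because its rivals predicted W more strongly, not because it predicted ~W.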

The foregoing examples illustrate how Bayesianism illuminates important aspects of confirmation that were previously obscure. Bayesianism also avoids the main pitfalls of Hempel’s theory and hypothetico-deductivism, as we saw earlier. Clearly, it is a better theory of confirmation than its predecessors. On the other hand, Bayesianism faces some thorny problems too, and it has its share of detractors. (For critical perspectives, see Clark Glymour, “Why I Am Not a Bayesian,” in his *Theory and Evidence* (Princeton: Princeton University Press, 1980); John Earman, *Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory* (Cambridge, MA: MIT Press, 1992); and Elliott Sober, “Bayesianism—its Scope and Limits,” in Richard Swinburne (ed.), *Bayes’s Theorem* (Oxford: Oxford University Press, 2002), 21–38.) In the remainder of this chapter, we’ll briefly consider three of the most serious challenges.