Bayesian Confirmation Theory

Bayesian confirmation theory, also called Bayesianism, is named in honor of the Reverend Thomas Bayes (1701–1761), an English mathematician and Presbyterian minister who proved an important theorem of probability on which the theory relies. Bayesian confirmation theory has gained widespread popularity among contemporary philosophers and scientists. Though not universally accepted, it is arguably the most successful theory of confirmation to date.

Bayesianism presupposes that rational (reasonable) degrees of belief should conform to the five mathematical rules of probability listed on the previous page. Sets of beliefs that follow those rules are said to be probabilistically coherent. In other words, in order to be probabilistically coherent (see the sketch after this list):

  1. Your credence in a logical contradiction should be 0.
  2. Your credence in a tautology should be 1.
  3. Your credence in a disjunction should not be less than your credence in either disjunct.
  4. Your credence in a conjunction should not be greater than your credence in either conjunct.
  5. Your credences in any set of mutually exclusive and exhaustive propositions should add up to 1.
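
These constraints can be checked mechanically once credences are modeled as probabilities over a set of possibilities. The Python sketch below is only an illustration: the possible worlds, the sample propositions, and the credence function are stipulations of mine, not part of the theory. It represents each proposition as the set of worlds in which it is true and verifies the five rules for one sample credence assignment.

```python
# A minimal sketch (an illustration, not part of the theory's statement):
# propositions are modeled as sets of "possible worlds", and a credence
# function is induced by a probability assignment over those worlds.

from itertools import product

# Four equally likely worlds, e.g. the outcomes of two fair coin flips.
worlds = list(product(["H", "T"], repeat=2))
prob_of_world = {w: 1 / len(worlds) for w in worlds}

def credence(proposition):
    """Credence in a proposition, i.e. the set of worlds where it is true."""
    return sum(prob_of_world[w] for w in proposition)

contradiction = set()                                 # true in no world
tautology = set(worlds)                               # true in every world
first_heads = {w for w in worlds if w[0] == "H"}      # sample proposition
second_heads = {w for w in worlds if w[1] == "H"}     # sample proposition

# Rule 1: a contradiction gets credence 0.
assert credence(contradiction) == 0
# Rule 2: a tautology gets credence 1.
assert credence(tautology) == 1
# Rule 3: a disjunction is at least as probable as each disjunct.
assert credence(first_heads | second_heads) >= credence(first_heads)
assert credence(first_heads | second_heads) >= credence(second_heads)
# Rule 4: a conjunction is at most as probable as each conjunct.
assert credence(first_heads & second_heads) <= credence(first_heads)
assert credence(first_heads & second_heads) <= credence(second_heads)
# Rule 5: mutually exclusive, exhaustive propositions sum to 1.
partition = [{w} for w in worlds]
assert abs(sum(credence(p) for p in partition) - 1) < 1e-9
```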

Moreover, according to Bayesianism, there are probabilistic rules for how one’s beliefs can rationally change over time. To represent changing beliefs, we’ll use subscripts to indicate different times: pr1(H) represents your credence in hypothesis H at time 1, just prior to discovering some new evidence E; and pr2(H) represents your credence in H at time 2, just after learning E. These two probabilities are called your prior credence and your posterior credence in H, respectively, relative to evidence E.

Conditionalization

Bayesianism endorses the following rule, called the conditionalization rule (or the simple principle of conditionalization), which specifies how your credence in H should change when you learn E. The conditionalization rule says that upon learning E, your unconditional credence in H should be updated to match your prior conditional credence in H given E:

pr2(H) = pr1(H|E)

The right side of that equation represents your conditional credence in H at time 1, assuming the truth of some possible evidence E which you had not yet discovered at that time. The left side represents your credence in H at time 2, after learning that E is in fact true.
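
To see the rule in action on an entire credence function, the sketch below (a construction of my own, using the standard ratio definition of conditional credence, pr1(H|E) = pr1(H and E) / pr1(E)) updates a prior by discarding the possibilities incompatible with E and renormalizing what remains; the result is that pr2(H) equals pr1(H|E) for every hypothesis H.

```python
# A minimal sketch of conditionalization (my own illustration): the posterior
# credence function is the prior restricted to the worlds compatible with the
# evidence and renormalized. Assumes the prior gives the evidence nonzero credence.

def conditionalize(prior, evidence):
    """Return the posterior credence function after learning `evidence`.

    `prior` maps each possible world to a prior credence; `evidence` is the
    set of worlds compatible with what was learned.
    """
    prior_of_evidence = sum(p for world, p in prior.items() if world in evidence)
    return {
        world: (p / prior_of_evidence if world in evidence else 0.0)
        for world, p in prior.items()
    }

def credence(pr, hypothesis):
    """Credence that `pr` assigns to a hypothesis (a set of worlds)."""
    return sum(p for world, p in pr.items() if world in hypothesis)

# Tiny example: a fair six-sided die; the evidence is that the result is even.
pr1 = {n: 1 / 6 for n in range(1, 7)}
evidence_even = {2, 4, 6}
hypothesis_six = {6}

pr2 = conditionalize(pr1, evidence_even)
print(credence(pr2, hypothesis_six))                    # 1/3, i.e. pr1(six | even)
print(credence(pr1, hypothesis_six & evidence_even)
      / credence(pr1, evidence_even))                   # the same value, via the ratio
```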

For example, suppose you know that a playing card has been randomly chosen from a standard deck of 52 cards, but at time 1 you have no other information about it. Consider the hypothesis that the selected card is the queen of hearts. Initially, your credence in that hypothesis is pr1(Q) = 1/52, and your conditional credence assuming that it’s a face card is pr1(Q|F) = 1/12. (At this initial time, you have not yet learned whether it is a face card. Your conditional credence reflects how likely you think it is to be the queen of hearts assuming that it turns out to be a face card.)

Then, at time 2, you gain some new evidence: you learn that the card is, in fact, a face card. In light of this new evidence, you should change your degree of belief in hypothesis Q. Now that you have learned F, your actual (unconditional) credence that the card is the queen of hearts should increase from 1/52 to 1/12. So, your posterior credence is pr2(Q) = 1/12, which is the same as your prior conditional credence pr1(Q|F).
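
The arithmetic of this example can be reproduced exactly. The sketch below only re-derives the numbers given above; the way the deck and the propositions Q and F are represented is a stipulation for the sketch, not something drawn from the text.

```python
# A sketch of the card example with exact fractions (an illustration).
# Q is "the card is the queen of hearts"; F is "the card is a face card".

from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))                       # 52 equally likely cards

Q = {("Q", "hearts")}                                    # 1 card
F = {(r, s) for (r, s) in deck if r in {"J", "Q", "K"}}  # 12 face cards

def pr1(proposition):
    """Prior credence: the proportion of the deck where the proposition is true."""
    return Fraction(len(proposition), len(deck))

def pr1_given(hypothesis, evidence):
    """Prior conditional credence pr1(hypothesis | evidence)."""
    return pr1(hypothesis & evidence) / pr1(evidence)

print(pr1(Q))            # 1/52, the prior credence in Q
print(pr1_given(Q, F))   # 1/12, the prior conditional credence in Q given F

# Conditionalization: after learning F, the posterior credence pr2(Q) should
# equal the prior conditional credence pr1(Q | F).
pr2_Q = pr1_given(Q, F)
print(pr2_Q)             # 1/12
```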

The process of updating one’s beliefs in this way is called conditionalization. So, Bayesianism says that you should change your beliefs over time by conditionalizing on any new evidence you acquire. When you conditionalize on new evidence, your credence in any hypothesis may increase, decrease, or stay the same, depending on your prior conditional credences. If conditionalization increases your credence, the evidence is said to confirm the hypothesis. (The term ‘confirm’ is used even if the resulting credence is still quite low. ‘Confirm’ is not a synonym of ‘prove.’) Conversely, if conditionalization decreases your credence in H, the evidence is said to disconfirm H.
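
Expressed in terms of prior and posterior credences, the confirmation verdict is simply a comparison of two numbers. A minimal sketch, using the figures from the card example:

```python
# A minimal sketch (my own wording of the comparison, not the text's):
# evidence confirms a hypothesis when conditionalizing raises its credence,
# disconfirms it when the credence drops, and is neutral when it stays the same.

from fractions import Fraction

def verdict(prior, posterior):
    """Classify evidence by comparing the posterior credence to the prior."""
    if posterior > prior:
        return "confirms"
    if posterior < prior:
        return "disconfirms"
    return "neutral"

# Learning F raised the credence in Q from 1/52 to 1/12, so F confirms Q,
# even though 1/12 is still a fairly low credence.
print(verdict(Fraction(1, 52), Fraction(1, 12)))   # confirms
```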