Though it is clearly superior to previous theories of confirmation, Bayesian confirmation theory has some shortcomings as well. These problems don’t necessarily diminish the value of Bayesianism as a helpful tool for philosophical reasoning, but it is important to be aware of the shortcomings in order to avoid applying Bayesian methods in contexts where they don’t really work. In what follows, we’ll consider three of the most significant problems: uncertain evidence, old evidence, and the problem of the priors.

The conditionalization rule, or simple principle of conditionalization, is a little *too* simple. It assumes that “learning” E means becoming 100% certain that E is true. That rarely, if ever, happens: we don’t become absolutely, positively certain of the things we learn. Moreover, various technical issues with Bayesianism arise when one’s credence in a contingent proposition is exactly 1. In particular, if your credence in some contingent proposition E is 1, the conditionalization rule and the rules of probability together imply that you can never change your mind about E: your credence in E must remain immutably stuck at 1 forevermore!
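To see why your credence in E gets stuck, note that if pr_{1}(E) = 1, then for any proposition D with pr_{1}(D) > 0, pr_{1}(E & D) = pr_{1}(D). So conditionalizing on any further evidence D yields

pr_{2}(E) = pr_{1}(E|D) = pr_{1}(E & D) / pr_{1}(D) = pr_{1}(D) / pr_{1}(D) = 1.

No possible evidence can move your credence in E away from 1.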

These problems can be solved by replacing the conditionalization rule with a more general rule describing what to do when your credence in E changes without going all the way to 1. The generalized rule, formulated by Richard Jeffrey, is known as Jeffrey conditionalization. It says that when your credence in evidence E changes in any way, your unconditional credence in hypothesis H should be updated as follows:

pr_{2}(H) = pr_{1}(H|E) × pr_{2}(E) + pr_{1}(H|~E) × pr_{2}(~E)

The above formulation is actually a special case of Jeffrey's rule, which is more general still. In its fully general form, the rule says that when an experience or observation directly changes your credences over some partition {E_{i}} from pr_{1}(E_{i}) to pr_{2}(E_{i}), your posterior credence in any hypothesis H should be updated as follows:


pr_{2}(H) = Σ_{i} pr_{1}(H|E_{i})pr_{2}(E_{i}).
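The general rule is easy to express as a short function. The following is a minimal sketch, not part of any standard library; the function name and the numbers in the example are purely illustrative:

```python
def jeffrey_update(cond_credences, new_credences):
    """Jeffrey conditionalization over a partition {E_i}.

    cond_credences: list of prior conditional credences pr_1(H|E_i)
    new_credences:  list of new credences pr_2(E_i); must sum to 1
    Returns the updated credence pr_2(H).
    """
    total = sum(new_credences)
    assert abs(total - 1) < 1e-9, "credences over a partition must sum to 1"
    # pr_2(H) = sum over i of pr_1(H|E_i) * pr_2(E_i)
    return sum(h_given_e * e for h_given_e, e in zip(cond_credences, new_credences))

# Illustration with a three-cell partition {E1, E2, E3}:
# pr_1(H|E1) = 0.9, pr_1(H|E2) = 0.5, pr_1(H|E3) = 0.1,
# and an experience shifts your credences to
# pr_2(E1) = 0.2, pr_2(E2) = 0.3, pr_2(E3) = 0.5.
print(jeffrey_update([0.9, 0.5, 0.1], [0.2, 0.3, 0.5]))  # 0.38
```

With a two-cell partition {E, ~E}, the function reduces to the simpler formula displayed above.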

Jeffrey conditionalization is obviously more complicated than simple conditionalization, and it faces some philosophical challenges of its own. As mentioned above, the simple principle of conditionalization implies that you can never change your mind about evidence E after conditionalizing on it; your credence in E must remain immutably stuck at 1. Jeffrey conditionalization avoids that problem, but it suffers from an analogous one: after conditionalizing on a shift in your credences, you can't later change your mind about how much your credences should have shifted in the first place. (For further explanation and discussion of this issue, see Jonathan Weisberg (2009), "Commutativity or Holism? A Dilemma for Conditionalizers," *The British Journal for the Philosophy of Science*, 60:4.) Nevertheless, Jeffrey's rule has important advantages. Not only does it solve the problem of uncertain evidence, it also boasts a much broader range of applications than the simpler rule: whereas simple conditionalization applies only when your credence in E increases to 1, Jeffrey conditionalization tells you what to do when your credence in E increases *or* decreases by *any* amount.

An example may help to illustrate how Jeffrey conditionalization works. Suppose your prior conditional credence in the proposition R (that it will rain today), given that it will be sunny all day, is 0, while your prior credence in rain given that it will *not* be sunny all day is ½:

pr_{1}(R|S) = 0

pr_{1}(R|~S) = ½

Now suppose the experience of listening to the weather forecast changes your credence in the proposition S (that it will be sunny all day). Let's say your new credence in S is ¾:

pr_{2}(S) = ¾

pr_{2}(R) = pr_{1}(R|S) × pr_{2}(S) + pr_{1}(R|~S) × pr_{2}(~S)

= 0 × ¾ + ½ × ¼

= 1/8

The weather forecast may be wrong, or you may have misheard it. Nevertheless, Jeffrey conditionalization allows you to make use of your evidence that it will be sunny, even though this evidence is uncertain. Moreover, Jeffrey’s view is compatible with the idea that empirical beliefs should always be amenable to evaluation in light of further evidence. (As mentioned above, the simple conditionalization rule does not admit of this possibility, since evidential propositions are assigned a credence of 1.) Your credence in the evidential proposition S need not remain immutably fixed at ¾. Indeed, a glance out the window may dramatically undermine your confidence in the weather forecast.
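The worked calculation above can be replicated in a few lines of Python; the variable names are just mnemonic labels for the credences in the example:

```python
# Rain example: pr_1(R|S) = 0, pr_1(R|~S) = 1/2, and the forecast
# shifts your credence in sunshine to pr_2(S) = 3/4.
pr1_R_given_S = 0.0
pr1_R_given_notS = 0.5
pr2_S = 0.75

# Jeffrey conditionalization over the partition {S, ~S}:
pr2_R = pr1_R_given_S * pr2_S + pr1_R_given_notS * (1 - pr2_S)
print(pr2_R)  # 0.125, i.e. 1/8
```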

Moreover, Jeffrey conditionalization yields the simple conditionalization rule as a special case: when pr_{2}(E) = 1, the two rules are equivalent. Similarly, when pr_{2}(E) is *almost* 1, Jeffrey conditionalization and simple conditionalization yield *almost* identical values for pr_{2}(H). For this reason, the difference between the two conditionalization rules can be safely ignored whenever we become almost certain of some evidential proposition, as often happens when making observations. For example, a scientist may not be 100% sure that she sees a particular result on her measuring instruments (she might be dreaming, or hallucinating) but she can be 99.9% sure, and that is good enough to make the difference between Jeffrey conditionalization and simple conditionalization negligible in typical cases of measurement and observation. So, for practical purposes, we can use simple conditionalization as a convenient shortcut to calculate the posterior probability of a hypothesis.
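The convergence of the two rules is easy to check numerically. In the sketch below, the credence values 0.8 and 0.3 are arbitrary illustrations; simple conditionalization would set pr_{2}(H) = pr_{1}(H|E) = 0.8:

```python
def jeffrey(pr1_H_given_E, pr1_H_given_notE, pr2_E):
    """Jeffrey conditionalization over the two-cell partition {E, ~E}."""
    return pr1_H_given_E * pr2_E + pr1_H_given_notE * (1 - pr2_E)

# When pr_2(E) = 1, Jeffrey's rule agrees exactly with simple
# conditionalization, which sets pr_2(H) = pr_1(H|E):
print(jeffrey(0.8, 0.3, 1.0))    # 0.8

# When pr_2(E) is almost 1, the two rules almost agree:
print(jeffrey(0.8, 0.3, 0.999))  # 0.7995
```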

Scientists often regard a discovery as evidence for a hypothesis even if the discovery was made *before* the hypothesis was formulated. For example, the fact that planetary orbits precess in a certain way is considered strong evidence for Einstein's general theory of relativity, even though the anomalous precession of Mercury's orbit was discovered decades before Einstein proposed his theory. Since the evidence was already known before the hypothesis was introduced, scientists can't update their credence in the hypothesis by conditionalizing on the evidence: their credence in the evidence is already 1, so conditionalizing on it leaves their credence in the hypothesis unchanged. So, the discovery doesn't confirm the hypothesis according to the Bayesian definition of confirmation. This is the problem of old evidence.

One way of addressing this problem is to “rationally reconstruct” the situation: imagine what a rational scientist *would* have done if he or she hadn’t discovered the evidence until *after* the hypothesis was proposed. However, since background knowledge plays an important role in determining the relevant credences, we have to be careful in deciding what background knowledge to include or exclude in the imagined scenario. Not just any rational reconstruction will do, but it’s unclear what the rules of rational reconstruction should be.