Epistemic Rationality

Philosophers distinguish two main types of rationality, or reasonableness:

  1. Epistemic rationality concerns what it’s reasonable to believe. A belief is epistemically rational for you when it’s a reasonable thing for you to believe, given your other beliefs and the evidence you have.
  2. Practical rationality (also called pragmatic rationality) concerns what it’s rational to do. A choice or action is practically rational for you when it’s a reasonable thing for you to do, given your beliefs, desires, and values.

In the first section of this chapter, we’ll consider two important aspects of epistemic rationality: probabilistic coherence and fit with evidence. The second section will introduce several Bayesian ways of representing the concept of evidence with probabilistic models. The final section will introduce a leading Bayesian model of practical rationality.

Logical Consistency and Probabilistic Coherence

What does it mean to be rational, or reasonable, in what you believe? This is a complex, multi-faceted issue. Enumerating a complete set of necessary and sufficient conditions for epistemic rationality may be an impossible task, but we can at least identify and clarify some important features of epistemic rationality.

One important aspect of epistemic rationality involves the logical relations between the various beliefs we hold. Recall that two or more propositions are logically consistent if and only if it is logically possible for them all to be true at the same time; otherwise, they are inconsistent. If you hold a set of beliefs that are logically inconsistent, your beliefs contradict each other. Ordinarily, it is unreasonable to hold contradictory beliefs, so logical consistency might appear to be a necessary condition for epistemic rationality.

Paradoxes arise, however, if we insist that any reasonable set of beliefs must be logically consistent. One such paradox is called the preface paradox. Suppose I have written a lengthy book in which I make many claims. I believe each of these claims to be true, yet I also recognize that I am fallible and occasionally make errors. Given the large number of claims I have made, in fact, it seems practically certain that I am wrong about at least one of them. So, in the spirit of intellectual humility, I preface my book with a disclaimer:

    Although I believe each statement I have made in this book, I also believe I am wrong about at least one of my claims, though I don’t know which one.

This humble admission seems like a perfectly reasonable thing to believe. Surely, it would be unreasonable to think I’m right about absolutely everything! Notice, however, that my beliefs are logically inconsistent: I believe each and every statement I have written, yet I also believe that at least one of those statements (I don’t know which one) is false. It is not logically possible for all of my beliefs to be true at the same time, yet it doesn’t seem unreasonable to hold inconsistent beliefs in this case.

In fact, the same paradox arises whenever you reflect on the totality of your own beliefs, whether or not you’ve written them down in a book. Surely, it would be the height of arrogance and folly to think that the entire set of all your beliefs contains not a single falsehood! Believing that at least one of your own beliefs is false (without knowing which one) seems not only reasonable, it seems rationally obligatory: you should believe that at least one of your own beliefs is false. Yet, when this last belief is added to your set of beliefs, it makes the set logically inconsistent.

A closely related problem is the lottery paradox. Whenever I see someone buying a lottery ticket, I believe the ticket they are purchasing will not be the winning ticket. In fact, I believe the same thing about each individual lottery ticket: it is a loser, not a winner. This seems like the most reasonable thing to believe about each ticket, given the tremendously large number of tickets and the extremely low probability of winning. However, I also believe that one ticket is the winning ticket. (Unfortunately, I have no idea which one.) That belief, too, seems reasonable. Again, my beliefs are logically inconsistent, yet it doesn’t seem irrational to hold inconsistent beliefs in this case.

Reflecting carefully on the lottery paradox can give us a hint about what has gone wrong in all of these cases. When I say I “believe” that some specific, individual lottery ticket is a loser, I don’t mean I’m absolutely certain it will lose. I mean that I have a high credence (degree of belief) that it will lose. I acknowledge it has a minuscule chance of winning, but I believe it won’t win because its chance of losing is much greater than its chance of winning. Similarly, in the preface paradox, I’m not 100% certain of each claim I made in the book. When we recognize that belief is usually a matter of degree, the source of these paradoxes becomes obvious.

To resolve the paradoxes, we can substitute probabilistic coherence in place of logical consistency as a requirement for epistemic rationality. As you may recall from the previous chapter, a set of credences is probabilistically coherent if and only if it follows the mathematical rules of probability. Probabilistic coherence prevents us from believing contradictory claims while still allowing for the kind of reasonable inconsistency that occurs in the preface paradox and lottery paradox.

For example, if you are probabilistically coherent in your beliefs about lottery tickets, you won’t believe that your lottery ticket will win and at the same time believe that it won’t win. Let W be the proposition that your ticket will win. The propositions W and ~W are mutually exclusive and exhaustive, so—according to the rules of probability—your credences in those two propositions must add up to 1: the higher your credence in W, the lower your credence in ~W, and vice versa. Thus, the rules of probability don’t permit you to believe two contradictory claims. If you think your ticket probably won’t win, you can’t also believe that it will. That sort of logical inconsistency is avoided.
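
To make this constraint concrete, here is a minimal sketch in Python (the specific credence value is a hypothetical illustration, not anything fixed by the text):

    # Hypothetical credence in W ("my ticket will win").
    cr_W = 0.000001

    # W and ~W are mutually exclusive and exhaustive, so probabilistic
    # coherence forces the two credences to sum to exactly 1.
    cr_not_W = 1 - cr_W

    assert abs((cr_W + cr_not_W) - 1) < 1e-12  # the pair is coherent
    print(cr_not_W)  # 0.999999: high credence that the ticket will lose

Any pair of credences in W and ~W that failed this check would violate the probability rules, however the individual numbers were chosen.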

However, the rules of probability do permit logically inconsistent sets of beliefs in cases like the lottery paradox and the preface paradox. In fact, probabilistic coherence requires us to hold exactly the kinds of logically inconsistent beliefs that seem intuitively reasonable in those cases. To see why, let’s assume for the sake of simplicity that the propositions in question are probabilistically independent. Two or more propositions are probabilistically independent if and only if the assumption that any one of them is true doesn’t affect the probabilities of the others. When propositions are probabilistically independent, the probability that all of them are true is just the product of their individual probabilities, as illustrated in the following example:

Suppose two propositions X and Y are probabilistically independent, which means that the conditional probability of one given the other is the same as its unconditional probability: pr(X|Y) = pr(X), and similarly pr(Y|X) = pr(Y). Since X and Y are probabilistically independent, the probability of their conjunction is equal to the product of their individual probabilities. For instance, if pr(X) = ½ and pr(Y) = ⅓, then the probability that X and Y are both true is:

pr(X • Y) = pr(X) × pr(Y) = ½ × ⅓ = ⅙
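
The arithmetic can be checked exactly with Python’s fractions module (a quick sketch using the numbers from the example above):

    from fractions import Fraction

    pr_X = Fraction(1, 2)  # pr(X) = 1/2
    pr_Y = Fraction(1, 3)  # pr(Y) = 1/3

    # For probabilistically independent propositions, the probability of
    # the conjunction is the product of the individual probabilities.
    pr_X_and_Y = pr_X * pr_Y
    print(pr_X_and_Y)  # 1/6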

Now, let’s apply the same reasoning to the preface paradox:

Suppose I believe each claim in my book with 99% confidence. That is, I have a credence of .99 in the truth of each sentence I’ve written. For the sake of simplicity, let’s also stipulate that these sentences are probabilistically independent, so the probability that all of them are true is just the product of their individual probabilities. Thus, if my book contains only two sentences, the probability of both being true is:

.99 × .99 = .99² = .9801

If the book contains three sentences, my credence that all three are true is .99³ = 0.970299, and so on. With each additional sentence, the probability that they’re all true decreases exponentially. If the book contains 1,000 sentences, my credence that it contains no falsehoods should be .99¹⁰⁰⁰ ≈ 0.00004, which is about four thousandths of a percent! In other words, I should be approximately 99.996% sure that at least one of the sentences is false, even though I’m 99% confident in the truth of each individual sentence.
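
These numbers are easy to reproduce. The following sketch assumes, as stipulated above, a uniform .99 credence per sentence and probabilistic independence:

    # Credence that all n independent sentences are true, when each
    # sentence is believed with credence 0.99.
    def credence_all_true(n, per_sentence=0.99):
        return per_sentence ** n

    for n in (2, 3, 1000):
        all_true = credence_all_true(n)
        print(n, all_true, 1 - all_true)

    # n = 2:    about 0.9801
    # n = 3:    about 0.970299
    # n = 1000: about 0.0000432, so about 0.99996 (99.996%) that at
    #           least one sentence in the book is false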

Similar considerations explain why it’s reasonable to believe of each individual lottery ticket that it’s a loser while simultaneously believing there is a winner, and why it’s reasonable to believe that some of your own beliefs are false. Probabilistic coherence is a better guide to epistemic rationality than logical consistency is.
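
The lottery case can be sketched the same way. With N tickets and exactly one winner, coherence assigns each individual ticket a losing credence of (N − 1)/N while assigning credence 1 to the proposition that some ticket wins (the ticket count below is an arbitrary illustration):

    from fractions import Fraction

    N = 1_000_000  # hypothetical number of tickets, exactly one winner

    # Coherent credence that any particular ticket loses: very high.
    cr_ticket_loses = Fraction(N - 1, N)

    # The tickets' winning propositions are mutually exclusive and
    # exhaustive, so the N credences of 1/N must sum to exactly 1.
    cr_some_ticket_wins = N * Fraction(1, N)

    print(float(cr_ticket_loses), cr_some_ticket_wins)  # 0.999999 1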

On the other hand, probabilistic coherence may not be a strictly necessary condition for epistemic rationality either. Probabilistic models of rationality have their limitations, because our credences are often imprecise and our intellectual capabilities are not unlimited. For example, probabilistic coherence requires our credence in any tautology to be exactly 1, but we’re not always capable of recognizing a tautology when we see one. Refining these probabilistic models, and finding workarounds to compensate for our own intellectual limitations, are active areas of research in contemporary epistemology.