Probability

We express the probability that a given proposition is true (or that a given event has occurred or will occur) on a numerical scale between 0 and 1, expressed either as a decimal or as a fraction. For example, the probability that a tossed coin will land heads-up is 0.5, or 1/2. Perhaps surprisingly, there are different ways in which to explain probability. We'll briefly consider three: proportion, frequency, and rational expectation.1

First, proportion. Many arguments contain a premise that says something like 'Most X are Y' or '7/8 of Xs are Ys'. Quantifiers such as 'most' and '7/8 of' indicate proportions, and are importantly related to probability. Suppose you want to know the probability that the card you have drawn from an ordinary, complete deck of playing cards is an ace. One way to estimate this would be to assume that the probability is equal to the proportion of aces among the cards in the deck.

1 Another common basis for probability-estimates is the use of models. For example, if an engineer wants to estimate the probability that a lorry will tip over in high winds, he or she would consider such things as the exact design of the lorry and the winds it would be likely to encounter in order to deduce a probability using the laws of physics.

Since this figure is 1/13 (there are four aces and fifty-two cards, and 4/52 = 1/13), you assume that the probability that you'll draw an ace is 1/13 (about 0.077).
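(For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the proportion-based estimate; the code and its variable names are purely illustrative and not part of the text's argument.)

```python
from fractions import Fraction

# Probability as a proportion: 4 of the 52 cards in a standard deck are aces.
aces = 4
cards = 52

p_ace = Fraction(aces, cards)   # 4/52 reduces to 1/13
print(p_ace)                    # 1/13
print(float(p_ace))             # approximately 0.077
print(1 - p_ace)                # 12/13: the probability that the card is NOT an ace
```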

Now frequency. Suppose you want to know the probability that it is going to snow in December in London, when December is still several months away. One simple way to do this would be to find out how frequently this has actually happened over the past, say, one hundred years. Suppose you find that, out of the past one hundred Decembers, it has snowed during fourteen of them. Then you might infer that the probability that it will snow in London this coming December is 14/100 (0.14).
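(Again as an illustrative sketch only: a frequency-based estimate is just a relative frequency over past observations. The data below simply encode the hypothetical figures from the text.)

```python
def relative_frequency(observations):
    """Estimate a probability as the relative frequency of True outcomes."""
    return sum(observations) / len(observations)

# One hundred past Decembers, fourteen of which had snow in London.
past_decembers = [True] * 14 + [False] * 86
print(relative_frequency(past_decembers))   # 0.14
```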

These strategies are extremely important in the general theory of probability and statistics, but they are not sufficiently general for our purposes. There seem to be cases for which neither proportion nor frequency will serve as a direct indicator of probability. For example, bookmakers sometimes give odds that a given politician will become the next leader of his or her party. Suppose they say that the odds of Mr X becoming the next leader of the Labour Party are 1:1 (i.e. the probability is 1/2). Estimates such as these are often perfectly reasonable. But clearly the bookmaker, in this case, is not basing the probability on the frequency with which Mr X has become leader of the Labour Party in the past; that frequency is zero! Nor is it based upon the proportion of times that the politician has become leader. Yet there is no immediate and simple recipe for basing estimates of this kind upon frequencies or proportions.2 In such cases, we cannot simply convert a proportion or a frequency into a probability.

Because of these complications, we shall take degree of rational expectation as our general concept of probability.3 A person's degree of rational expectation in a given proposition is the degree to which he or she is entitled to believe it, given the evidence he or she has. Besides the fact that this concept of probability is more widely applicable than others, it has the further advantage that it corresponds to the way that the word 'probably' is typically used: when we say, 'That's probably true', we typically mean that our total sum of evidence makes it reasonable to expect the proposition in question to be true.

2 This is not to say that such estimates cannot be based upon proportions or frequencies. In fact, they typically are: it is precisely the task of those whose profession is to estimate probabilities (in the insurance industry, for example) to base the estimates upon relevant proportions, frequencies and other data. More generally, to say that a degree of expectation is rational, it seems, is precisely to say that some such statistical facts are available in terms of which the expectation can be justified. However, such justifications often turn out to be extremely complex, and in many cases we seem to know that probability-estimates are well-founded even when we are unable to explain them adequately. Since degree of rational expectation is arguably the most inclusive or general characterisation of probability, our strategy here is to proceed without requiring that probability-claims always be justified in terms of statistics. The science of such justifications - taught in courses in Statistics and Probability Theory - is extremely interesting and of ever-increasing importance.

3 There are various terms in the existing literature for this; another common one is 'epistemic probability'.

This can best be appreciated by thinking again of something like cards. Suppose George has a card face down on the table before him; he doesn't know what it is, but he does know with certainty that it's a red card (perhaps he caught enough of a glimpse to see that it's red, but couldn't tell whether it's a heart or a diamond). Since George knows that clubs are black, he can be perfectly certain - rationally completely certain - that the card is not a club. So his degree of rational expectation that the card is not a club is 1 (equivalently: his degree of rational expectation that the card is a club is 0). Provided he knows that hearts and diamonds are the only red suits, his degree of rational expectation that the card is a heart is 1/2.

Notice two things about this characterisation of probability in terms of rational expectation. First, it is clear that the basis for assigning degrees of rational expectation may consist in proportions. It is because George knows the relevant proportions that we can assign degrees of rational expectation in this case. In other cases, frequencies provide the basis for assigning degrees of rational expectation. In still other cases, we may assign degrees of rational expectation without knowing either relevant proportions or relevant frequencies.

Second, notice that when assigning degrees of rational expectation, we spoke of the degree to which one is entitled to believe something given such-and-such evidence. Since we are taking degree of rational expectation as our concept of probability, what this means can be expressed by saying that our key concept is the concept of conditional probability. That is to say, what we are interested in is the probability that a proposition is true, given that, or on the assumption that, some given set of propositions is true. More exactly, this is the degree to which it would be reasonable to accept a certain proposition, given no other relevant information except that contained within a certain set of propositions. In George's case, his evidence is that the card is red. So the conditional probability of the proposition 'the card is a heart', relative to the proposition 'the card is red', is 1/2.
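(George's case can also be checked mechanically: conditional probability amounts to restricting attention to the possibilities consistent with the evidence. The following Python sketch is illustrative only.)

```python
from fractions import Fraction
from itertools import product

# Model a standard deck as (suit, rank) pairs.
suits = ['hearts', 'diamonds', 'clubs', 'spades']
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
deck = list(product(suits, ranks))

# George's evidence: the card is red.
red = [card for card in deck if card[0] in ('hearts', 'diamonds')]
hearts = [card for card in red if card[0] == 'hearts']

# P(heart | red) = (number of red hearts) / (number of red cards)
print(Fraction(len(hearts), len(red)))   # 1/2
```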

We can now say more precisely what an inductively forceful argument is:

Let [P] stand for one or more premises, and let A stand for a conclusion. Suppose we have an argument:

[P]
-----------
A

To say that such an argument is inductively forceful is to say that the conditional probability of A relative to the set [P] is greater than one-half, but less than 1.

This is our 'official' explanation, but you may find it more helpful to think of inductive force along the lines of the following:

To say that an argument is inductively forceful is to say: The argument is not deductively valid, but: if the premises are true (or were true), then, given no other information about the subject-matter of the argument, it would be more reasonable to expect the conclusion to be true than it would be to expect it to be false.
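(Putting the two definitions side by side: the classification of an argument depends only on where the conditional probability of its conclusion, relative to its premises, falls on the scale. Here is a minimal, purely illustrative sketch.)

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def classify(p):
    """Classify an argument by the conditional probability p of its
    conclusion relative to its premises, per the definitions above."""
    if p == 1:
        return 'deductively valid'                      # the limiting case
    if p > HALF:
        return 'inductively forceful'
    return 'neither valid nor inductively forceful'

print(classify(Fraction(1)))        # deductively valid
print(classify(Fraction(12, 13)))   # inductively forceful
print(classify(HALF))               # neither valid nor inductively forceful
```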

There are several further points to bear in mind as regards probability and inductive force.

1 It is true that we do not always express probabilities as conditional probabilities. For example, if we simply pick a card at random from the deck, it seems we can say outright that the chance of its being an ace is 1/13. So it seems we do ordinarily attribute probability to a single proposition without stopping to specify any further information. Usually, however, this is only because the relevant further information upon which the probability-claim is based is left implicit; it goes without saying. In this case, the relevant information that one would normally take for granted is that the deck is standard and complete, in which case four of its fifty-two cards are aces. Thus, given the information that the deck is standard and complete, it would be reasonable for you to conclude that the card is probably not an ace. But so long as it is kept in mind that there must be some relevant set of information or premises in the picture, it is perfectly harmless to attribute probability to single propositions, and we will sometimes do so.

2 Probability, of the kind we are speaking of, is not an alternative to truth or falsity, in the way that finishing in the middle of the league table is an alternative to finishing at the top or at the bottom. Nor is probability a kind of truth. To say that a proposition is probable, in this sense, is to say that probably it is true. It is to express an expectation that a proposition is true that falls short of perfect certainty. What one says is correct if the evidence upon which the claim is based really does make the expectation rational. The idea is not that there is some third thing between truth and falsity, namely probability. For example, if I know that Reggie has taken his driving test today, and I say 'Probably, Reggie failed his driving test', I am not saying 'It isn't true that Reggie failed his driving test, and it isn't false; it's probable that he failed it'.

No: either it is true that Reggie failed his driving test or it is false. That is, there are two possible states of affairs: either he failed the test or he did not. There is no mysterious third state of affairs, i.e. that he probably failed it. Rather, the function of the word 'probably' is to indicate that the evidence makes it rational to believe, but does not make it certain, that the proposition is true.

3 Unlike truth, probability is a matter of degree; different propositions may have various degrees of probability (relative to a given body of information). For example, given the information we have, it is very highly probable that in the year 863 BC, someone ate a rabbit. But there is a very tiny possibility that no one did. It is somewhat less probable, but still probable, that someone will swim the English Channel during the year 2046. It is possible, but improbable, that someone swam the English Channel in the year 863 BC. We do sometimes specify probabilities in terms of numerical values, but often the probability of something being the case cannot be estimated with enough precision to justify assigning an exact numerical value. Sometimes the most we can do is to rank probabilities; for example we can say with confidence that a certain man is more likely to eat a cucumber than he is to visit the moon during the coming year, but we cannot assign precise numerical probabilities to these propositions, in the way that we can with the cards. And sometimes we cannot even rank them with confidence. Of single propositions, the most we can say in such cases is, 'that's probable', or 'that's very probable', or 'that's improbable'. (By 'probable', remember, we mean the case in which the probability is greater than 1/2; by 'improbable' we mean the case in which it is less than 1/2; something can be neither probable nor improbable, if its probability is exactly 1/2).

Because of this, inductive force, unlike deductive validity, is also a matter of degree. We cannot say that one argument is more valid than another, but we can say that one argument is more inductively forceful than another. Validity was defined in terms of impossibility - the impossibility of the premises being true and the conclusion false - in which case the probability of the conclusion's being false, given the premises, is zero (look at the definition of validity again). Validity is thus all-or-nothing. Indeed, one could define validity simply as the 'limiting case' of inductive force - the case in which the conditional probability of the conclusion relative to the premises is 1. Thus we can think of arguments as being arranged on a scale of conditional probability, ranging from deductively valid down to those that are neither valid nor inductively forceful (see Figure 3.1).

Because we defined inductive force the way we did, an argument may be inductively forceful but only to a very small degree. For example, if you know that fifteen of the twenty-nine children in the class are wearing white shoes, then you have an inductively forceful argument for the conclusion that the child who got the highest mark in spelling is wearing white shoes: the conditional probability is 15/29, only slightly greater than one-half.
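(The arithmetic, sketched illustratively: 15/29 exceeds one-half, but only just.)

```python
from fractions import Fraction

p = Fraction(15, 29)        # 15 of the 29 children wear white shoes
print(p > Fraction(1, 2))   # True: the argument is inductively forceful...
print(float(p))             # ...but barely: approximately 0.517
```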

[Figure 3.1: arguments arranged on a scale of conditional probability from 0 to 1, with valid arguments at 1, inductively forceful arguments between 1/2 and 1, and arguments that are neither valid nor inductively forceful at or below 1/2.]
