those days, people didn't succumb to anorexia or bulimia and the incidence of obesity was much lower. Seems the 'experts' have got it wrong again.

This argument seems to conclude not only that eating traditional British cooking protects people from anorexia, bulimia and obesity, but that not eating it causes those things. To expose the fallacies, we can reconstruct the entire argument as an extended argument:

P1) When we ate traditional foods (X) there was a significantly lower incidence of anorexia, bulimia and obesity (lack of Y).

P2) Whenever two phenomena (X) and (Y) are correlated, X is the cause of Y.

C1) Eating traditional foods (X) causes a low incidence of anorexia, etc. (lack of Y).

P3) Now we don't eat those foods (lack of X) and the incidence of anorexia, etc. is much higher (Y).

P4) Whenever X causes a lack of Y, a lack of X causes Y.

C2) Not eating traditional foods (lack of X) causes a higher incidence of anorexia, bulimia and obesity (Y).

The fallacy of mistaking correlation for cause is signalled by P2. The mistaken inversion is signalled by P4, which is false. Some causal relationships can be inverted, but it is certainly not true of causal relationships in general that they can be. For example, drinking milk causes us not to be thirsty. But it is not true that not drinking milk causes us to be thirsty!

The next two fallacies we consider are committed when we make unwarranted inferences from what is known, believed or proven.

Appeal to ignorance

This is the fallacy of concluding either that because a claim has not been proven it must be false (the negative form), or that because it has not been disproved it must be true (the positive form). It is often used when defending a belief in something that remains unproved, such as astrology or the existence of a deity. The following commits the negative form of the fallacy:

As no one's proved that UFOs exist, it's reasonable to assume that they don't.

whereas this equally fallacious argument commits the positive form:

No one's managed to prove that UFOs don't exist, so we can reasonably conclude that they do.

As we see when we reconstruct the arguments, each of these fallacious and unsound arguments is driven by a false assumption: either that absence of proof means a proposition is false, or that absence of disproof means a proposition is true:

P1) No one has proved that UFOs exist.

P2) All unproven propositions are false.

C) It is reasonable to conclude that UFOs do not exist.

P1) No one has proved that UFOs don't exist.

P2) All propositions that have not been disproved are true.

C) It is reasonable to conclude that UFOs do exist.

Of course, in cases where efforts to prove something have been sufficiently strenuous, it may be reasonable to infer the falsity of the proposition. For example, repeated efforts have been made, using sophisticated scientific equipment, to find the Loch Ness monster (to prove the proposition that Nessie exists); but to no avail. It is reasonable on that basis to conclude, alas, that Nessie doesn't exist. But that is because we know that if Nessie did exist, then she probably would have been detected by those efforts. The mere fact that a proposition hasn't been proven, just by itself, is no reason to think it false. Likewise, the mere fact that a proposition hasn't been disproven, just by itself, is no reason to think it true.

'Proof' connotes certainty, and part of what is going on with this fallacy is that people sometimes think that if a claim is not certain, then it can reasonably be denied. But that is not how things are, as should be reasonably clear from Chapter 3. Where we know we have an inductively very forceful argument with true premises, then despite not having perfect certainty, it would be unreasonable to deny the conclusion (with certain exceptions, as explained in Chapter 6). Some claims and theories provide the most probable explanations of the phenomena they concern even though they remain neither proved nor disproved. The theory of natural selection is one such example. The arguments in favour of it have a great deal of inductive force, but as yet no one has managed to prove it.4 Reasons why it should be considered the most plausible explanation of the evolution of species ought to be incorporated into relevant arguments; whether or not it is proved is not essential to the question of whether we ought to believe it.

Epistemic fallacy

This fallacy (from the Greek episteme, meaning knowledge) arises because of the tricky nature of knowledge and belief, and the difficulty of discerning from the third-personal point of view what someone believes or knows. It is committed when we fallaciously infer that, because someone believes that P, they must also believe that Q, on the grounds that P and Q are about the same thing or person, even though the ways in which they refer to that thing or person differ. The following provides a simple instance of the epistemic fallacy:

Chris believes that Tony Blair enjoys sky-diving. Tony Blair is the Prime Minister, so Chris believes that the Prime Minister enjoys sky-diving.

A reconstruction gives us:

P1) Chris believes that Tony Blair enjoys sky-diving.

P2) Tony Blair is the Prime Minister.

C) Chris believes that the Prime Minister enjoys sky-diving.

4 We should note that although a theory may be in principle provable, it may remain neither proven nor disproven indefinitely because there is insufficient evidence to prove or disprove it conclusively.

The inference is unwarranted and the argument invalid because the arguer has assumed that, in addition to having beliefs about Tony Blair's preferred leisure pursuits, Chris also knows that Tony Blair is the Prime Minister. But the arguer has no grounds for this assumption. Chris may have a belief about Tony Blair without knowing that Tony Blair is the Prime Minister. If so, then C might be false. Another way of putting this is to say that Chris may not know that 'the Prime Minister' and 'Tony Blair' refer to the same person. Thus, if Chris is indeed ignorant of Tony Blair's being Prime Minister, the following argument, though valid, would be unsound due to the falsity of P3 and of C:

P1) Chris believes that Tony Blair enjoys sky-diving.

P2) Tony Blair is the Prime Minister.

P3) Chris knows that Tony Blair is the Prime Minister.

C) Chris believes that the Prime Minister enjoys sky-diving.

It is important to note that similar inferences made in different contexts are warranted and the arguments containing them valid. Consider the following:

P1) The Prime Minister is a world champion darts player.

P2) Tony Blair is Prime Minister.

C) Tony Blair is a world champion darts player.

Inferences such as this are sanctioned by an apparently indubitable logical principle known as Leibniz's Law (after the seventeenth-century German philosopher and mathematician, Gottfried Leibniz). This law holds that if one thing is the very same thing as another, then what is true of the one must be true of the other. For example, if Superman has blond hair and Superman and Clark Kent are the same person, then Clark Kent must have blond hair. Sentences about people's beliefs or knowledge, such as our example about Tony Blair and sky-diving, are exceptions to Leibniz's Law. If Chris believes that X is thus-and-so, then even if X and Y are the same thing, it does not follow that Chris believes Y to be thus-and-so, because we do not know whether or not Chris knows that X and Y are the same thing. The inference would only be warranted if the arguer knew that Chris knew this. How to make sense of these sorts of cases is a famous philosophical puzzle, but we need not let that worry us here.5
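For readers who like symbols, the point can be put schematically (this is a standard textbook formulation of Leibniz's Law, not taken from the discussion above; 'B_c' is a hypothetical abbreviation for 'Chris believes that'):

```latex
% Leibniz's Law (indiscernibility of identicals), as a schema:
\forall x \, \forall y \, \bigl( x = y \rightarrow (Fx \leftrightarrow Fy) \bigr)

% Valid for ordinary predicates:
%   from  a = b  and  Fa,  infer  Fb.

% Invalid inside belief contexts (the epistemic fallacy):
%   from  a = b  and  B_c(Fa),  one may NOT infer  B_c(Fb)
%   unless we add the further premise  B_c(a = b).
```

The extra premise B_c(a = b) plays exactly the role of P3 in the reconstruction above: it records that Chris himself knows the identity, which is what licenses the substitution.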

5 The most famous discussion is 'On sense and reference', by Gottlob Frege, reprinted in Meaning and Reference, edited by A.W. Moore (Oxford: Oxford University Press, 1993).

The epistemic fallacy is often used knowingly to discredit someone's opinion. For example:

Mr Smith believes that the cultivation and use of cannabis should remain a criminal offence in this country. But cannabis is the most effective anti-nausea drug for chemotherapy patients. So Mr Smith believes that it should remain a criminal offence to produce or use the most effective anti-nausea drug for chemotherapy patients.

An initial reconstruction of the argument gives us the following valid argument:

P1) Mr Smith believes that it should be a criminal offence to produce or to use cannabis.

P2) Cannabis is the most effective anti-nausea drug for chemotherapy patients.

C) Mr Smith believes that it should be a criminal offence to produce or to use the most effective anti-nausea drug for chemotherapy patients.

If we add a hidden premise of the same form as P3 in the previous example, we see that we cannot conclude that the argument is sound unless we have some grounds for saying that P3 (and hence C) are true:

P1) Mr Smith believes that it should be a criminal offence to produce or to use cannabis.

P2) Cannabis is the most effective anti-nausea drug for chemotherapy patients.

P3) Mr Smith knows that the most effective anti-nausea drug for chemotherapy patients is cannabis.

C) Mr Smith believes that it should be a criminal offence to produce or to use the most effective anti-nausea drug for chemotherapy patients.

The inference drawn here is unwarranted if we don't know that Mr Smith is aware of the anti-nausea properties of cannabis. It is possible that Smith is unaware that cannabis is the best anti-nausea drug for chemotherapy patients. Indeed, it might well be that Smith believes that the best remedy for nausea ought to be made available to chemotherapy patients; so long as he is genuinely ignorant that cannabis is the best such remedy, he would not be inconsistent in making this claim. The danger of epistemic fallacies, then, is that they may attribute to people beliefs that they do not really hold.

Notice that these examples turn on verbs such as 'knows', 'believes', 'wants'. Philosophers and linguists call verbs such as these propositional attitude verbs. If we reflect upon how these verbs are used, we see that we say someone believes that..., where the blank is filled by the expression of some proposition or other. A propositional attitude, then, expresses the fact that someone holds some attitude towards a specific proposition. Smith believes that such and such is the case; Jones wants such and such to happen; Brown knows that such and such is the case. Other examples are 'desires', 'hopes', 'prays', 'wishes'.

Further fallacies

We said early in our discussion of fallacies that almost all are either formal or substantive fallacies. All such fallacies make for unsound arguments; they are either irremediably invalid, or depend on some very general but false implicit assumption. We turn now, however, to a different group of fallacies. These are labelled as fallacies, but not every instance of them will be invalid, or inductively unforceful, or even unsound. However, they are all poor techniques of argument; they should be criticised when we analyse arguments, and avoided in our own attempts to persuade by argument. Many of them, however, are useful for non-rational persuasion: they are frequently used to avoid engagement with an opponent, or to trump an opponent even though the premises offered do not actually give good reason to accept the conclusion. In many cases they do have persuasive power.

Although reconstruction will be helpful in analysing instances of these fallacies, they cannot be exposed by making explicit a false assumption that drives all instances of the fallacy in question. This is because there is no single false assumption (expressed as a generalisation or a conditional) that underlies all instances of each of these fallacies. So while we should, in order to expose their fallacious reasoning, continue the practice of reconstructing arguments that we suspect of committing these fallacies, it is not so easy to give a straightforward method for detecting these fallacies.


Equivocation

The rhetorical ploy of trading on an equivocation is the ploy whereby we deliberately use a word or form of words with the intention of confusing the audience; one hopes that the audience will conflate two or more possible interpretations. A single unsupported claim, rather than an argument, may be the instrument of the ploy. To fall prey to the fallacy of equivocation, by contrast, is to fail to notice an ambiguity, thereby accepting the conclusion of an argument when one should not have. Silly but clear examples are easy to come by; for example: 'In the philosophy department, someone broke one of the chair's legs; therefore one of the philosophy department's professors has a broken leg' (equivocation on the word 'chair'). Such a case is simple and amusing, but no one would actually be taken in by it. In the more interesting cases, explaining the fallacy can be a subtle conceptual task. For example:

Some conservatives claim that there are universal moral truths; they claim that throughout history, in all times and places, people fundamentally have the same rights. This displays a lamentable ignorance of history, and - characteristically of conservatives - of other cultures. It is a plain fact that at other times in history, and in other parts of the world today, human beings do not have the same rights. In some countries, for example, a man has the right forcibly to confine his wife to the home if he sees fit; not so in our culture. The conservative claim of universal rights is plainly false.

The arguer wishes to conclude that, contrary to certain conservatives who believe in universal moral truths, whether or not a human being possesses a given right depends on what culture they are in. Thus ignoring some irrelevant material, we may reconstruct it as a very simple argument:

P1) In some countries, men have the right to confine their wives forcibly; in other countries they do not.

C1) It is not the case that human beings have the same rights in all places and at all times.

C2) The conservative claim - that throughout history, in all times and places, people fundamentally have the same rights - is false.

The argument equivocates on the word 'right', however. Both senses are established items in our language, but they are close together in meaning. In one sense of the word, to possess a 'right' is to be allowed, by the culture or other social environment one is in (often, but not always, this is a system of laws), to perform a certain action. Call this the 'conventional' sense of the word. In the other sense, to possess a 'right' is to be such that one ought, whatever culture or other social environment one is in, to be allowed to perform a certain action - even if one is not in fact allowed to. Call this the 'philosophical' sense of the word. Thus one may possess rights in the philosophical sense that are not rights in the conventional sense. The trouble with the argument is that it uses both senses: if we keep to the conventional sense of the word, P1 is true, C1 is true, and the inference from P1 to C1 is valid. The inference to C2 would be valid if the conservative claim were intended in the conventional sense. But the conservative claim, no doubt, was that rights are invariant in the philosophical sense of the word. In that case C2 cannot be inferred from C1; the inference would be no better than the silly one about the broken chair.

Red herring

So named after the practice of dragging a smelly, salt-cured (and therefore reddish) herring across the trail of an animal tracked by dogs. The red herring fallacy is a technique for throwing someone off the scent of one's argument by distracting them with an irrelevance. The rhetorical ploy of the smokescreen is a similar tactic. However, where an irrelevant premise is actually given as a reason for accepting the conclusion being advanced, the red herring fallacy is committed. For example:

The judge should rule against the plaintiff's charge of sexual harassment against the president. The President is very popular, and presides over an extremely healthy economy.

The arguer seems to advance the President's political success as a reason to rule against the charge of sexual harassment. If we make the reasonable assumption that the judge should rule strictly on the basis of the President's guilt or innocence, then the President's political success is utterly irrelevant. Reconstructed, the argument looks like this:

P1) The President is very popular, and presides over an extremely healthy economy.

P2) If the President is very popular, and presides over an extremely healthy economy, then the judge should rule against the plaintiff's charge of sexual harassment.

C) The judge should rule against the plaintiff's charge of sexual harassment.

In general, the red herring fallacy is that of inferring a conclusion from a premise that is strictly irrelevant to it, but in a way that has the potential to fool the audience into accepting the inference. Normally this is accomplished by a premise that tends to instil some sort of positive attitude towards the conclusion. In this case, the premise is intended to make the audience feel supportive towards the President, and thus unreceptive to the idea that he should be convicted of misconduct.

Note that, although red herring arguments can easily be represented as valid, red herring is not a substantive fallacy. P2 is obviously false, but our ability to recognise this depends on our knowledge of what is, and what is not, relevant to the establishment of guilt in a court of law. More generally, what is and what is not relevant to a conclusion will depend on the conclusion's particular subject-matter. So there is not going to be one characteristic premise that red herring fallacies assume, in the way that there is, for example, in the case of inverting cause and effect. So red herring, according to our categories, is not a substantive fallacy.

It is worth re-emphasising, finally, that to say that someone has been taken in by a red herring fallacy is to say that they have been fooled. Unlike with most other fallacies, the ability to recognise a red herring varies with our knowledge of the subject-matter of the argument. But if X honestly believes, for example, that cancer is always caused by thinking morally bad thoughts, then, although having developed cancer is irrelevant to the question of the moral character of one's thoughts, X does not commit red herring in inferring, from the fact that Y has cancer, that Y must have been thinking bad thoughts. X is just badly informed. The point of distinguishing red herring as a fallacy of irrelevance is to single out the cases where one is fooled by an irrelevance when one ought to have known better. Every minimally educated person knows, for example, that guilt or innocence in a court of law is properly established only by the preponderance of evidence; because of this, one who advances or accepts the argument given above has been fooled by an irrelevance, and has thus committed red herring.

Slippery slope

This fallacy occurs when an arguer assumes, without providing good reasons, that permitting or forbidding a course of action will inevitably lead to further, related and undesirable events; to allow the first, it is claimed, is to step onto a slippery slope down which we will slide to the others. Since its rhetorical power derives from fear or dislike of those undesirable events, it is closely related, from a rhetorical point of view, to the appeal to fear. Slippery slope arguments are sometimes used to justify particularly harsh laws or penal sentences, and they occur frequently in debates about the liberalisation or toughening of laws or constraints on behaviour, as in the following example about the decriminalisation of cannabis use:

The decriminalisation of cannabis would be just the start. It would lead to a downward spiral into widespread abuse of harder drugs like heroin and cocaine.

The implicit conclusion is that cannabis should not be decriminalised. The only explicit premise is that if cannabis were decriminalised then the use of hard drugs would increase. So an initial reconstruction represents the argument as invalid:

P1) If cannabis were to be decriminalised, the use of hard drugs would increase.

C) Cannabis should not be decriminalised.

Notice that as it stands the argument also commits the fallacy of deriving ought from is. To correct this, we need to add a premise to make good the connection between the non-prescriptive premise and the prescriptive conclusion, thus ending up with the following argument:

P1) If cannabis use were decriminalised, the use of hard drugs would increase.

P2) Anything that leads to increased use of harder drugs should be avoided.

C) Cannabis should not be decriminalised.

The immediate problem is that we have not been given a reason to think that P1 is true; that is, no reason to think that decriminalisation of cannabis will unavoidably be the beginning of a slippery slope to an increase in the use of hard drugs. Of course, some slopes really are slippery; even in this case, it might be possible to give such reasons, and they might form part of an extended argument for the same conclusion. But as it stands the argument remains fallacious, because the arguer has not given a reason for supposing that it is inevitable that allowing the first event will precipitate a slide into even worse events. (This form of argument is sometimes called floodgates - the arguer alleges without evidence that allowing X will inevitably open the floodgates to Y and Z.)

Straw man

This is the fallacy that occurs when an arguer ignores their opponent's real position on an issue and sets up a weaker version of that position by misrepresentation, exaggeration, distortion or simplification. This makes it easier to defeat, thereby creating the impression that the real argument has been refuted. The straw man argument, like the straw man himself, is easier to knock down than the real thing. Suppose that Jones is an advocate of the legalisation of voluntary euthanasia; that is, Jones believes that terminally ill patients should have the legal right to choose to have their life ended if their suffering has greatly diminished their quality of life, and doctors agree that the patient's mental state is sufficiently sound to make the decision rationally. Smith, Jones' opponent, responds as follows:

How can you support giving doctors the right to end a person's life just because they decide that the person's life is no longer worth living? No one should have that power over another person's life, and doctors should not kill patients.

According to Smith, Jones advocates that doctors should unilaterally have the power to end a patient's life, if they think that the patient's life is not worth living. That would be a very controversial position. But it is not Jones' position. Jones' position is that patients should have the choice of euthanasia, so long as that choice is approved by doctors. Of course the doctor administers the lethal drug, but only at the behest of the patient, as Jones envisages things. Smith thus fails to engage with Jones' actual position and instead misrepresents it as a more extreme and therefore weaker position that (as far as we know) Jones does not advocate.

Rhetorical ploys and fallacies

Begging the question

An argument commits the fallacy of begging the question when the truth of its conclusion is assumed by one or more of its premises, and those premises depend for their justification on the truth of the conclusion. Thus the premises ask the audience to grant the conclusion even before the argument is given. Contrary to the way in which the phrase is sometimes used in ordinary discourse, 'begging the question' does not mean raising a question without offering an argument.
