Showing posts with label critical thinking. Show all posts

Wednesday, May 13, 2020

Base-Rate Neglect in the News

https://www.nytimes.com/2020/05/13/opinion/antibody-test-accuracy.html


It has been a while since I have thought about the fallacy of base-rate neglect. I did not even think about it when I was recently talking to someone about the reliability (or lack thereof) of tests for SARS-CoV-2 antibodies. The piece by Todd Haugh and Suneal Bedi published in the New York Times today (linked above) is a useful reminder.

But it seems to me that Haugh and Bedi do not state their example clearly enough (perhaps because of editorial pruning). I would state it this way: Suppose that a test for SARS-CoV-2 antibodies has a sensitivity of 90%. This means that it gives a positive result to 90% of subjects taking it who actually have the antibodies. Suppose also that it has a specificity of 90%. This means that it gives a negative result to 90% of subjects taking it who don't have antibodies. Suppose also that the incidence of antibodies to the virus in the population is (as the writers estimate it to be) 5 percent. And suppose, finally, that 2,000 randomly selected people take the test. If our sample is perfectly representative, 100 of the people taking the test will have the antibodies; and of these, 90 will get a positive result. Of the other 1,900 people taking the test, the ones who don’t have the antibodies, 1,710 will get a negative result. But this means—and this is the most significant part—the remaining 190 will get a (false) positive result.

This is significant because it means that, of the 280 people in total who get a positive test result, more than two thirds will not have antibodies. In more general terms, the lower the base rate of what you are testing for, the higher the ratio of false positives to true positives, and therefore the less reliable a positive test result is. If the base rate is 10% then out of our representative sample of 2,000 tests there will be 180 true positives and 180 false positives. So, assuming 90% specificity and sensitivity of a test (which, as I gather, is far better than what any test now available can offer), the base rate has to be above 10% for a positive test result to be more reliable than a coin toss. (A coin toss as to whether a given positive result is true or false, that is, not a coin toss as to whether a given person has antibodies.)
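The arithmetic here is just Bayes’ theorem, and it can be checked in a few lines of code. This sketch uses only the figures already given in the example:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a positive result is a true positive (Bayes' theorem)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# The example from the text: 90% sensitivity, 90% specificity, 5% base rate.
print(round(positive_predictive_value(0.90, 0.90, 0.05), 3))  # 0.321

# At a 10% base rate, true and false positives balance exactly: a coin toss.
print(round(positive_predictive_value(0.90, 0.90, 0.10), 3))  # 0.5
```

The 0.321 is just 90/280 from the worked example: barely one positive result in three is genuine.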

These figures, of course, only describe a mathematical model. One assumption of the model is that persons taking the test are representative of the whole population with regard to the incidence of antibodies among them. This is not necessarily the case. In fact, it is not even probable; rather, people who have had symptoms that they attribute to COVID-19 are more likely to take the test than people who have not. Consequently, the incidence of antibodies to the virus will be higher among persons taking the test than the base rate of the total population. How much higher? I don’t know, and I don’t know how one would go about estimating such a thing.
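Though I cannot estimate the actual figure, the effect itself is easy to model. In the sketch below, the testing propensities are entirely hypothetical numbers, chosen only to illustrate how self-selection raises the prevalence of antibodies among test-takers above the population base rate:

```python
import random

random.seed(1)

POPULATION = 200_000
BASE_RATE = 0.05          # true prevalence in the whole population
# Hypothetical propensities: people with antibodies (who likely had
# symptoms they attribute to COVID-19) are assumed five times as
# likely to seek the test as people without.
P_TEST_IF_ANTIBODIES = 0.25
P_TEST_OTHERWISE = 0.05

testers_with = testers_without = 0
for _ in range(POPULATION):
    has_antibodies = random.random() < BASE_RATE
    p_test = P_TEST_IF_ANTIBODIES if has_antibodies else P_TEST_OTHERWISE
    if random.random() < p_test:
        if has_antibodies:
            testers_with += 1
        else:
            testers_without += 1

prevalence_among_testers = testers_with / (testers_with + testers_without)
print(f"{prevalence_among_testers:.1%}")  # well above the 5% base rate
```

Under these made-up propensities the prevalence among test-takers comes out around four times the base rate, which would make a positive result considerably more reliable than the representative-sample model suggests.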

Friday, August 17, 2018

Quantum Subterfuge

An account of the double-slit experiment by a former professor of mathematical physics is supposed to show the necessity of a paradoxical conclusion, but under examination it shows only the logical confusions of its author.


A few years ago, wanting to gain some knowledge of quantum mechanics, I started reading Quantum Theory: A Very Short Introduction by John Polkinghorne, formerly Professor of Mathematical Physics in the University of Cambridge (Oxford University Press, 2002). I recall being exasperated by the book on my first attempt to read it, but I did not recall the reason until more recently, when I reread Polkinghorne’s commentary on the double-slit experiment.

Polkinghorne opens his exposition by quoting a comment of Richard Feynman on the experiment:
In reality it contains the only mystery. We cannot make the mystery go away by ‘explaining’ how it works. We will just tell you how it works. In telling you how it works we will have told you about the basic peculiarities of all quantum mechanics. [Polkinghorne, p. 22; no citation of source provided]
I recall thinking, when I first read this: “Excellent! Now all I have to do is pay very close attention to the account of the experiment, and I shall understand the basic peculiarities of all quantum mechanics.” This expectation was disappointed.

Describing the physical setup and the results of the experiment is not difficult. Electrons or other quantum particles are fired at a barrier in which there is a pair of adjacent slits. On the far side of the slits is a screen that detects the impacts of the particles. Classical physics predicts that the particles will impinge on the screen in a scattering pattern, with two areas of greatest intensity located directly across from the two slits. But in fact what emerges is an interference pattern, with a single area of greatest intensity located across from the midpoint of the two slits, exactly as if one were sending waves of some sort through the slits.

One might try to explain this on classical-physical lines by supposing that the electrons, though individually particulate in nature, behave in a wave-like fashion when they are shot together in a stream: it is the stream of electrons, not the individual electron, that behaves like a wave. It is true that one cannot observe wave behavior in an individual electron but only in a collection of electrons. But that does not mean that the wave behavior can be explained on classical lines as an effect of physical interaction among electrons in a stream, for the fact is that the electrons form an interference pattern even when they are fired at the slits one at a time. So either one must suppose that the behavior of each electron is influenced by the path taken by its predecessors, or one must attribute wave properties to each electron.

Things get even weirder when the experiment is set up so as to allow the option of detecting which slit the particles pass through. In so-called “quantum eraser” experiments (q.v., Wikipedia), photons passing through circular polarizers form a scattering pattern or an interference pattern according to whether the polarization that distinguishes which slit they went through is preserved or “erased” by a second, diagonal polarizer. And with the so-called “delayed choice” quantum eraser experiment (q.v., Wikipedia again), things get even weirder. But those are other stories, not covered in Polkinghorne’s book. Polkinghorne finds weirdness enough in the original plain double-slit experiment. He writes:
Electrons arriving one by one is particlelike behaviour; the resulting collective interference pattern is wavelike behaviour. But there is something much more interesting than that to be said. We can probe a little deeper into what is going on by asking the question, When an indivisible single electron is traversing the apparatus, through which slit does it pass in order to get to the detector screen? Let us suppose that it went through the top slit, A. If that were the case, the lower slit, B, was really irrelevant and it might just as well have been temporarily closed up. But, with only A open, the electron would not be most likely to arrive at the midpoint of the far screen, but instead it would be most likely to end up at the point opposite A. Since this is not the case [emphasis mine], we conclude that the electron could not have gone through A. Standing the argument on its head, we conclude that the electron could not have gone through B either. What then was happening? That great and good man, Sherlock Holmes, was fond of saying that when you have eliminated the impossible, whatever remains must have been the case, however improbable it may seem to be. Applying this Holmesian principle leads us to the conclusion that the indivisible electron went through both slits (pp. 23–24, latter emphasis in original).
Pay attention to the words “this is not the case” and try to identify their antecedent: what is not the case? The only thing mentioned in the preceding sentences that can in any clear sense be said to be the case or not the case is that the electron in question struck the screen at the point opposite slit A. But it would be insane to say casually that this is not the case, since there is nothing in the preceding stipulations that would justify such a conclusion. No, what Polkinghorne seems to mean by “this is not the case” is that it is not the case that the electron would be “most likely” to arrive at the point opposite slit A.

(Note: The remainder of this post has been extensively revised since I first posted it. The paragraphs that immediately follow analyze and criticize Polkinghorne’s argument in a very detailed fashion. Readers whose patience or interest is tried by such a treatment may profit by skipping down to the paragraph just before the first graph, in which I restate my criticisms by means of an analogy with an argument whose defects are much easier to recognize.)

Let “E” designate a randomly selected electron that is fired at the slits and that strikes the screen on the far side of them. The first part of Polkinghorne’s reasoning can then be summed up as follows:
  1. If E passes through slit A, then E is most likely to strike the screen at a point opposite A.
  2. E is not most likely to strike the screen at a point opposite A.
  3. Therefore, E does not pass through A.
The argument appears to be impeccable as far as its logical form is concerned. If that is so then the only question to be raised about its cogency is whether both premises are true. But in fact, reflection on the premises reveals an ambiguity that defeats the argument.

Consider premise 2 first. Given the setup of the experiment, the only evidence that we have for attributing to a randomly selected electron a probability of hitting one or another part of the screen is the interference pattern that emerges on the screen.  That pattern shows the highest incidence of impacts at the midpoint between the two slits. From this fact we can conclude that a randomly selected electron that strikes the screen is most likely to do so at the midpoint and not opposite either slit. This reasoning justifies Polkinghorne’s second premise.

Now consider the first premise. E was defined as an electron randomly selected from among all the electrons that reach the screen. But premise 1 concerns an electron that is randomly selected from among those that have passed through slit A. The pattern on the screen provides no evidence whatever relevant to a conclusion about the most likely point of arrival of such an electron as that. The only way to get evidence relevant to a conclusion about a randomly selected electron that has passed through slit A is either to block off slit B or to use a device that distinguishes the impacts of electrons that have passed through A from the impacts of electrons that have passed through B, as in the quantum eraser experiments. It is established that if we do either of these things then no interference pattern emerges. If we block off slit B, the highest incidence of impacts is opposite slit A, and if a device is used that distinguishes the electrons passing through A from those passing through B, then there will be an area of highest incidence opposite each slit. Under such conditions, Polkinghorne’s first premise is true. But his second premise is either false or irrelevant to the conclusion—false if it concerns an electron that has passed through slit A; irrelevant if it concerns an electron whose place of passage is undetermined.
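The difference between the two conditions can be illustrated with a standard textbook idealization (this sketch is mine, not anything in Polkinghorne’s book): each slit contributes a complex amplitude at a point on the screen. With both slits open and no which-path information, the amplitudes add before squaring; with which-path information, the intensities add.

```python
import cmath
import math

WAVELENGTH = 1.0
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0
k = 2 * math.pi / WAVELENGTH  # wavenumber

def amplitude(x, slit_y):
    """Complex amplitude at screen position x from a slit at height slit_y."""
    r = math.hypot(SCREEN_DISTANCE, x - slit_y)
    return cmath.exp(1j * k * r) / r

def coherent_intensity(x):
    # Both slits open, no which-path information: amplitudes interfere.
    total = amplitude(x, SLIT_SEPARATION / 2) + amplitude(x, -SLIT_SEPARATION / 2)
    return abs(total) ** 2

def incoherent_intensity(x):
    # Which-path information available: intensities simply add, no fringes.
    return (abs(amplitude(x, SLIT_SEPARATION / 2)) ** 2
            + abs(amplitude(x, -SLIT_SEPARATION / 2)) ** 2)

# At the midpoint the two path lengths are equal, so the interfering case
# gives twice the intensity of the which-path case (constructive interference).
print(coherent_intensity(0.0) / incoherent_intensity(0.0))  # 2.0
```

Away from the midpoint the coherent intensity oscillates between bright and dark fringes, while the incoherent sum varies smoothly with two broad humps opposite the slits, which is just the contrast between the interference pattern and the scattering pattern described above.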

Of course, this is not Polkinghorne’s entire argument, but only one half of its preliminary part. The second half of the preliminary part is the repetition of this argument with “slit B” taking the place of “slit A” in the premises and the conclusion.
  4. If E passes through slit B, then E is most likely to strike the screen at a point opposite B.
  5. E is not most likely to strike the screen at a point opposite B.
  6. Therefore, E does not pass through B.
That part of the argument is, obviously, just as futile with “B” in it as it was with “A,” but let it stand for the moment, so that we can consider the would-be clinching part of the argument: the supposedly Holmesian conclusion that the electron passed through both slits. Polkinghorne’s stated reasoning is:
  3. E does not pass through slit A.
  6. E does not pass through slit B.
  7. Therefore, E passes through both slits.
There is nothing Holmesian about such a conclusion at all: Sherlock Holmes would not mistake such a bald non sequitur for a deduction. Nor does it take the intellect of Sherlock Holmes to see that what follows from lines 3 and 6 is:
E does not pass through either slit.
    Of course, this is not the desired conclusion at all. Not only does Polkinghorne fail to establish his desired intermediate conclusions (3 and 6); he has set up his entire argument to establish the wrong final conclusion. The undesired conclusion can be avoided if we go back to re-write the previous two subordinate arguments by inserting the qualifier “solely” before the phrases “through slit A” and “through slit B,” and adding a further premise, “E passes through at least one slit.” But the fact remains that the intermediate conclusions 3 and 6 are not established or even made in the slightest degree credible by any of Polkinghorne’s reasoning.

    The defects in Polkinghorne’s argument can be brought out by means of an analogy. A graph of the distribution of heights among adults in the United States looks like this (this graph and the two that follow are taken from this Web page by John D. Cook Consulting):

    Graph 1: height distribution of all adults in US

    The numbers along the bottom represent height in inches. The midpoint of the peak is around 67 inches. Let R be a randomly selected adult resident of the United States. According to this graph, R is most likely to be 67 inches tall. So R is not most likely to be, say, 64 inches tall, or 70 inches tall.

    But R may, and indeed (setting aside rare cases of indeterminate sex) must, be either female or male. Suppose that R is female. The distribution of heights for adult females has a peak around 64 inches:

    Graph 2: height distribution of adult females in the US

    So if R is female, R is most likely to be about 64 inches tall. By contrast, the distribution for adult males has a peak around 70 inches.

    Graph 3: height distribution of adult males in the US

    So if R is male, R is most likely to be about 70 inches tall. Now imagine that, with these facts in hand, statistician John Jokinghorne presents us with the following argument:

    1. If R is female, then R is most likely to be about 64 inches tall (by graph 2).
    2. R is not most likely to be about 64 inches tall (by graph 1).
    3. Therefore, R is not female (from 1 and 2).
    4. If R is male, then R is most likely to be about 70 inches tall (by graph 3).
    5. R is not most likely to be about 70 inches tall (by graph 1).
    6. Therefore, R is not male (from 4 and 5).
    7. Therefore, R is both female and male (supposedly Holmesian conclusion from 3 and 6).

    Clearly, all three of the conclusions in this argument are non sequiturs. Conclusion 3 does not follow from (1) and (2), because adding the supposition that R is female, as in (1), makes (2) false or irrelevant to (3). The same applies to the relation of premises 4 and 5 to conclusion 6. And the would-be Holmesian conclusion is of stupefying inconsequence. One may think that Polkinghorne’s argument cannot be as bad as Jokinghorne’s, because it is not so obviously bogus; but logically considered, it is every bit as bad. Its logical defects are exactly analogous. They just happen to be less conspicuous because of the more recondite subject matter.
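The statistical side of the analogy is easy to verify. In the sketch below, the means and spreads are rough values assumed for illustration; the point is that the unconditional distribution is a mixture whose peak lies between the two conditional peaks, so the unconditional and conditional claims about what R is “most likely” to be describe different distributions and do not contradict one another.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) / sd) ** 2 / 2) / (sd * math.sqrt(2 * math.pi))

def mixture_pdf(x):
    # Rough illustration: a 50/50 mixture of female heights (peak ~64 in)
    # and male heights (peak ~70 in), each with an assumed spread of 3 in.
    return 0.5 * normal_pdf(x, 64, 3) + 0.5 * normal_pdf(x, 70, 3)

# Find the peak of the overall (unconditional) distribution on a fine grid.
heights = [h / 10 for h in range(550, 800)]
mode = max(heights, key=mixture_pdf)
print(mode)  # 67.0: between the conditional peaks at 64 and 70
```

Conditioning on “female” or “male” replaces the mixture with one of its components, and the most likely height shifts accordingly. Exactly the same shift is what invalidates Jokinghorne’s (and Polkinghorne’s) use of the unconditional premise against the conditional one.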

    One last observation: Presumably, Polkinghorne intends his argument to establish something not just about some randomly selected electron in the experiment but about every electron in the experiment, namely that it passes through both slits. The analogous conclusion of Jokinghorne’s argument would be:

    1. Every American adult is both female and male.

    If Jokinghorne’s argument does not incline you to accept this conclusion (and it shouldn’t), then neither should Polkinghorne’s argument incline you to accept his conclusion about the double-slit experiment.

    There may be compelling reasons in quantum mechanics to say that each electron goes through both slits, but whatever those reasons may be, Polkinghorne fails to state them. Making a popular exposition of quantum mechanics requires making the reasoning that leads to its paradoxical conclusions clear. Instead of this, Polkinghorne’s book offers the kind of confused thinking that can at best produce only incomprehension and that at worst produces the false belief that one has understood something when in fact one has merely participated in the author’s own confusions.

    Monday, September 30, 2013

    Philosopher Defends B***s***

    Stephen Asma argues that, because philosophers have failed to formulate a criterion to distinguish science from pseudo-science, the claims of traditional Chinese medicine cannot be dismissed. But it turns out that all that he thinks important is whether the treatments are effective—a question that he thinks immune to critical examination because it is not the sort of thing about which professional philosophers can engage in a lot of sophisticated-sounding talk.


    The greater one-horned rhinoceros: one of the species being 
    hunted to extinction to supply the market for traditional 
    Chinese medicine (source: National Geographic)


    Is The Stone, the philosophy blog of the New York Times, meant to be a platform on which professional philosophers can commit the intellectual equivalent of soiling themselves in public, or is my perception just biased by my attention to a few bad examples? I don’t know, but a piece by Stephen Asma published yesterday, “The Enigma of Chinese Medicine” (The Stone, September 29, 2013), certainly falls within the category of public trouser-fouling. Actually, it is an example of something even more contemptible than that: the employment of philosophical sophistication in the service of intellectual confusion.

    The argument of Asma’s piece, to the extent that it has one, is that, because philosophers have failed to solve the problem of demarcating science from pseudo-science, one cannot reject the claims of certain “alternative” medical practices, specifically traditional Chinese medicine (“TCM”) and feng shui. After opening with an anecdote about how he recovered from a cold shortly after ingesting a Chinese preparation of freshly spilled turtle blood and strong liquor, Asma intimates that one cannot rule out the possibility that the Chinese concoction has curative powers because of the persistence of what philosophers of science call “the demarcation problem”:
    The contemporary philosopher of science Larry Laudan claims that philosophers have failed to give credible criteria for demarcating science from pseudoscience. Even falsifiability, the benchmark for positivist science, rules out many of the legitimate theoretical claims of cutting-edge physics, and rules in many wacky claims, like astrology — if the proponents are clever about which observations corroborate their predictions. Moreover, historians of science since Thomas Kuhn have pointed out that legitimate science rarely abandons a theory the moment falsifying observations come in, preferring instead (sometimes for decades) to chalk up counter evidence to experimental error. The Austrian philosopher Paul Feyerabend even gave up altogether on a so-called scientific method, arguing that science is not a special technique for producing truth but a flawed species of regular human reasoning (loaded with error, bias and rhetorical persuasion). And finally, increased rationality doesn’t always decrease credulity.

    We like to think that a rigorous application of logic will eliminate kooky ideas. But it doesn’t. Even a person as well versed in induction and deduction as Arthur Conan Doyle believed that the death of Lord Carnarvon, the patron of the Tutankhamen expedition, may have been caused by a pharaoh’s curse.
    Setting aside the appeals to authority, Asma’s main claims here are these: (1) No one has identified a reliable criterion for distinguishing science from pseudo-science. (2) No one has given a credible specification of a method distinctive of science. (3) Increased rationality or practice in induction and deduction does not always decrease credulity toward kooky ideas.

    The third claim seems pretty clearly irrelevant to Asma’s thesis. The case of Arthur Conan Doyle’s belief in the curse of Tutankhamen (it is disappointing that Asma does not cite the more instructive case of Doyle’s belief in the Cottingley fairies) might be of some relevance if it were an instance in which someone was led to supernaturalistic conclusions by sound deductive and inductive reasoning (supposing such a thing to be possible). But it is not; it is an instance of the distortion of judgment by cognitive bias. Such cases remind us that cognitive bias afflicts all human beings without exception. That is precisely why we need instruction in deductive and inductive reasoning, as well as knowledge of the cognitive biases themselves: only then can we, sometimes, rise above our worse selves and correct our judgment in empirical matters when it goes awry. But, as Asma makes no further reference to these observations in his piece, I will say no more about them.

    What of the first two claims? Granted, for the sake of argument, that there is no known universal criterion or distinctive method demarcating science from pseudo-science, how is that supposed to lend credibility to the claims of traditional Chinese medicine? Asma never makes this clear, but the implied reasoning seems to be this: “Many people dismiss traditional Chinese medicine as pseudo-science; but there is no way to distinguish in principle between science and pseudo-science; therefore, one cannot dismiss traditional Chinese medicine.” If this is the intended argument, it fails on two scores. In the first place, the fact that one cannot give a universal criterion for distinguishing A from B does not show that there is no difference between A and B, or that one is unjustified in identifying something as an instance of A and not of B. There is, for example, no commonly accepted explanation of the distinction between right and wrong: it doesn’t follow that one can’t soundly and justly judge of some action that it is wrong. Second, even if Asma could show that traditional Chinese medicine escapes the charge of pseudo-science, it would not follow that its claims have the slightest degree of credibility. “Scientific” doesn’t imply “warranted” or “sound,” and “not pseudo-science” doesn’t imply “not a load of bollocks.”

    I have said that this seems to be Asma’s argument, but in the end it is not clear that Asma even intends to make an argument. What we get instead in the conclusion of the piece is just a certain insinuation. After providing a second anecdote of his undergoing a treatment by traditional Chinese medicine and subsequently feeling decidedly better, he concludes the piece with these paragraphs:
    It seems entirely reasonable to believe in the effectiveness of T.C.M. [traditional Chinese medicine] and still have grave doubts about qi. In other words, it is possible for people to practice a kind of “accidental medicine”—in the sense that symptoms might be alleviated even when their causes are misdiagnosed (it happens all the time in Western medicine, too). Acupuncture, turtle blood, and many similar therapies are not superstitious, but may be morsels of practical folk wisdom. The causal theory that’s concocted to explain the practical successes of treatment is not terribly important or interesting to the poor schlub who’s thrown out his back or taken ill.

    Ultimately, one can be skeptical of both qi and a sacrosanct scientific method, but still be a devotee of fallible pragmatic truth. In the end, most of us are gamblers about health treatments. We play as many options as we can; a little acupuncture, a little ibuprofen, a little turtle’s blood. Throw enough cards (or remedies), and eventually some odds will go your way. Is that superstition or wisdom?
    If this is the summation of Asma’s position, then the preceding references to the demarcation problem, far from being an argument, are just a sort of preparatory entertainment and are inessential, if not altogether irrelevant, to his main point. According to what Asma says here, it makes no difference whether traditional Chinese medicine is science or pseudo-science, because the legitimacy of its theoretical claims—about qi and so forth—is irrelevant to claims of its effectiveness. The “fallible pragmatic truth” of such claims is what really matters.

    Notice what has happened here. Having spent most of the piece considering the theoretical claims of traditional Chinese medicine and invoking the demarcation problem to argue—by a villainous non sequitur—that because we can’t solve the problem, we can’t dismiss such claims, he now says that it is only the claims of effectiveness that matter. And with regard to such claims, he coyly suggests, we are wise to take a chance on the treatments because we can’t know that the claims for them are false.

    What sort of claim is Asma talking about here? And how are such claims to be assessed? Notice that there is a world of difference between claims of the following two sorts:
    (1) I underwent treatment T, and subsequently my headache went away.

    (2) Treatment T is effective against headache.
    Obviously, the second claim is much stronger than the first. For one thing, it has a generality to it that the first lacks. But more than that, it asserts a causal connection that the first one does not. To establish claim (1), all that is needed is evidence that the speaker had a headache before she underwent T and ceased to have it afterward. To establish claim (2), we require evidence (a) that quite generally people with headaches who undergo T lose those headaches after undergoing T and (b) that this is not due to factors extraneous to T, such as the headaches going away by themselves. Anecdotes on the lines of claim (1) can support point (a), and many people content themselves with finding evidence of this nature—a manifestation of confirmation bias. But such anecdotes, no matter how many in number, fail to establish, or even to support, claim (2) if they are not supplemented by evidence supporting point (b). That sort of evidence is by far the more difficult sort to procure. It requires experimental controls, such as blinding and, optimally, even double-blinding (arranging that neither the administrator nor the recipient of the treatment knows whether T is being administered or not). And, of course, in any study of such matters, the significance of the results must be assessed according to the size of the sample, the basis of selection, possible sources of bias, and so forth. Such considerations are the ABCs of the “scientific method” on whose existence sophisticated philosophers of science like Feyerabend (invoked by Asma in the passage quoted earlier) would cast such scornful doubt.
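A toy simulation makes the gap between claims of type (1) and type (2) vivid. Here the treatment is stipulated to be completely ineffective, and the spontaneous recovery rate is an assumed figure; yet type-(1) anecdotes accumulate by the thousands:

```python
import random

random.seed(0)

N = 10_000
P_RESOLVE = 0.6   # assumed: most headaches go away on their own within a day

# Treatment T is stipulated to do nothing: the probability that a headache
# resolves is the same whether or not T is administered.
treated_resolved = sum(random.random() < P_RESOLVE for _ in range(N))
control_resolved = sum(random.random() < P_RESOLVE for _ in range(N))

# Type-(1) anecdotes abound: thousands can truthfully say
# "I underwent T, and subsequently my headache went away" ...
print(treated_resolved)   # roughly 6,000 satisfied customers

# ... but the type-(2) claim fails: the untreated group does just as well.
print(treated_resolved / N, control_resolved / N)  # both near 0.6
```

Without the control group, the first figure looks like overwhelming evidence of effectiveness; with it, the treatment is exposed as doing nothing at all. That comparison is precisely the evidence for point (b) that anecdotes cannot supply.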

    What is exasperating about Asma’s piece is that he proceeds as if none of this elementary scientific method existed, or mattered. He seems to reason that, once the theoretical claims of traditional Chinese medicine are set aside, no critical assessment of the remaining claims is possible: anecdotes are all the evidence that anyone has or needs, because the choice of a treatment is always a gamble anyway. Well, of course there is an element of uncertainty in any medical treatment; in that respect, any choice of treatment is a gamble. But the fact that we don’t know exactly what is going to happen in any specific instance does not entail that we don’t know a great deal about the effectiveness of treatments. And of course it is possible that one or another Chinese treatment will eventually be found to be genuinely effective (i.e., more effective than placebo); that does not entail that we don’t already know a lot of them to be ineffective, or that we have as much reason to believe as to disbelieve the claims of effectiveness made for others of them. (See, e.g., Joe Nickell, “Traditional Chinese Medicine: Views East and West,” Skeptical Inquirer, March–April 2012.)

    Pondering the “demarcation problem” is a professional specialty of philosophers. Evaluating the effectiveness of “alternative” medical treatments is not: it is a matter for medical scientists, though the results of their research can be judged and appreciated by persons without medical or other scientific training. Asma, having gotten to the end of his professional expertise by reflecting—uselessly and irrelevantly, as it emerges in the end—on the demarcation problem, proceeds as if there is simply no further basis for a critical assessment of the claims of traditional Chinese medicine. But there manifestly is: it just isn’t a matter of philosophical expertise. The specifically philosophical elements of Asma’s piece turn out in the end to be nothing but a blind for making it look as though claims of effectiveness made for traditional Chinese medicine and feng shui were beyond reach of critical examination.

    Tuesday, April 16, 2013

    Terrorism Close to Home

    A terrorist attack may bring forth responses that are ugly, stupid, crazy, or all three, in various measures. But the most common response is just what such acts aim at: terror.



    Photograph from Bloomberg via the Telegraph (UK)

    Around 3:50 yesterday afternoon, I looked at my Facebook page and was puzzled to read a post by someone of my acquaintance saying simply that she was “safe.” I was tempted to ask her what she had been up to that might have put her in danger, but as I looked further down the page, it became apparent that something terrible had happened that affected quite a few people. I went to a news site and was horrified to learn of the deadly violence that had struck in Copley Square an hour previously, about three miles away from where I sat.

    I am glad to be able to say that, so far as I have learned, no one of my acquaintance was among those killed or injured by the blasts. But I think all of us who live in the area feel in some obscure way wounded by it.

    And then there are those who have quite different reactions. My previous entry on the Westboro Baptist Church has been made somewhat more timely by the group’s announcement on its Twitter feed that it proposes to show up at the funerals of victims of yesterday’s incident. The Phelpses close their message with the assertion that “GOD SENT THE BOMBS IN FURY OVER F*G MARRIAGE!” I once remarked in this blog that natural disasters have a tendency to bring forth self-nominated prophets, ready to invoke divine causes for natural events. But those who consider themselves privy to divine intentions are also ready to render the same public service when sorrow is brought about by human hands, as shown by Pat Robertson and Jerry Falwell in their remarks shortly after the attacks of September 11, 2001 (observed in the third indented paragraph in this blog entry).

    But you don’t have to believe in supernatural causes to reject the most obvious natural causes of events: you may simply believe in vast hidden conspiracies, in the manner of apopheniac extraordinaire Alex Jones, who needed little evidence before declaring that the bombings were yet another false-flag operation by the United States government. (Added after posting: Elisabeth Parker at Addicting Info shows how Alex Jones rearranges the dots so that he can connect them.)

    The Phelpses and Alex Jones are clearly extreme examples of systematic cognitive (and, at least in the case of the Phelpses, not just cognitive) distortion. But even those of us who do not go to such extremes as a rule may be blown a bit off course by the force of an extraordinary event like this. One looks about with apprehension and suspicion, on guard for signs of the “next” attack, though in fact, the time and place of one such extraordinary event are among the least likely for the occurrence of another. Terrorist acts do occur, but in these parts it stands to reason that one is in less danger of such an act happening now than one was before yesterday’s incident. Bruce Schneier puts the point well in a piece published online in The Atlantic on the day of the event:
    . . . Terrorism is designed precisely to scare people—far out of proportion to its actual danger. A huge amount of research on fear and the brain teaches us that we exaggerate threats that are rare, spectacular, immediate, random—in this case involving an innocent child—senseless, horrific and graphic. Terrorism pushes all of our fear buttons, really hard, and we overreact.

    But our brains are fooling us. Even though this will be in the news for weeks, we should recognize this for what it is: a rare event. That’s the very definition of news: something that is unusual—in this case, something that almost never happens. 
    When we learn of a terrorist attack, we naturally follow what in cognitive psychology is called the availability heuristic, which is the natural human tendency to estimate the probability of an event of a certain class according to the “availability,” in our minds, of instances. The stronger the emotional charge on an instance, the more readily it comes to mind, and the more we tend to overestimate the probability of an event of that kind occurring. We do not necessarily fall into the craziness of religious maniacs and conspiracy fantasists, but our cognitive game is certainly below its best. Terrorism works by playing on this cognitive weakness, and making us feel that we are in much more danger than we actually are.
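    The distortion described above can be given a toy numerical form. The following sketch is my own illustration, not anything from the post or from the psychological literature, and every number in it is invented: it models a “memory” of events, each tagged with an emotional vividness, and compares an unbiased frequency estimate with one in which recall is weighted by vividness, as the availability heuristic suggests.

    ```python
    # Toy model of the availability heuristic (illustrative only; all
    # figures are made up). A rare but vivid event is overweighted when
    # recall is biased by emotional salience.

    events = (
        [("terrorist attack", 10.0)] * 1      # rare, highly vivid
        + [("car accident", 2.0)] * 40        # common, moderately vivid
        + [("uneventful day", 1.0)] * 959     # very common, forgettable
    )

    def estimated_probability(kind, memory, weight=lambda v: 1.0):
        """Share of recalled events of the given kind, with recall
        weighted by vividness (weight=1 gives the true frequency)."""
        total = sum(weight(v) for _, v in memory)
        hits = sum(weight(v) for k, v in memory if k == kind)
        return hits / total

    true_rate = estimated_probability("terrorist attack", events)
    felt_rate = estimated_probability("terrorist attack", events,
                                      weight=lambda v: v)

    print(f"true frequency: {true_rate:.4f}")  # 1/1000 = 0.0010
    print(f"felt frequency: {felt_rate:.4f}")  # inflated by vividness
    ```

    With these invented weights, the “felt” probability comes out nearly ten times the true one, which is the shape of the overestimation Schneier describes, even if the real mechanism in the brain is of course nothing so tidy.
    
    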

    Schneier follows his observation with an instructive historical reminder:
    Remember after 9/11 when people predicted we’d see these sorts of attacks every few months? That never happened, and it wasn’t because the TSA confiscated knives and snow globes at airports. Give the FBI credit for rolling up terrorist networks and interdicting terrorist funding, but we also exaggerated the threat. We get our ideas about how easy it is to blow things up from television and the movies. It turns out that terrorism is much harder than most people think. It’s hard to find willing terrorists, it’s hard to put a plot together, it’s hard to get materials, and it’s hard to execute a workable plan. As a collective group, terrorists are dumb, and they make dumb mistakes; criminal masterminds are another myth from movies and comic books.
    I’m taking a plane trip soon, and I am not eager to learn what additional inconveniences I shall have to endure. That is a much more realistic worry, I think, than any suspicion of another attack.

    Sunday, April 14, 2013

    Sane People with Insane Beliefs

    People who believe crazy things are not necessarily crazy; but neither are beliefs sane just because the people who hold them are so.

    Photo taken from The Lonely Conservative

    In a previous post on this blog (“Lewis Black on Creationism,” April 1, 2011), I included a video of Lewis Black, in a comedy performance, saying this:
    There are people who believe that dinosaurs and men lived together, that they roamed the earth at the same time. There are museums that children go to in which they build dioramas to show them this. And what this is, purely and simply, is a clinical psychotic reaction. They are crazy. They are stone-cold fuck nuts.
    As much as I relish Black’s comic exaggerations, I don’t accept them as literal truth, and I suspect that he didn’t so intend them either. Present a young-earth creationist with a problem about plumbing or accounting or gardening and I am pretty sure that he or she will respond to it as rationally as anyone else. It is only when a religious question arises, or rather a question to which their religious beliefs dictate an answer, that they talk like crazy people. If religious extremism were to be regarded as a psychosis, it would have to be a localized and artificial one. And eccentric beliefs are manifestations, not causes or constituents, of any condition that would be deemed psychotic in medical practice.

    Louis Theroux has made a couple of documentaries in which he visits and converses with members of the Phelps family, the people behind the notorious Westboro Baptist Church: The Most Hated Family in America (2007) and America’s Most Hated Family in Crisis (2011). I find it natural to describe these people as “loonies” or “wackos”; and to say of them, in Black’s words, that they are “stone-cold fuck nuts” is almost irresistible. But it is plain to any sort of fair scrutiny that they are not insane: it is merely their beliefs and their way of thinking that are so.

    Yet that does not make them any the less disturbing. On the contrary, their demonstration that sane people can embrace an insane outlook is part of what makes them disturbing.

    These people seem to have answers to any objections that one might raise against their views. I don't believe it would be possible to make any progress in argument with them (and I certainly would not care to try). What I might think of as an appeal to reason or evidence they would, I imagine, dismiss as relying on a “humanistic” perspective—as contrasted with “God’s” perspective, which is the one that they claim to take. And if I move to explain away their behavior in terms of ignorance and delusion, they will just as readily explain away my outlook as due to the influence of Satan.

    Does this mean that there is no rational basis for choosing between my “humanistic” perspective and their supposedly divine one? No; it just means that neither side can persuade the other.

    And yet, the matter will not rest there. For no one who accepts empirical evidence, scientific method, and logical and conceptual coherence—all of which may be gathered, very loosely, under the name of “reason”—rather than scripture, dogma, and personal influence as proper sources of authority in judgment can be content to regard such a practice as a mere private taste or predilection. The appeal to reason is an appeal that all human beings make and must make in determining what is the case. But some do so in the service of convictions that are not only implausible in themselves but that have implications that conflict with common experience, common sense, or common decency. They reason, but they are not reasonable.

    The people of the Westboro Baptist Church provide one illustration of this phenomenon. Another, I think, is provided by right-winger Alan Keyes, who in an interview recently offered the following account of the movement for marriage rights for same-sex couples: “The aim is not compassion for homosexuals, respect for homosexuals, and all of this; the aim in the mind of these hard-headed, calculating, leftist, Communist totalitarians is to destroy the family and to establish the notion that once you have seized power there is no limit whatsoever to what you can do.” (Recording and transcript at Right Wing Watch.)

    Wednesday, May 4, 2011

    Tavris and Aronson’s Mistakes Were Made (But Not by Me): Reading Notes

    A book arguing for the power of the concept of cognitive dissonance to explain “why we justify foolish beliefs, bad decisions, and hurtful acts” lacks one thing: a defensible explanation of what cognitive dissonance is.


    The following is not a review but merely a comment on one particular point in the book Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts by Carol Tavris and Elliot Aronson,1 namely its failure to explain the concept and the associated theory that are the central theme of its argument. I ought perhaps to mention at this point, since you might think otherwise upon reading what follows, that I found the book immensely instructive and disturbing in a potentially very salutary way. Its strength lies in its description and analysis of the various ways in which our need to feel justified in what we think, say, and do drives us to think, say, and do wrong and harmful things. Its weakness lies in its failure to explain the rubric under which it does this work of description and analysis, the concept of cognitive dissonance.

    * * *

    Before I read this book, I was acquainted with the term “cognitive dissonance” but had only a rather vague notion of what it means. Having read the book, I have a better idea of what it means, and of the psychological research that is associated with it; but the book contains no satisfactory explanation either of what cognitive dissonance is or what cognitive dissonance theory is. The authors repeatedly say that cognitive dissonance theory predicts this and cognitive dissonance theory predicts that, but they never tell us what the theory is—an omission that diminishes not only the usefulness of their book but also the credibility of their argument. We cannot make any informed judgment of the value of the theory if we are never told what it is, but told only of its alleged predictive successes.

    Aronson and Tavris offer an explanation of the term “cognitive dissonance” at one point; but it is quite inadequate. It occurs just after an account of the researches of social psychologist Leon Festinger and his collaborators on the response of the followers of a pretended seer, one Marian Keech, to the failure of her prophecy that on a certain date a spaceship would come to rescue them before the earth would be destroyed.2 One might suppose, if one has not previously observed how the adherents of such prophecies behave when confronted with the failure of them, that the followers would be disillusioned and see that their faith in Mrs. Keech was misplaced. But Festinger, the authors report, made a more nuanced, specific, and, as it transpired, more accurate prediction:
    The believers who had not made a strong commitment to the prophecy—who awaited the end of the world by themselves at home, hoping they weren’t going to die at midnight—would quietly lose their faith in Mrs. Keech. But those who had given away their possessions and were waiting with the others for the spaceship would increase their belief in her mystical abilities. In fact, they would now do everything they could to get others to join them. (12)
    At the end, the authors observe, “Mrs. Keech’s prediction had failed, but not Leon Festinger’s.” They then move on to the theory to which they credit this prediction—the theory of cognitive dissonance. They write:
    The engine that drives self-justification, the energy that produces the need to justify our actions and decisions—especially the wrong ones—is an unpleasant feeling that Festinger called “cognitive dissonance.” Cognitive dissonance is a state of tension that occurs whenever a person holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent, such as “Smoking is a dumb thing to do because it could kill me” and “I smoke two packs a day.” Dissonance produces mental discomfort, ranging from minor pangs to deep anguish; people don’t rest easy until they find a way to reduce it. In this example, the most direct way for a smoker to reduce dissonance is by quitting. But if she has tried to quit and failed, now she must reduce dissonance by convincing herself that smoking isn’t really so harmful, or that smoking is worth the risk because it helps her relax or prevents her from gaining weight (and after all, obesity is a health risk, too), and so on. Most smokers manage to reduce dissonance in many such ingenious, if self-deluding, ways. (13)
    The authors cite the pair of thoughts “Smoking is a dumb thing to do because it could kill me” and “I smoke two packs a day” as an example of “two cognitions that are psychologically inconsistent.” But is there any inconsistency at all between these two thoughts? Certainly they are not logically inconsistent: it is possible for both to be true. Nor is there any kind of probabilistic conflict between the two: it does not defy probability that both should be true. The authors say, in the paragraph immediately following the one just quoted, “Dissonance is disquieting because to hold two ideas that contradict each other is to flirt with absurdity . . .” But there is no contradiction between the two cognitions in the example.

    The authors say that the two cognitions are psychologically inconsistent. But what is that supposed to mean? That no one can affirm both thoughts at the same time? But surely people can do so; if they could not, then this pair of cognitions could not be an example of cognitive dissonance! Wherein, then, is the “psychological inconsistency” supposed to consist? Perhaps in the fact that affirming both thoughts creates discomfort? But the discomfort was supposed to be the effect of a so-called psychological inconsistency. If the so-called inconsistency is nothing other than the discomfort itself, then the definition amounts to saying that cognitive dissonance is the state of tension that occurs whenever a person holds two cognitions that produce a state of tension—which tells us essentially nothing.

    It is a dismal failing for a book to give no satisfactory explanation of the very concept that is at the core of its argument. We are left to figure out for ourselves what the concept is from the evidence of the use that the authors make of it.

    One point about the concept that is clear is that it has an immediate bearing on the common human proclivity for self-justification. It is, in fact, supposed to provide the answer to the question implied by the book’s subtitle: “why we justify foolish beliefs, bad decisions, and hurtful acts.” We justify, or attempt to justify, such things because it is difficult for us to accept that our beliefs have been foolish, our decisions bad, or our acts hurtful. It is surely these negative evaluations of ourselves that are the source of the discomfort of which the authors speak. In the example quoted above, there is, as I said earlier, no inconsistency between the thoughts “Smoking is a dumb thing to do because it could kill me” and “I smoke two packs a day”; but the combination of those thoughts entails the thought “I do a dumb thing.” That implication, and not any inconsistency between the first two thoughts, is the source of our discomfort. To reduce dissonance, we must do things, or rather think things, that will allow us to avoid accepting that conclusion.

    It seems to me that all of the examples discussed by the authors fit under this explanation of the concept better than they fit under the explanation that they give. Marian Keech could not give up the idea that she had visionary powers because she had built so much of her understanding and evaluation of herself upon that idea. Her most devoted followers could not give up that idea precisely because they had devoted themselves to her in quite costly ways: to admit that their faith in her was misplaced would be to admit that they had been extravagantly foolish.

    Further, it is evident that many cases that fit under the authors’ definition will not illustrate what they mean by cognitive dissonance. Suppose, for instance, that I remember distinctly, or seem to remember distinctly, leaving a book in a certain place a short time ago, but that when I return to that place, I don’t find the book there (and suppose also that I am alone in my room when this has gone on). This may cause me perplexity, consternation, irritation, frustration, and other unpleasant emotions, but it will not give rise to what Aronson and Tavris seem to have in mind when they use the term “cognitive dissonance.” Certainly it will not drive me to try to explain the non-appearance of the book in self-justifying ways. Rather, my reaction will most likely be first to look around to see if the book has fallen down somewhere, and then, if that does not lead to the discovery of it, to conclude that my memory is at fault: I must have put the book somewhere else and forgotten doing so. Yet here we clearly have a case of discomfort produced by an inconsistent pair of cognitions—“I left the book right here (and no one else has been around to move it)” and “The book is not here.” There is no cognitive dissonance involved because the conflict between these two cognitions does not, or does not seriously, threaten my evaluation of myself. It does compel me to acknowledge the faultiness of my memory, but it will not be the first thing to have done that.

    In sum, what the authors talk about under the heading of “cognitive dissonance” is not, as they say in their attempt at a definition of the term, an inconsistency between two cognitions, but an inconsistency between some body of cognitions and our estimation of ourselves.

    17 June 2012: Correction made in penultimate paragraph: “will not give rise to” replaces “would give rise to.”

    * * *

    After writing the comment above, I came across the following passage in the Wikipedia article “Cognitive Dissonance”:
    An overarching principle of cognitive dissonance is that it involves the formation of an idea or emotion in conflict with a fundamental element of the self-concept, such as “I am a successful/functional person,” “I am a good person,” or “I made the right decision.”
    I wish that I had a better source for the attribution of this principle to the concept or the theory of cognitive dissonance than Wikipedia, but as far as it goes, it confirms the argument that I developed independently. What puzzles me is that something so obviously important would fail to make its way into the argument of Mistakes Were Made. Elliot Aronson, also according to Wikipedia (the article on him), “is listed among the 100 most eminent psychologists of the 20th Century,” “is the only person in the 120-year history of the American Psychological Association to have won all three of its major awards: for writing, for teaching, and for research,” and “in 2007 . . . received the William James Award for Lifetime Achievement from the Association for Psychological Science.” Why he and Carol Tavris failed to include this essential point in their exposition—which is virtually a non-exposition—of the central concept of their book, I do not know, but the fact that they did so confirms my suspicion that sloppiness in the handling of crucial concepts is very common in the discipline of psychology.


    BIBLIOGRAPHICAL REFERENCES

    1Carol Tavris and Elliot Aronson, Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts (Orlando, etc.: Harcourt, 2007).

    2Leon Festinger, Henry W. Riecken, and Stanley Schachter, When Prophecy Fails: A Social and Psychological Study of a Modern Group that Predicted the End of the World (Minneapolis: University of Minnesota Press, 1956).

    Wednesday, March 16, 2011

    A Rough Introduction to Critical Thinking

    A clip from the video Dara Ó Briain Talks Funny, with a partial transcript.


    The clip embedded above is an excerpt from a video recording of Irish comic Dara Ó Briain (pronounced “dah-ra o-bree-an”) in performance at the Hammersmith Apollo Theatre in London in 2008. In this clip, he addresses himself to popular forms of ignorance and misunderstanding regarding matters of scientific knowledge (“a general kind of lack of knowledge about science,” as he says at 0:20). Ó Briain can be a bit rough on those who propagate defective forms of thinking (“Jesus, homeopaths get on my nerves!”), and his performance, being stand-up comedy rather than a lecture, does not include much presentation of evidence pertinent to the evaluation of claims: hence my description of this as a “rough introduction” to critical thinking. But his act shares with critical thinking the aims of exposing folly and revealing truth.

    Of course, a performance like this is made to be seen and heard, not to be read in transcribed form. Nonetheless, I find much of it so pithy and so well said that I like to have the words before my eyes. So by all means, watch the video before you read what follows. But once you have watched it, if you find Ó Briain’s words as well chosen as I do, you may want to refer to the following transcript of the stretch of this performance running from about 1:40 to 4:20.
    But there’s a kind of notion that “Every opinion is equally valid.” My arse! Bloke who’s a professor of dentistry for forty years does not have a debate with some idiot [eejit] who removes his teeth with string and a door, right? It’s nonsense! And this happens all the time with medical stuff on the television. You’ll have a doctor on and they’ll talk to the doctor and be all “Doctor this” and “Doctor that,” and “What happened there?” and “Doctor, isn’t it awful?”, right? And then the doctor will be talking about something with all the benefit of research and medical evidence, and they’ll turn away from the doctor in the name of “balance,” and turn to some—quack—witch doctor—homeopath—horseshit peddler on the other side of the studio!

    And I’m sorry if you’re into homeopathy. It’s water! How often does it need to be said? It’s just water. You’re healing yourself; why don’t you give yourself the credit? Jesus, homeopaths get on my nerves, with the old “Well, science doesn’t know everything”! Well, science knows it doesn’t know everything, otherwise it would stop. But it’s aware of it, you know? Just because science doesn’t know everything doesn’t mean that you can fill in the gaps with whatever fairy tale most appeals to you.

    “Oh, well, the great thing about homeopathy is that you can’t overdose on it.” Well, you can fucking drown! I’m sorry: it seems harsh, and I used to be much more generous about it, but right now I would take homeopaths and I would put them in a big sack with psychics, astrologers, and priests, and I’d close the top of the sack with string, and I’d hit them all with sticks. And I really wouldn’t worry who got the worst of the belt of the sticks, right? Anyone who in answer to the difficult questions in life, to “I don’t know what happens after I die,” or “Please, what happens after my loved ones die?” or “How can I stop myself dying?”—the big questions—gives them an easy bullshit answer, and you go, “Do you have any evidence for that?”, and they go, “There’s more to life than evidence”: get in the fucking sack!

    I’m sorry, “Herbal medicine! Oh, herbal medicine’s been around for thousands of years!” Indeed it has, and then we tested it all, and the stuff that worked became “medicine,” and the rest of it is just a nice bowl of soup and some potpourri, so knock yourselves out. “Chinese medicine, oh, Chinese medicine! But there are billions of Chinese, Chinese medicine must be working.” Here’s the skinny on Chinese medicine: A hundred years ago the life expectancy in China was 30. The life expectancy in China at the moment is 73. And it’s not feckin’ tiger penis that turned it around for the Chinese. Didn’t do much for the tiger either, if you don’t mind me pointing out.
    There is one further joke at the expense of the Chinese before the next burst of laughter and applause from the audience, but I have omitted it, as I think it appears to disadvantage when transcribed.

    Sunday, March 13, 2011

    A False Truism

    The common saying “Everything happens for a reason” is neither true nor a truism, but a swindle in which the preposterous is peddled in the guise of the obvious.


    Logo of the True/False Film Festival

    A truism is a statement that is self-evidently true. A false truism would be a statement taken for a truism that is in fact not one, either because it is true but not self-evidently so or because it is not true at all. In the latter case, it is doubly false: it is not a truism, and it is not true. The saying “Everything happens for a reason” is a false truism of this double-dyed sort.

    How does a falsehood get mistaken for a truism? Typically by a woolly-minded, or a devious, confusion with a truism. The saying “Everything happens for a reason” gets its hold on people’s minds, or at least their mouths, by a confusion of elements of two truths that are entirely distinct from it and from each other.

    If you deny the saying “Everything happens for a reason,” people who are attached to it may react by saying, “So you think things can happen for no reason at all?” And now you may find yourself embarrassed; for an affirmative answer seems to imply that you think that things can happen without any cause. Thus, the saying in question gains some appearance of cogency from its suggestion of the entirely distinct thought that for everything that happens, there is a reason why it happens. The latter thought is, if not a truism, at least a truth, apart from such arcane reaches as quantum mechanics and cosmogony. It means merely that everything that happens is a consequence of some cause or causes.

    Why, for example, does the sun go higher in the sky in summer than in winter? Because the earth’s axis is tilted relative to its orbit, and summer is the time of year when the polar tilt in a given hemisphere is toward the sun, winter the time when it is away from the sun. Why has my car’s fuel mileage suddenly gotten worse? I don’t know why, but I will take it to a repair shop so that a mechanic can find the reason. And so on. These are examples of the use of the concept of a reason why something happens.

    The phrase “for a reason” has an entirely different meaning and a different range of application. We can ask for what reason someone does this or that, but it makes no sense to ask about the reason for an occurrence that is not the act of an intelligent agent. For instance, say a creaking sound comes through the ceiling. We might ask: “Why does that happen?” The answer might be: “Someone is walking around in the apartment upstairs.” That is the reason, or a reason, why the creaking happens. We might then ask further: “Why is the person upstairs walking around?” The answer might be: “She has things to do around her apartment (and why shouldn’t she walk around up there, anyway?).” That is the reason—or, again, a reason—for her walking around, or her reason for walking around.

    Now consider the question: “For what reason does the ceiling creak?” This is a conflation of two different forms of expression. The ceiling does not creak for a reason; the ceiling does not have a reason for creaking. There is a reason why the ceiling creaks, but that is another matter entirely. It is senseless to attribute reasons to the ceiling because the ceiling is not an intelligent agent. If the person asking this ill-formed question meant exactly what he or she says, then he or she would have to think that the ceiling is an agent and that creaking is something that it does intentionally; for only then would it be intelligible to ask for what reason it does so. More likely, though, the question is just an affected or confused way of asking, “What causes the ceiling to creak?” (or more simply, “Why is the ceiling creaking?”).

    So it is fair to say, “For everything that happens, there is a reason why it happens,” or to say, “Everything that is done intentionally is done for a reason.” The former is a truth, arguably a truism, and the latter certainly a truism, as it merely explicates the meaning of the expressions “intentional” and “(to do something) for a reason.” But when people say “Everything happens for a reason,” they do not mean either one of these things, though their utterance gains its appearance of plausibility from its suggestion of both. What do they mean? It is not easy to answer this question, as the utterance gains its hold on people’s minds precisely by its confusion and obscurity.

    One cannot translate nonsense into sense, but one can sometimes identify a coherent thought that is half-expressed, half-concealed in an incoherent utterance. In the case of the saying “Everything happens for a reason,” the half-expressed, half-concealed thought is that everything that happens does so because some intelligent agent, whether human or superhuman, makes it happen for some reason. But the saying can only appear truistic by omitting all mention of agency. It incoherently combines the expression “for a reason,” which implies an agent, with “things happen,” which implies no agent (as I noted in my previous entry in this blog with reference to a recent utterance by Newt Gingrich).

    Once the implicit thought is made explicit, it loses all appearance of truism, and indeed of plausibility. If someone said, “Everything that happens is intentionally made to happen by some agent or other,” the utterance, if it were not simply dismissed with a snort, would provoke such questions as “How do you know that? What agent or agents do you have in mind? What basis can you possibly have for such an extravagant claim? Do you seriously mean to imply that when I sneeze, there is a sneeze-spirit of some kind that makes me sneeze? Or that God pushes the molecules around to tickle my nose?” And so on. Few people would be willing to commit themselves to such a fatuous claim. Yet millions of speakers are unashamed to utter and to accept a saying in which this very thought is conveyed by subterfuge.

    The saying is not just confused, preposterous, and dishonest: it is also insulting to victims of serious misfortune. Those who say to such persons, “Everything happens for a reason,” are almost certainly playing either Pollyannas or Job’s comforters. The Pollyannas mean that your misfortune serves some good end beyond itself. The Job’s comforters mean that you had it coming to you. Both meanings are obnoxious, as they trivialize the victim’s suffering and even put the victim in the wrong for feeling it. I include the qualification “almost certainly” in my statement because it is just possible that such people intend a different meaning: they could (though I doubt that many do) mean that God, or whatever spirit caused your misfortune, did so for a reason that has nothing to do with justice or goodness. The point is not to console the sufferers but to remind them that we are all helplessly in the shit together. This, to my mind, is the primary thought of the Book of Job, as I have argued in a previous entry, contra Rabbi Harold Kushner; though most people, Rabbi Kushner among them, prefer to impose a more conciliatory meaning upon that terrible tale.

    Friday, March 11, 2011

    How Many Forms of BS Can You Spot in This Utterance?

    Newt Gingrich on his dark past: “There’s no question that at times in my life, partially driven by how passionately I felt about this country, that I worked far too hard, and that things happened in my life that were not appropriate.”



    Former Republican Speaker of the House Newt Gingrich recently gave an interview to David Brody of the Christian Broadcasting Network. The first of the three clips posted by Brody at CBN.com (March 8, 2011) begins with him asking Gingrich the following rather elliptical question (the transcriptions that follow are my own):
    You know the question, and I’m not going to ask it the way everybody else will ask it, but as it relates to the past, and some of those personal issues that you’ve had. You’ve talked about how God is a forgiving God, and I’d like you to expand upon that: as you went through some of those difficulties, how you saw God’s forgiving nature in all of that.
    Such is Brody’s delicacy that he never actually says what “the question” is. Perhaps he is presuming that his viewers will know that Gingrich is now on his third marriage; that his relationship with the woman who became wife no. 2 started while he was married to wife no. 1; that he initiated a divorce from wife no. 1 when she was recovering from surgery for uterine cancer; that his relationship with the woman who became wife no. 3 started while he was married to wife no. 2; that he initiated a divorce from wife no. 2 on the day when she was diagnosed with multiple sclerosis; and that he has a history of further marital infidelities. (For Gingrich’s marital history, see the pages at About.com on Gingrich’s first and second marriages; for his other infidelities, see this article at Frontline.) These matters are presumably the “personal issues” to which Brody vaguely refers. Gingrich replies:
    Well, I mean, first of all, there’s no question that at times in my life, partially driven by how passionately I felt about this country, that I worked far too hard, and that things happened in my life that were not appropriate. And what I can tell you is that when I did things that were wrong, I wasn’t trapped in situation ethics, I was doing things that were wrong, and yet—I was doing them. I found that I felt compelled to seek God’s forgiveness—not God’s understanding, but God’s forgiveness—and that I do believe in a forgiving God. And I think most people, deep down in their hearts, hope there’s a forgiving God.
    Now, to be fair, Brody did not ask Gingrich to confess his misdeeds, but only to tell how he understood God’s forgiveness in relation to those misdeeds, whatever they were. Nonetheless, to speak intelligibly of being forgiven, one must at least acknowledge misconduct. And Gingrich does indeed get around to saying that he “was doing things that were wrong.” It is interesting, though, to see how much evasion and obfuscation he commits before he gets there. Consider his first sentence: At times in my life, partially driven by how passionately I felt about this country, I worked far too hard, and things happened in my life that were not appropriate. There are so many forms of dishonesty and cowardice packed into this fairly short utterance that it is instructive to try to identify them individually.

    (1) Let us start with the most obvious one: “partially driven by how passionately I felt about this country.” One is reminded of Samuel Johnson’s remark upon the resort to patriotism by scoundrels. Here Gingrich suggests that the ultimate motive of his marital misconduct was love of country—or, as the headline of an article by Jack Stuef at Wonkette more satirically puts the claim, that “Newt Gingrich committed adultery because America made him horny.” By trying to attribute his bad conduct to a good motive, Gingrich follows the most commonly practiced strategy of reply to the bullshit interview question “What do you consider your greatest weakness?”, namely to admit to a weakness that is really a strength. In fact, he virtually repeats the best-known bullshit answer: “I sometimes care about my work too much!”

    (2) To be sure, Gingrich includes the qualifier “partially,” as if sensing that, without it, his assertion might be a more blatant absurdity than even people who consider him a credible political figure would be able to accept. But that merely compounds the disingenuousness of his statement. The absurdity is not the idea that love of country can be the sole motive to betraying one’s marriage partner, but that it can be such a motive at all. The addition of the word “partially” is a sop thrown to those credulous or dull-minded enough to miss this point.

    (3) Perhaps what Gingrich means to attribute to his love of his country is not his marital infidelities but only his working “far too hard,” with the implication that this in turn created the conditions leading to such misconduct. But how so? We have only the bare conjunction of the phrases “I worked far too hard” and “things happened in my life that were not appropriate.” There is no indication of how those two facts are supposed to be related. The attempt to draw blame from his conduct off into the forgivable or even laudable habit of “working too hard” is lost in vagueness.

    (4) Compare the following two phrases:
    (a) I worked hard.
    (b) Things happened.
    Notice that the speaker of (a) identifies himself as an agent, while the speaker of (b) does not identify any agent at all, but only uses the vague grammatical subject “things.” When Gingrich is speaking of conduct that may be reckoned to his credit, he identifies himself as an agent: “I worked far too hard.” When he is speaking of his misconduct—perhaps to describe him as “speaking of it” gives him too much credit; “obliquely alluding to it” seems nearer the mark—he disappears in a puff of evasion: “things happened in my life.” This is, of course, a variant of that watchword of the inveterately irresponsible, “Mistakes were made.”

    (5) “Not appropriate.” I have saved the worst for last. I know of no phrase whose use so concisely manifests the collapse of moral intelligence as does this one. But that collapse is not at all peculiar to Gingrich; it can be observed wherever English is spoken. An epidemic of stultification seems to have robbed people of the command of intelligent moral vocabulary. Having apparently lost command of terms like “outrageous” (now more commonly used, idiotically, as a term of praise), “unconscionable,” “irresponsible,” “cruel,” “selfish,” “base,” “dishonest,” and so forth, to say nothing of simple and obvious ones like “bad” and “wrong,” people wishing to speak of misconduct find nothing at their disposal but a puffed-up term of etiquette.

    Surely we all know what “appropriate” means. A fur hat is not appropriate to wear with a linen suit; “fuck” is an inappropriate word to use in polite company; a Phillips-head screwdriver is not appropriate for driving slotted-head screws. The word “appropriate” is what logicians call a two-place predicate, one that indicates a relation between two things: paradigmatically, a is appropriate to b. What is not appropriate to one thing is typically appropriate to some other. To describe acts of marital infidelity as “things that were not appropriate” implies that their only fault is that they were done at the wrong time, on the wrong occasion, or with the wrong person, in some sense of “wrong” not yet specified—as, for instance, a plaid tie is wrong (inappropriate) to wear with a striped shirt.
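    The logical point can be put schematically (the notation here is my own illustration, not anything Gingrich or the logicians are committed to). Writing $A(a, b)$ for “$a$ is appropriate to $b$,” the predicate has two argument places, and negating it for one pair of arguments says nothing about the other pairs:

```latex
\[
  A(a, b) \quad \text{(read: ``$a$ is appropriate to $b$'')}
\]
\[
  \neg A(\text{plaid tie},\ \text{striped shirt})
  \;\not\Rightarrow\;
  \forall x\; \neg A(\text{plaid tie},\ x)
\]
```

    That is, calling an act “not appropriate” only denies that it fit its particular occasion or partner; it leaves entirely open that the same act would have been perfectly appropriate elsewhere, which is precisely the exculpatory vagueness the phrase trades on.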

    Of course, it is safe to presume that Gingrich, like all other people who use this cretinous and obfuscating jargon, does not intend any of these implications. He surely does not mean that he chose the wrong women with whom to betray his wives, or the wrong occasions for doing so. But what does he mean? An associate with whom I was discussing Gingrich’s interview on Facebook made the comment: “The real mistake here is thinking that Mr. Gingrich attaches any meaning other than dog-whistle meaning to his words.” Setting aside the question whether Gingrich has pitched his whistle correctly for the evangelical Christian audience that he hopes to influence, this seems to me correct. When Gingrich describes his former conduct as “not appropriate,” there is not much to be said about what, if anything, he means by his words, in the sense of intending something capable of being true or false. Yet he surely means to do something by uttering those words. I would say that he means to indicate repentance without actually acknowledging misconduct. He does not admit to having acted selfishly, exploitatively, deceptively, cruelly, or irresponsibly; he does not admit to having acted at all; he simply describes “things that happened” in his life as “not appropriate.”

    Well, that’s my attempt to analyze the utterance of this paragon of dishonesty and moral cowardice. Does anyone see anything that I have missed?

    Thursday, May 20, 2010

    Three Kinds of Religious Beliefs

    Religious beliefs contain both natural and supernatural elements. The natural elements do more than the supernatural ones to make systems of religious belief rationally untenable in light of science.



    Moses at Sinai: lithograph by F. W. McCleave, 1877

    There is a common tendency—at least, it seems to me very widespread—to equate religion with religious belief. Whatever convenience such an equation may have for thinking about Christianity, it makes nonsense of Judaism. To say that someone “practices Judaism” is perfectly intelligible; to say that someone “believes Judaism” is a bizarre combination of words.

    Nonetheless, it is plain that there are Jewish beliefs, that is, beliefs characteristic of Judaism, or at least of this or that variety or denomination of Judaism. Some of these beliefs may even be considered to be foundational, in the sense that they provide a rationale for religious observances. The nineteenth-century movement to preserve traditional Jewish observances called itself “Orthodoxy”—“correct belief”—for a reason: it also meant to preserve, or rather to establish, a body of specifically Jewish doctrine or dogma. [1]

    But what sorts of beliefs may be counted as religious ones? Consider the following three propositions as examples:
    1. The Torah (i.e., the Pentateuch) was written down in the Sinai desert by Moses more than three thousand years ago.
    2. The Torah was dictated to Moses by God.
    3. God exists.
    All three of these are, I take it, Jewish religious beliefs. But they are plainly different in their relation to natural fact.

    The first proposition does not imply, or at least need not be interpreted as implying, any supernatural element. It concerns a matter of historical, or more broadly natural, fact.

    The second proposition has both a natural and a supernatural element. The natural element is just what is stated in (1), that the Torah was written down by Moses more than three thousand years ago. The supernatural element is the idea that this writing-down was a taking of divine dictation. (I use the phrase “written down” rather than simply “written” so as not to exclude that idea a priori: to say that the Torah was written by Moses might be understood to imply that he was its author rather than merely, as per (2), its original scribe.)

    The third proposition I take to be of purely supernatural significance. Of course, I have not tried to define the terms “natural” and “supernatural,” but rather than take on that difficult task, I will simply take the two terms to be sufficiently well understood for my purposes. My three examples are meant to illustrate the distinction that I propose among three kinds of religious belief: (1) natural beliefs, (2) mixed natural–supernatural beliefs, and (3) purely supernatural beliefs.

    The points that I want to make about these three kinds of belief are the following. First, while people tend to identify religious belief with beliefs of the third type, such as the belief that God exists or beliefs about the divine nature, a very large part of religious belief consists of natural elements. In consequence, many religious beliefs are not essentially religious, in the sense that it is possible for someone to believe them without accepting any religious doctrine that contains them. Someone might, for instance, believe that Moses wrote the Torah in the Sinai desert without believing that God had anything to do with the matter.

    Second, natural and supernatural elements are often tightly connected. For instance, though someone might believe that Moses wrote down the Torah but not believe that he did so under divine dictation, no one can believe that God dictated the Torah to Moses without believing that Moses wrote it down. That is a matter of logic. Other connections are a matter of psychology. Thus, while it is possible to believe, say, that a worldwide flood killed all land animals but those on Noah’s ark without believing that God had any hand in it, it is not likely that anyone—any adult of much education at any rate—would ever do so. That is, many natural religious beliefs are held only because of some accompanying supernatural religious belief.

    Third, to the extent that a body of religious belief contains natural elements, it is subject to critical examination in the light of science. If it were established that the Torah was written down by Moses in the desert more than three thousand years ago, scientific investigation would be powerless to settle the question whether he was taking divine dictation. But the fact is that no such hypothesis is established, or, in view of the evidence, capable of being established. On the contrary, the findings of archaeological investigation as well as textual analysis render the belief that the Torah was written all at once, hundreds of years before the rise of the kingdoms of Israel and Judah, completely untenable. [2]

    Fourth, even if the supernatural as such is beyond the reach of scientific criticism, mixed natural–supernatural beliefs are not. If it can be proved that the Torah was written hundreds of years after the time in which even the latest events recounted in it are purported to take place—which it can, unless one understands “prove” to signify a standard of certainty that is never attained in any empirical science—then the idea that Moses wrote it under divine dictation is also thereby refuted.

    Fifth and finally—though this is not a point for which I shall be supplying the necessary argument in this entry—Judaism, like Christianity, is thoroughly dependent on natural beliefs and mixed natural–supernatural beliefs that are rationally untenable in the light of known evidence and scientific arguments. Even if purely supernatural beliefs, such as the belief in an almighty and supremely wise and benign creator and ruler of the universe, are given a free pass, specific natural and mixed beliefs are required for supporting a body of specific religious observances; and some of the most important of those beliefs are not rationally tenable.


    REFERENCES

    [1] On the question of preserving versus establishing, see Menachem Kellner, Must a Jew Believe Anything? (London: Littman Library of Jewish Civilization, 2006).

    [2] On archaeology, see Israel Finkelstein and Neil Asher Silberman, The Bible Unearthed: Archaeology’s New Vision of Ancient Israel and the Origin of Its Sacred Texts (New York: Free Press, 2001). On textual analysis of the Bible, see Richard Elliott Friedman, Who Wrote the Bible? (New York: HarperCollins, 1997).




    Thursday, April 22, 2010

    More Insights into the Ways of God

    The bright side of natural disasters: they always bring us prophets!



    Eyjafjallajökull; photograph by Reuters from Telegraph.co.uk

    Reading God’s intentions off natural events is a great game: any moron—and not only morons but even persons of intelligence, provided that they indulge in the intellectual habits of morons—can play it. The recent earthquake in China and the more recent volcanic eruption in Iceland, though disasters for millions of people, have brought forth a harvest of prophet-cretins. Here are three of them, one for each of the three Abrahamic religions:

    For Judaism, Rabbi Lazer Brody, writing on his blog Lazer Beams on April 16:
    Some people think they’re smart, like the British folks who run the British Advertising Standards Authority (ASA). The day before yesterday, the senseless stuffed-shirts declared that the Western Wall and the site of our Holy Temple in Jerusalem are not part of Israel, banning Israeli Tourist adverts that included photos of these holy sites.

    The bumbling Brits didn’t realize that when you mess around with Jerusalem and the Wall, you mess around with Hashem. . . .

    So what did Hashem do?

    Hashem let a remote volcano in Iceland erupt, from the Icelandic mountain Eyjaffjalljokull [sic], whose ash cloud grounded all air traffic above Britain yesterday, leaving thousands of passengers stranded.
    Well, at least the events that Rabbi Brody regards as cause and effect had some geographical connection: the eruption of Eyjafjallajökull (if you want to learn how to pronounce it, spend a few minutes studying this page and practicing) did indeed ground all air traffic over Britain. Of course, it grounded traffic over most of continental Europe as well, which seems a rather excessive, not to say ineffective, way of punishing a few supposed “stuffed shirts” in the British Advertising Standards Authority; but I suppose that such grossness of aim and disregard of the innocent is nothing new in the record of God’s supposed exhibitions of wrath.

    For Islam, Iranian cleric Hojatoleslam Kazem Sedighi, as reported on April 19 by the Associated Press:
    “Many women who do not dress modestly . . . lead young men astray, corrupt their chastity and spread adultery in society, which (consequently) increases earthquakes,” Hojatoleslam Kazem Sedighi was quoted as saying by Iranian media. Sedighi is Tehran's acting Friday prayer leader. . . .
    “What can we do to avoid being buried under the rubble?” Sedighi asked during a prayer sermon Friday. “There is no other solution but to take refuge in religion and to adapt our lives to Islam's moral codes.”
    Now I don’t want to make Sedighi appear more foolish than he actually is: as far as I know, he was speaking about earthquakes in Iran, rather than ones in far-off places like China!

    Last and decidedly least, for Christianity, Rush Limbaugh (sorry, but Pat Robertson seems not to have spoken up on this occasion) on his radio show on April 16 (transcribed by me from this recording at Media Matters):
    You know, a couple days after the health care bill had been signed into law, Obama ran around saying, “Hey! You know, I’m looking around here, the earth hasn’t opened up. No Armageddon out there, the birds are still chirping.” Well, I think the earth has opened up. God may have replied. This volcano in Iceland has grounded more—air space has been more affected than even after 9/11 because of this plume, because of this ash cloud, over northern and western Europe. . . . Earth has opened up. I don’t know whether it’s a rebirth or Armageddon. Hopefully, it’s a rebirth—God speaking.
    In fairness to Limbaugh (not that he particularly deserves it), he does not flatly attribute the volcanic eruption to divine wrath over the passage of the health care bill, but says only that it may be God’s reply. Yes, it may be that God is a Republican and is offended by the health care bill, and that he reacts to legislation that offends his sensibilities with retribution, only a few weeks late and a few thousand miles wide of the mark. Or it may be that Rush Limbaugh has no idea of what he is talking about. The latter seems to me by far the more plausible explanation.


