Disarming the Data Doctors: How To Debunk The “Public Health” Basis For “Gun Control”


By Richard W. Stevens**

Headlines scream:

“School Violent Deaths Soar – Guns Kill Most Victims”


“Study Confirms Link Between Guns and School Killings”


“Guns are Prime Suspect in 77% of School Violent Deaths”

Data from one recently published study in the Journal of the American Medical Association (JAMA) could provoke headlines like these.1 Few reporters will check the facts behind such headlines. Readers (and viewers) will likely not question the conclusions of a published scientific study. A news article will seldom do more than summarize the study’s key data and conclusions. Gun prohibitionists can use the “scientific” studies to press for stronger “gun control” laws.

Gun rights advocates might respond by asserting that a study is “biased” or “bogus.” Some might fall back on the old argument that “an occasional murder is the price of the Bill of Rights.” These sorts of arguments lack substance, and never convince anyone but believers.

How can thoughtful gun rights advocates reply to unfavorable “evidence” from “scientific” studies? There are at least three ways: (1) Understand the “public health” approach behind the studies, (2) Know the methods and terms of the studies, and (3) Identify the key flaws and limitations in the studies. We show you how to apply these methods using one influential study as an example.

Making Crime a Public Health Issue

When people start to view a problem as a “disease,” people naturally turn to doctors to “cure” it. Lawyers, criminologists, philosophers, clergymen, and politicians do not cure diseases. The tendency to trust in doctors to diagnose and treat disease offers financially or politically ambitious doctors a clear incentive to call social problems “diseases.” Turn a problem into a disease, and doctors become the healers.

Doctors who oppose private ownership of firearms jumped at the chance to become “gun control” experts. The Founding Fathers considered firearms ownership an inalienable right.2 They thought owning guns was a political, not a medical, matter. Recently, some doctors have made firearms ownership a public health problem.3

What brought the doctors into the subject of guns? Doctors got involved with firearms issues by declaring injuries “caused by” guns to be a “disease.” The guns themselves became a factor in causing the “disease.” If guns became a disease factor, then whoever possesses a gun is a carrier of a disease-causing agent.

One “gun control” advocate explains how they did it:

How on earth do handgun injuries relate to public health? Anything that unnecessarily contributes to human disease, injury, or death is a proper concern of public health … If enough people were injured and killed in hang glider accidents, hang gliders would become a concern of public health professionals (as, for example, motorcycles are today).4

The Centers for Disease Control and Prevention (CDC) has led the movement to treat firearms injuries as a disease and to apply public health methods to suggest “treatment.”5 The CDC’s strategy has three main elements:

(1) Tracking “firearm deaths” and injuries to monitor changing rates and to define high-risk groups;

(2) Using epidemiological studies to define risk factors and to suggest “possible intervention strategies”;

(3) Developing and evaluating specific remedies.6

Put simply, whatever affects the health of “the public” is a “public health” issue. Firearms use causes a “disease” (injuries and deaths) affecting thousands of people. Therefore, the logic goes, the use of firearms is a public health issue.7

Epidemiology: Public Health’s Chief Weapon

Epidemiology is often used to address a public health problem. Epidemiology is the study of how and why disease is distributed in a population. Epidemiology tells us how much disease there is, who gets it, and what specific factors put individuals at risk.8

Epidemiologists gather and use statistical data to explain disease conditions in a population. Epidemiological studies try, among other things:

(1) to calculate the number of diseased persons there are;

(2) to predict the number of diseased persons there will be in the future;

(3) to predict the future costs of treating and caring for the diseased persons;

(4) to isolate the cause(s) of the disease;

(5) to determine how the disease is transmitted.9

The gun prohibitionists used a clever ploy to gain “scientific” support for their position and then multiply its public relations value. First, the doctors devised and published epidemiological studies in medical journals, which showed that the problem of gun injuries is large and serious. Then, follow-up studies and articles quoted the earlier studies’ conclusions as fact, without ever mentioning the limitations or qualifications on those conclusions.10

How Epidemiological Studies Work

Epidemiology has a good reputation for helping to find the causes and modes of transmission of some kinds of occupational and endemic diseases.11 The epidemiological study measures the incidence of disease.12

There are several types of epidemiological studies.13 One common type is the “retrospective” study. A retrospective study looks at the occurrence of disease in the past. By contrast, the “prospective” study starts in the present and charts the occurrence of disease in the future.

The retrospective type of study begins with the researchers precisely defining the “disease.”14 The researchers then select the possible risk factors that might be causing the disease. Next, the researchers choose a population to study. They will try to find a relationship between the risk factors and the disease in this selected population. Finally, they select a study method.15

Researchers often conduct “retrospective case-control” studies. Here is how such a study works. From the selected population, the researchers:

(1) Gather a list of persons who have the disease;

(2) Gather a list of persons who do not have the disease, but whose relevant characteristics match those of the first (diseased) group;

(3) Interview every person on both lists — ask questions to determine whether each person has been exposed to the suspected risk factor or not; if the person has been exposed, try to determine how much exposure the person has had;

(4) Using standard statistical methods, compare the percentage of diseased persons who were exposed to the risk factor, to the percentage of “healthy” (non-diseased) persons who also were exposed to the risk factor.

[Researchers compared the percentage of lung cancer patients who had smoked to the percentage of healthy controls who had smoked. A much larger percentage of the cancer patients than of the healthy controls had been smokers.]

(5) If the statistics show that persons exposed to the risk factor were more likely to develop the disease than persons who were not exposed to that risk factor, then there is a “positive association” between the risk factor and the disease.

[Researchers did find a “positive association” between exposure to the risk factor (smoking) and the disease (lung cancer).]16
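The arithmetic behind step (4) can be sketched in a few lines of Python. The cell counts below are invented purely for illustration; they do not come from any actual smoking study:

```python
# Hypothetical 2x2 table for a retrospective case-control study.
# All counts are invented for illustration only.
def exposure_odds_ratio(exposed_cases, unexposed_cases,
                        exposed_controls, unexposed_controls):
    """Cross-product odds ratio: how much more often the diseased
    group was exposed to the risk factor than the control group."""
    return ((exposed_cases * unexposed_controls) /
            (unexposed_cases * exposed_controls))

# Suppose 80 of 100 lung cancer patients had smoked,
# but only 30 of 100 matched healthy controls had smoked.
ratio = exposure_odds_ratio(80, 20, 30, 70)
print(round(ratio, 1))  # 9.3 -- a strong positive association
```

When exposure is equally common in both groups, the ratio comes out to 1.0, the “no association” value discussed later in this article.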

The 1993 Kellermann Study: Guns as Homicide Risk Factor

A. The Headliners

One well-known researcher, Arthur C. Kellermann, M.D., with a variety of co-authors, has published several articles based on epidemiological studies which promote “gun control.”17 One such article was “Gun Ownership as a Risk Factor for Homicide in the Home,” published in the widely cited New England Journal of Medicine in 1993.18 The headline-grabbing conclusions of that article were:


  • “Keeping a gun in the home was strongly and independently associated with an increased risk of homicide.”
  • “Rather than confer protection, guns kept in the home are associated with an increase in the risk of homicide by a family member or intimate acquaintance.”19

    These sweeping conclusions deserve scrutiny because they influenced policy makers and citizens. Gun rights advocates can defeat the Kellermann article’s conclusions if they understand the study, its assumptions, and its limitations.

    B. The Background Facts

    Here is how Dr. Kellermann and his colleagues set up their study. They defined the target “disease” as any death by homicide in or very near the victim’s home.20

    They selected a number of possible risk factors in the victim’s life, including the victim’s use of alcohol, trouble because of alcohol, use of illicit drugs, trouble because of drugs, history of fights inside or outside the home, history of injuries from fights inside the home, arrest history, guns kept in the home, dogs kept in the home, use of security devices, and living alone.21

    The Kellermann researchers studied a population of 388 cases (net) of homicides which occurred in the victims’ homes. Victims had to be at least 13 years old to be included in the study. The homicide cases were drawn from the records of homicide investigations in the most populous county of each of three states: Tennessee, Washington, and Ohio. The study examined records for part or all of the period from August 23, 1987 to August 23, 1992 (depending upon the county).

    The researchers chose the retrospective case-control study technique. As the victims were dead, the researchers used proxies — a friend or relative — to get information about the victims.22

    C. How the Study Was Conducted

    The Kellermann researchers collected all of the 1,860 official reports of homicides in the three counties. They excluded all homicides that did not occur in the victim’s “home.” Cases in which the homicide victim was under 13 years old were also excluded. They counted multiple murders and murder-suicides as single homicides to avoid double counting.23 They excluded cases where homeowners killed intruders.24

    The Kellermann researchers wanted to see whether having a gun in a person’s home increased or decreased that person’s security. They wanted to compare the residents’ risk of homicide in homes with and without guns. The specific question was whether, all other things being equal, persons who possessed firearms in their homes were more likely to be homicide victims than persons who did not possess firearms in their homes.

    From the official homicide investigation records the researchers got the victims’ age, race, sex, and the police reconstruction of the incident. Three weeks after the death of the homicide victim, the researchers contacted a friend or relative of the victim. That friend or relative would serve as the “proxy” for the victim. The proxy would answer the researchers’ questions on behalf of the victim. Each victim’s proxy described that victim’s use of alcohol or drugs, living arrangements, security precautions, history of previous violence, and gun ownership.25 In epidemiology jargon, each victim was a “case.”

    The researchers also selected a “control group.” A control group is supposed to be made up of persons who matched the “cases.” Thus, the “control” for each “case” was a person who lived in the same neighborhood as the victim, and who was also of the same sex, age range, and race as the victim.26

    The Kellermann researchers interviewed the proxies and an equal number of matched “controls.” Before they conducted the interviews, the researchers sent letters to the proxies “outlining the nature of the [research] project” and offered a $10.00 incentive to sit for the interview.27 The researchers also explained the project to the “controls” and offered them the $10.00 incentive.28

    The Kellermann researchers reported that their interviews with the proxies and controls were “identical in format, order and content … [and] brief, highly structured, and arranged so that more sensitive questions were not broached until later in the interview.”29 The only sample interview question Kellermann published in the article was:

    Many people have quarrels or fights. Has anyone in this household ever been hit or hurt in a fight in the home?30

    D. The Reported Study Results

    The Kellermann researchers wanted to show which “behavioral factors” and “environmental factors” were associated with homicide in one’s home. They used statistical methods common in epidemiology. Here is a sample of their published statistics and their meaning.31

    (1) Table 1 Data

    Table 1 in the Kellermann article provided information about the homicide victims in the study:32


    Characteristic % of victims
    Sex of Victim
                  Female 37
                  Male 63
    Race or Ethnicity of Victim
                   White 33
                   Black 62
                   Native American 1
                   Asian / Pacific Islander 2
                   Other 2
    Age group of victim
                  15-24 yrs 14
                  25-40 ” 41
                   41-60 ” 25
                   61-up ” 20
    Circumstances of the homicide
                   Altercation or quarrel 44
                   Romantic triangle 7
                   Murder-suicide 5
                   Felony-related 22
                   Drug dealing 8
                   Homicide only 13
                   Other 2
    Relationship of offender to victim
                   Spouse 17
                   Intimate acquaintance 14
                   First-degree relative 10
                   Other relative 3
                   Roommate 12
                   Friend or acquaintance 31
                   Police officer 1
                   Stranger 4
                   Unknown (unidentified) 17
                   Other 1
    Method of homicide
                   Handgun 43
                   Rifle 2
                   Shotgun 4
                   Unknown firearm 1
                   Knife or sharp instrument 26
                   Blunt instrument 12
                   Strangulation/suffocation 6
                   Burns/smoke/scalding 2
                   Other 1
    Victim resisted assailant
                   Yes 44
                   No 33
                   Not noted 23
    Evidence of forced entry
                   Yes 14
                   No 84
                   Not noted 2


    (2) Comments on Table 1 Data

    Look at the “Circumstances” data. Notice how the categories are artificially broken down. A very large fraction of the homicides (86%) fell into one of two more general categories: a conflict between friends or relatives (56%), or illegal activities in the victim’s presence (30%). The remaining cases (15%) are not adequately described. By subdividing these major categories, for no apparent reason, the researchers lessened the impact of the percentages in any one category in the table.33

    They used the same technique in the “Relationship to offender” categories in the table. Using multiple categories tends to obscure the underlying common features. In the text of the article, however, the authors grouped the categories back together and admitted that “the great majority of victims (76.7 percent) were killed by a relative or someone known to them.”34

    By contrast, under “Method of homicide,” the table emphasizes firearms’ role by separately listing different types of firearms. When the firearms and non-firearms homicides figures are totaled, however, the results seem different: 50% firearms-related, and about 50% non-firearms related.

    The vast majority of homicides (84%) reported in the table did not involve forced entry. In these cases, the killer likely had permission to enter the area or had a key to the house.

    The study offered no data about whether the victim tried to get his or her gun, or other weapon, before being killed. The study also omits any mention of whether the killer used the victim’s own gun or other weapon. These are critical points. The Kellermann researchers could not properly conclude that guns in the home provide no protection against homicide when they did not know whether the victims tried to reach their guns, or even whether the victims knew where their guns were. Nor could they properly conclude that having a gun in the home is a “risk factor” when there were no data about whether the gun was even involved in the killing.

    Most journalists are not trained to look carefully at epidemiological studies to raise unasked questions. They may tend to overemphasize individual studies.35 To understand the full meaning of a study, and its limitations, requires reading the study carefully and then thinking about whether its data support its conclusions. Headlines based on ill-founded or misunderstood conclusions can seriously mislead policy makers and the public.36

    (3) Table 3 Data

    Table 3 in the Kellermann article shows the degree of association between various “risk factors” and the “disease” (death by homicide in the victim’s residence).37 Portions of Table 3 are set forth below.38


    Behavioral Factors Odds Ratio 95% Confidence Interval
    (a) Victim or control drank alcohol 2.6 1.9 – 3.5
    (b) Drinking caused problems in the household 7.0 4.2 – 11.8
    (c) Any household member had trouble at work because of drinking 10.7 4.1 – 27.5
    (d) Victim or control had trouble at work because of drinking 20.0 4.9 – 82.4
    (e) Any household member used illicit drugs 9.0 5.4 – 15.0
    (f) Victim or control used illicit drugs 6.8 3.8 – 12.0
    (g) Any physical fights in the home during drinking 8.9 5.2 – 15.3
    (h) Any family member required medical attention because of a fight in the home 10.2 5.2 – 20.0
    (i) Any household member arrested 4.2 3.0 – 6.0
    (j) Victim or control arrested 3.5 2.4 – 5.2
    Environmental Factors Odds Ratio 95% Confidence Interval
    (k) Home rented 5.9 3.8 – 9.2
    (l) Victim or control lived alone 3.4 2.2 – 5.1
    (m) Controlled security access to residence 2.3 1.2 – 4.4
    (n) Gun(s) in the home 1.6 1.2 – 2.2
                   Handgun 1.9 1.4 – 2.7
                   Shotgun 0.7 0.5 – 1.1
                   Rifle 0.8 0.5 – 1.3
    (o) Any gun kept unlocked 2.1 1.4 – 3.0
    (p) Any gun kept loaded 2.7 1.8 – 4.0
    (q) Guns kept primarily for self-defense 1.7 1.2 – 2.4


    (4) Understanding the data in Table 3

    Table 3 is a typical “univariate analysis of risk factors.” Here is how it works. Consider entry (a), “Victim or control drank alcohol.” The “Odds Ratio” tells you how much more likely it is that the victim drank alcohol than did the “control” person.39 In this case the victims were 2.6 times more likely to be “exposed” to the risk factor of drinking alcohol, than were the controls.

    Epidemiological statistics often are reported in this way: “Exposure to second-hand smoke is associated with increased future health problems.” The term “is associated with” simply reports a statistical result. An odds ratio greater than 1.0 shows a “positive association.” The more the value exceeds 1.0, the stronger the association. An odds ratio equal to 1.0 indicates “no association.” An odds ratio less than 1.0 shows a “negative association.” A negative association suggests that exposure to the factor actually decreases the likelihood of contracting the disease.40

    The size of the odds ratio shows the strength of the association. The larger the odds ratio, the more likely that exposure to the risk factor might have caused the disease. Epidemiologists classify the strength of an association according to the size of the odds ratio. An odds ratio from 1.1 to 3.0 shows a “weak or nonexistent” association. An odds ratio between 3.0 and 8.0 shows a “moderate” association; between 8.0 and 16.0 shows a “strong” association; and above 16.0 shows an “extremely strong” association.41


    Odds Ratio (Relative Risk)   Association   Strength
    Less than 1.0                Negative
    1.0                          None
    1.1 – 3.0                    Positive      None / Weak
    3.0 – 8.0                    Positive      Moderate
    8.0 – 16.0                   Positive      Strong
    Over 16.0                    Positive      Extremely Strong
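The classification scale above can be expressed as a small Python helper. The thresholds follow the article’s scale; how to treat a value falling exactly on a boundary (3.0, 8.0, or 16.0) is our own assumption:

```python
def classify_odds_ratio(odds_ratio):
    """Map an odds ratio to a (direction, strength) pair per the
    scale above. Boundary handling at exactly 3.0, 8.0, and 16.0
    is an assumption not settled by the scale itself."""
    if odds_ratio < 1.0:
        return ("negative", None)
    if odds_ratio == 1.0:
        return ("none", None)
    if odds_ratio <= 3.0:
        return ("positive", "none/weak")
    if odds_ratio <= 8.0:
        return ("positive", "moderate")
    if odds_ratio <= 16.0:
        return ("positive", "strong")
    return ("positive", "extremely strong")

print(classify_odds_ratio(1.6))   # ('positive', 'none/weak')
print(classify_odds_ratio(20.0))  # ('positive', 'extremely strong')
```

Applied to Table 3, the 1.6 odds ratio for “gun(s) in the home” lands in the weakest band, while 20.0 for “trouble at work because of drinking” lands in the strongest.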


    On other public health issues, a survey of scientists showed that most “would not take seriously a single study reporting a new potential cause of cancer unless it reported [an increased risk factor or odds ratio of at least] 3.”42 Even then, these scientists would remain skeptical unless the study were “very large and extremely well done, and biological data support the hypothesized link.”43

    A positive association, even an “extremely strong” association, does not by itself prove that the factor caused the disease.44 Here is an obvious example: there is a perfect positive association between having lived in New York and eventually dying. Everyone who has lived in New York will eventually die. That association by itself does not prove that living in New York causes death.

    In Table 3 (a), drinking alcohol is positively associated with being a victim of homicide in one’s own home. The odds ratio of 2.6 shows a weak association.

    Consider entry (l), “Victim or control lived alone.” The odds ratio for that entry shows that the victims were 3.4 times as likely to live alone as the controls were, that is, 3.4 times as likely to be “exposed” to living alone. Living alone was associated with being a victim of homicide in the home. With an odds ratio of 3.4, this association is “moderate.”

    Table 3 also reports the “95% confidence interval” for each odds ratio. The confidence interval describes the uncertainty surrounding the odds ratio.

    Consider again Table 3 (a). The odds ratio is 2.6. The 95% confidence interval is 1.9 – 3.5. In other words, the statistics indicate, with 95% confidence, that the actual odds ratio for having drunk alcohol lies somewhere between 1.9 and 3.5.45

    Here are some important things to know about the confidence interval. First, if the 95% confidence interval for a given odds ratio ranges from below 1.0 to above 1.0, then the data are consistent both with the exposure increasing the risk of disease and with the exposure decreasing that risk. This ambiguity makes the odds ratio statistically insignificant; epidemiologists can draw no conclusions from it.46

    Second, when the confidence interval is a narrow range around the odds ratio, then it tends to reinforce the accuracy of the odds ratio.47 For example, the association (measured by odds ratio) between drinking alcohol and being a victim of homicide was 2.6. The 95% confidence interval for this figure was fairly narrow (1.9 – 3.5). It is fair to conclude that the odds ratio of 2.6 is an accurate estimate of the risk in the study population.

    Third, when the confidence interval is a wide range, it suggests that there is an increased likelihood of risk from exposure, but also that the risk cannot be accurately measured.48 Table 3 (d), “victim or control had trouble at work because of drinking,” indicates an odds ratio of 20.0, with a 95% confidence interval of 4.9 to 82.4. This result suggests a strong association between the victim’s trouble with drinking at work and his eventual death by homicide, but that the actual risk cannot be accurately measured.
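For readers who want to see where such intervals come from, here is a sketch of the standard log-based (Woolf) 95% confidence interval for an odds ratio computed from a 2x2 table. The cell counts below are invented for illustration; the Kellermann article does not publish its underlying counts in this form:

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    Returns (odds ratio, lower bound, upper bound) at 95% confidence,
    using Woolf's log-based approximation."""
    ratio = (a * d) / (b * c)
    # Standard error of ln(odds ratio): sqrt of summed reciprocal counts.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(ratio) - z * se)
    upper = math.exp(math.log(ratio) + z * se)
    return ratio, lower, upper

ratio, lower, upper = odds_ratio_with_ci(60, 40, 30, 70)
print(round(ratio, 1), round(lower, 1), round(upper, 1))  # 3.5 1.9 6.3
```

Note how small cell counts inflate the standard error, which is why rare exposures (like entry (d) in Table 3) produce very wide intervals.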

    (5) Comments on Table 3 Data

    Having a grasp of odds ratios and confidence intervals, consider Table 3 again. Which factors have the largest odds ratios? The top seven are (d), (c), (h), (e), (g), (b), and (f). These factors all implicate using alcohol, using drugs, and a history of physical fighting in the home. The odds ratios for these factors show at least a moderate, and in most cases a strong, association between them and death by homicide. The factor most strongly associated with homicide in the home was (d) (trouble at work because of drinking).

    The factors in Table 3 with the lowest odds ratios, from lowest to highest, are (n) at 1.6, (q) at 1.7, (o) at 2.1, (m) at 2.3, (a) at 2.6, and (p) at 2.7. Four of these six factors, namely (n), (q), (o), and (p), involve having a gun in the residence. All of these odds ratios are below 3.0, suggesting a weak or nonexistent association between them and the incidence of homicide in the home. The confidence intervals for these factors are relatively narrow, suggesting that these odds ratios more accurately reflect reality.
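The ranking exercise above is easy to check mechanically. The odds ratios below are copied from the article’s Table 3 as reproduced earlier (labels abbreviated); sorting them confirms which factors sit at the bottom and top of the list:

```python
# Odds ratios copied from Table 3 of the Kellermann article,
# as reproduced above; labels are abbreviated.
table3 = {
    "(a) drank alcohol": 2.6,
    "(b) drinking caused problems at home": 7.0,
    "(c) household trouble at work from drinking": 10.7,
    "(d) victim/control trouble at work from drinking": 20.0,
    "(e) household illicit drug use": 9.0,
    "(f) victim/control illicit drug use": 6.8,
    "(g) fights in home during drinking": 8.9,
    "(h) fight in home required medical attention": 10.2,
    "(i) household member arrested": 4.2,
    "(j) victim/control arrested": 3.5,
    "(k) home rented": 5.9,
    "(l) lived alone": 3.4,
    "(m) controlled security access": 2.3,
    "(n) gun(s) in the home": 1.6,
    "(o) any gun kept unlocked": 2.1,
    "(p) any gun kept loaded": 2.7,
    "(q) guns kept for self-defense": 1.7,
}

ranked = sorted(table3, key=table3.get)  # ascending by odds ratio
print([label[:3] for label in ranked[:4]])   # weakest four factors
print([label[:3] for label in ranked[-3:]])  # strongest three factors
```

The weakest factors that the sort surfaces are dominated by the gun-related entries, while the strongest all involve drinking, drugs, or domestic violence.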

    Because of their relative weakness, the data in Kellermann’s Table 3 do more to undermine than to support the claim that guns in the home are a risk factor for homicide. This conclusion comes from a careful look at the data. The data simply do not support the researchers’ conclusions in their article.

    (6) Table 4 Data

    Kellermann’s Table 4 shows the results of additional mathematical massage of the data.49 Table 4 provides this information:


    Variable   Odds Ratio   95% Confidence Interval
    Home rented 4.4 2.3 – 8.2
    Lived alone 3.7 2.1 – 6.6
    Any household member hit or hurt in a fight in the home 4.4 2.2 – 8.8
    Any household member used illicit drugs 5.7 2.6 – 12.6
    Gun(s) kept in the home 2.7 1.6 – 4.4


    (7) Comments on Table 4 Data

    Table 4 shows how critical it is to read the actual study, rather than relying on headlines about its conclusion. Of the five factors analyzed, the factor with the lowest association with homicide was “guns kept in the home.” The odds ratio of 2.7 for this factor suggests that the association is “weak” at best. The 95% confidence interval reinforces the weak association.

    E. The Kellermann Researchers’ Admitted Limitations on the Study Results

    News stories reporting epidemiological study results often omit the authors’ own statements about the limitations in the study.50 The Kellermann researchers admitted the potential for several sources of bias in their study,51 including:

    (1) the emotional impact of a homicide in the home can “powerfully” affect the survey respondents’ accuracy and completeness of their recollection;

    (2) respondents might misreport sensitive information;

    (3) respondents might have under-reported instances of domestic violence;

    (4) respondent controls might have under-reported gun ownership.

    The Kellermann researchers also commented on “four limitations”52:

    (1) the study was restricted to homicides occurring in the victim’s home. “The dynamics of homicides occurring in other locations … may be quite different.”53

    (2) the studied urban counties did not have any substantial Hispanic population. Accordingly, their “results may not be generalizable to more rural communities or to Hispanic households.”54

    (3) possible “reverse causation”: some of the association between gun ownership and homicide may have come from the victims having previously “acquired a gun in response to a specific threat.”55

    (4) possible confounding:56 “we cannot exclude the possibility that the association we observed is due to a third, unidentified factor.”57

    The many admitted potential biases and limitations in the study do not appear in headlines. They are rarely reported in news magazines, newspapers, or on broadcast news.58 Also, follow-up journal articles often recite the conclusions without checking whether the underlying data support the conclusions claimed by the authors. Moreover, other kinds of limitations and biases possible in the Kellermann study might not be obvious to the lay reporter or reader. Few, if any, of these sobering considerations seem to matter much to the doctors and others who seize on these study results to support their “gun control” agenda.

    Potential Flaws, Limitations, and Biases in Epidemiological Studies and Their Conclusions

    Epidemiological studies and the validity of their conclusions are subject to a number of limitations. Understanding these limitations is the key to being able to rapidly separate epidemiological hogwash from solid research. The following discussion of these limitations draws some examples from the Kellermann study.

    A. When can a Risk Factor be considered a Cause of the Disease?

    The public health approach to “gun control” has relied heavily on epidemiological studies to support its conclusions that “gun control” is an effective and desirable cure for “gun violence.” This approach to “gun control” argues that possession of firearms is a cause of deaths and injuries by firearms. Therefore, by eliminating possession of firearms, the number of deaths and injuries will decrease. The key word in this argument is “cause.”

    The Kellermann researchers, for example, tried to show a link between the presence of a firearm in a person’s home and the death of that person by homicide, particularly homicide involving a firearm. In other words, the Kellermann researchers wanted to add evidence that the possession of firearms is a cause of death (by homicide). By showing a statistical association between firearm possession and homicide, the researchers sought to establish a causal link.

    The Kellermann researchers did not use the methods that epidemiologists normally use to prove a causal link. Statistical association does not alone prove that a factor caused the disease. Epidemiologists use several criteria to prove a causal link.59 These criteria are set forth below, together with a brief analysis of the Kellermann study.

    (1) Probability: A statistically significant association between the risk factor and the disease tends to support the case that the risk factor caused the disease.

    In the Kellermann study, there is a statistical association between the presence of a firearm in the victim’s home and the victim’s death by homicide in that home.

    (2) Strength of Association: A strong association supports the case.

    In the Kellermann study, the associations for the gun factor were weak, not strong.

    (3) Dose-Response Relationship: If there is a steady increase in disease with an increase in exposure, then the case for causation is supported.

    In the Kellermann study, there was no analysis of a “dose” of exposure. The study considered a homicide victim to have been “exposed” if the victim’s home had contained a firearm in it before the homicide. The study does not analyze whether the length of time the firearm was in the home was a relevant factor. The study does not even report whether the victim actually knew there was a gun in the home or where it might be located there.

    (4) Time-response relationship: If the incidence of the disease rises at some time after the exposure to the factor, and then later decreases, then that fact supports the causation case.

    The Kellermann study did not consider any time-response effects.

    (5) Predictive Performance: If facts about the risk factor help predict the occurrence of the disease, then this supports the causation case. If not, then it weakens the case.

    The Kellermann study did not analyze the predictive performance of its model or data.

    (6) Specificity: If the disease is related to only one risk factor or a related set of risk factors, then this may support the causal link between the risk factor and the disease. (The fact that a risk factor is linked to only one disease contributes little to the conclusion.)

    In the Kellermann study, homicide in the victim’s home was linked to a number of risk factors — but there was no physical link between the victim’s gun ownership and his or her death. The Kellermann study did not even determine whether the victim’s gun was used in the victim’s homicide. The homicides were not related only to gun ownership or possession — they were much more directly and physically related to a number of other factors, such as the killer’s possession of a weapon, and other behavioral factors such as drinking, drug use, and previous violence. The Kellermann study did not establish that homicide in the home was related only to a related set of causes.

    (7) Consistency: If the same association between the risk factor and the disease appears repeatedly in different studies, that would support the argument for causation. Other (different) study results that are inconsistent, and cannot be reconciled, will weaken the argument.

    The Kellermann article said that the subject of the study, the supposed link between firearms ownership and homicides in the home, was “poorly understood.” This admission suggests that the Kellermann study was not similar to previous studies. Thus there is little evidence of “consistency” in study results on this subject. The lack of consistency limits the conclusions which can properly flow from the Kellermann study.

    (8) Coherence: If the proposed causal link fits in with current theory and knowledge, then that supports the case. If the causal link is incompatible with known facts, then that weakens the case.

    The Kellermann study fits into the current public health approach to “gun control.” The authors, however, admitted several facts that make their conclusions “valid” only for their study. Moreover, the public health approach to “gun control” is suspect on other grounds.

    B. Problems with the Definition of Disease

    As noted above, an epidemiology study must start with a clear definition of the disease to be analyzed.60 In the Kellermann study, the researchers defined the “disease” as death by homicide in the victim’s home. The causal link they sought to establish was:


    Risk Factor
    Presence of gun in victim’s home —–> Homicide of Victim


    Kellermann’s study results, however, suggested:


    Risk Factors
    History of violence  
    History of drinking ——————–> Homicide of Victim
    History of drug use  
    Presence of guns in home  


    Has the “disease” been defined wrongly? The Kellermann study ignored the possibility that a history of violence, alcohol abuse, and drug abuse are not “risk factors,” but symptoms of the disease itself.61 The disease is perhaps anti-social behavior, socio-pathology, or a tendency to adopt self-destructive practices.62 As part of this disease, the person involves himself or herself in situations that lead to violence.63 A certain percentage of such situations result in the victim’s death.

    The homicide victims in the Kellermann study were mainly killed in connection with disputes with people whom they knew. This fact supports the idea that the disease manifests itself as the victim’s behavior. The victim’s behavior affects people with whom he or she interacts. The victim’s behavior starts, invites, or continues violent disputes or heated passions. These passions and disputes can lead to a violent event that ends in the victim’s death. A new model emerges:


    Disease
    Anti-social or self-destructive
    behavior of victim ——————–> Violent disputes and passions ——————–> Homicide of Victim
                                 (firearms, knives, or blunt instruments as tools)

    In this model, firearms play a role as a tool for violence, or defense against violence. But sharp and blunt instruments also play that role, according to the Kellermann study. The presence of firearms does not cause the disease of anti-social, or self-destructive behavior. Thus, the mere presence of firearms did not cause the deaths of these victims.

    C. Problems with the Definition of Exposure

    When dealing with a biological disease — the usual case — an epidemiological study tries to determine whether certain risk factors contribute to causing disease in exposed persons. For example, researchers might study the effects of exposure to radiation or polluted water. The researchers would try to estimate how much each person in the study had been exposed to the factor.

    In many real-world cases, the answer to the exposure question is not just “yes” or “no.”64 The Kellermann study assumes that the presence of a firearm in the victim’s home constitutes “exposure.” In other words, being in the physical vicinity of a firearm is the equivalent of being near a person with smallpox or influenza.

    Does the mere presence of firearms infect a person, resulting in the person being killed by somebody else? Does the mere presence of firearms make a person violent or disruptive? The Kellermann study does not attempt to answer these questions. It would be relevant to know how long the victim had possessed the firearm, whether the victim had ever used the firearm for any purpose, and whether the victim had ever hurt or threatened anyone with the firearm. Apparently the Kellermann researchers did not collect these data.

    It is vitally important, when designing an epidemiological study, to define what constitutes “exposure” to the risk factor. The Kellermann study overlooks all of the possible types and durations of the victim’s “exposure” to firearms. According to the Kellermann exposure definition and conclusions, if a firearm were ever in a house, the resident’s chances of being murdered increase.

    D. Sources and Types of Bias in Retrospective Studies

    Epidemiological studies are subject to several types of bias. A “bias” is a fact that affects the accuracy of the data or their interpretation. Here are some of the biases that can affect the Kellermann study or other retrospective studies.

    (1) Selection bias: Occurs when the population being studied is especially more, or less, likely to have the disease than the general population.65

    The Kellermann researchers looked at the most populous urban counties in three states. They gathered their cases from records of homicides, and gathered their controls from the neighborhood near where the homicide victims had lived. The sampling was not random. The persons most likely to be studied were already victims or still lived in neighborhoods where such murders occurred. In a true random sampling, anybody in the population would be equally likely to be studied.

    (2) Information bias: Occurs when there are shortcomings in the way information is obtained from the respondents. Examples of this bias arise when the respondents: do not actually know the answers but guess anyway; have a motivation to give socially acceptable answers; or, give answers they think the investigator wants.66

    The Kellermann researchers “poisoned the well” in the first place by telling the respondents the nature and purpose of the study before asking the questions. Then, they obtained information from persons who likely had a motivation to misreport the truth: the grieving friends and relatives (proxies) of the victim.

    Although the researchers found a “control” person for each victim, they apparently accepted answers about the “control” person from any adult in the same household as the actual “control” person.67 The researchers also offered the respondents money for the interview — they paid for the information, rather than obtain it by objective means.

    Many of the proxy respondents (40%) requested a telephone interview rather than a face-to-face interview. A smaller number of controls (13%) requested the telephone interview.68 Using a validation technique, the Kellermann researchers themselves found that the control respondents underreported domestic violence in their own homes.69 These facts suggest a certain level of discomfort with the interview questions. By avoiding a face-to-face interview, these respondents seemingly demonstrated a concern for the effect or credibility of their responses in the eyes of the interviewer. The respondents apparently also underreported their own domestic violence, doubtless for fear of public scorn or legal action.

    The Kellermann article did not report any attempts to assess the credibility of individual responses. Even had the researchers wanted to evaluate the credibility of responses, using telephonic interviews made that goal much harder to achieve.

    (3) Recall bias: Occurs when the respondent’s memory is incomplete or uncertain. This bias can occur because of the human tendency not to remember events that seemed unimportant when they occurred. This bias can also arise when researchers repeatedly ask the same or similar questions to get an answer from the exposed respondents, while accepting the first answer they get from members of the control group.70

    Although the Kellermann article reported that the interview proceeded on a strictly structured format, the questioners’ actual conduct is unknown. Also unknown is whether the questioners tried to explain questions to the respondents.

    (4) Volunteer bias: Occurs because persons who are willing to volunteer for a study might differ in important ways from the general population.71

    The Kellermann study depended upon the willingness of persons to answer interview questions. There is no way to know how the respondents’ personal experiences or opinions might have influenced their decision to participate in the study.

    (5) Rumination bias: Occurs because persons who are exposed, ill, or injured may ruminate about (reflect upon) the causes of their condition and thus report different facts about their exposure. By contrast, members of the control group may not ruminate about the matter.72

    This bias likely affects the data from the proxies for the victim. The Kellermann researchers intentionally delayed questioning the proxies “to allow for an initial period of grief.”73 Of course, this delay time allowed the proxies to ruminate about the circumstances, causes and reasons for the victim’s death, and to discuss these factors with other interested persons. A grieving person’s desire to fix blame on some person or thing could influence their responses in ways not even discussed in the Kellermann article.

    (6) Wish bias: Occurs when there is a tendency of the investigator or the respondent to reach a desired result. This bias can arise because persons often prefer to give a more emotionally acceptable explanation for contracting a disease than exposure by personal choice. Accordingly, respondents may want to blame inanimate objects like toxic chemicals, or external factors like workplace or living environment, rather than attribute their problems to their own choice of lifestyle.74

    The wish bias especially taints case-control studies that use subjective questionnaires. This problem is even more severe when proxies or other family members answer the survey questions instead of the actual patients or controls.75 Also, when an interviewer knows what the study is trying to show, the interviewer may consciously or subconsciously influence respondents to give information consistent with the desired outcome.76

    The Kellermann study methods provided many opportunities for wish bias to arise. The interviewers and the respondents were not “blind,” that is, they both knew the purpose of the study. There is no evidence that interviewers were screened for personal biases concerning the study’s objectives. The Kellermann researchers’ study was a “case-control” study using proxies for the victims and sometimes for the controls. Their procedure thus carried an increased likelihood of wish bias.

    Finally, the respondents might well have wanted to blame the drugs, alcohol, guns, or some other person or factor for the victim’s death, rather than the victim or his lifestyle. Consciously or subconsciously, this desire could have influenced their recall of the facts.77

    (7) Non-random sampling bias: Occurs because the sample of the population may not be perfectly random; questions on a questionnaire are only a sample of the potential questions; and respondents provide only a sample of their possible “correct” answers to the questions. For a sample of anything to be “random,” every member of the potential set must have an equal chance of being chosen.78

    The Kellermann researchers never claimed that the study population was “random.” The article provided little information about the “randomness” of the interview questions or their answers. The researchers admitted that they “had no way to verify each respondent’s statements independently.”79
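
    The definition of randomness in sampling can be illustrated in a few lines of code. Below is a minimal sketch; the population size and the “neighborhood” subset are invented purely for illustration:

```python
import random

# A notional population of 10,000 people (invented for illustration)
population = list(range(10_000))

# Simple random sample: every member has an equal chance of selection
random_sample = random.sample(population, k=100)

# A convenience sample in the style criticized above: only members of
# particular neighborhoods (here, the first 1,000) can ever be chosen
convenience_sample = random.sample(population[:1_000], k=100)

# Only the first sampling scheme is random with respect to the whole
# population; under the second, most members have zero chance of selection
```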

    E. General Limitations of Statistical Reasoning

    There are three other factors that limit the power of conclusions based on statistics:

    (1) Small sample size: In general, the larger the study sample compared to the whole population, the more reliable the statistical conclusion.80 Epidemiologists are wary of making general statements about public health risks based on data from small samples, which may not accurately represent the whole population.81

    (2) Past events may not predict future events: Study results might show a statistical relationship between two events, but that does not guarantee that the same relationship will hold in the future.82

    (3) Hasty generalization: The results of a study of a group do not necessarily apply to any individual. A description of a group does not necessarily describe any individual in that group.83

    The Kellermann researchers announced: “Despite the widely held belief that guns are effective for protection, our results suggest that they actually pose a substantial threat to members of the household.”84 Recall that the basis of the Kellermann researchers’ conclusion was a study of only three urban counties. They studied only 388 of the 1,860 homicides in those counties over a two-to-four-year period. This sample represents a tiny fraction of the 24,000 homicides reported each year in the United States.85 Kellermann’s conclusion seems to be a perfect example of a hasty generalization from a very small non-random study.
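
    The scale of this generalization can be checked with simple arithmetic, using the figures as the article reports them:

```python
studied = 388          # homicides actually studied
county_total = 1860    # homicides in the three counties over the study period
us_per_year = 24000    # approximate annual U.S. homicides, per the article

county_coverage = studied / county_total    # about 21% of the county homicides
national_fraction = studied / us_per_year   # under 2% of one year's national total

print(f"{county_coverage:.0%} of county homicides; "
      f"{national_fraction:.1%} of one year's U.S. homicides")
```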

    The 1986 Kellermann & Reay Study: Firearm-Related Deaths in the Home

    A. The Headliners

    Dr. Arthur L. Kellermann and Dr. Donald T. Reay published an epidemiological study of deaths by firearms in 1986.86 That article, published in the New England Journal of Medicine, boldly pronounced two shocking main conclusions:

    • “We noted 43 suicides, criminal homicides, or accidental gunshot deaths involving a gun kept in the home for every case of homicide for self-protection.”
    • “In light of these findings, it may reasonably be asked whether keeping firearms in the home increases a family’s protection or places it in greater danger.”87

    Did Kellermann and Reay conduct an epidemiologically sound study? Do their statistics support their conclusions? Are there factors that qualify or limit the application of their statistics and conclusions? To answer these questions requires a careful reading of the article.

    B. The Background Facts

    Kellermann and Reay did not expressly define the target “disease” that they were going to study. From the article’s context, however, it appears that they defined the “disease” as any “firearm-related death.”88

    The researchers suggested that “keeping firearms in the home” was the chief “risk factor” for study, although they made some allowance for other factors.89 Their approach was akin to the way epidemiologists might try to discover the risk factors associated with a certain type of cancer or other disease, when they have guessed at least one.90 Indeed, the study’s stated objective was to investigate “all of the gunshot deaths.”91 The researchers were “especially interested in characterizing the gunshot deaths that occurred in the residence where the firearm was kept.”92

    The researchers studied the population which included all of the 743 firearm-related deaths in King County, Washington, that occurred between January 1, 1978 and December 31, 1983. The medical examiner’s case files provided most of the information, although some came from police files and interviews with the actual investigating officers. The total population of King County, which includes Seattle and Bellevue, was approximately 1,270,000 (1980 census).93

    This study was not a case-control study. Rather, it simply looked at the relationships between the reasons for the killings and the possession of firearms. The authors termed it a “mortality study.”94

    C. How the Study Was Conducted

    The Kellermann & Reay researchers reviewed all of the records and data to learn the general demographic information for each victim. They also ascertained:

    (1) the manner of death,

    (2) the location of the incident,

    (3) the circumstances,

    (4) the relationship of the suspect to the victim,

    (5) the type of firearm involved, and

    (6) the blood alcohol level of the victim at the time of the autopsy.

    From the medical records, police records, and interviews, the researchers classified the deaths by the manner of death, the reasons for the death, and the relationships of the home resident to the victim.95 The Kellermann and Reay article displays some of the results in three tables, and many other statistics in the text.

    D. The Reported Study Results

    The Kellermann & Reay researchers reported many results, but displayed only a fraction of them in the tables. Below are their data tables.

    (1) Table 1 Data

    Table 1 showed the “Violent Deaths in King County” over the six-year study period. The table excluded traffic deaths, and counted unintentional homicides in the “homicide” category.96 Here are the data (in full):


    Manner           All Violent      Deaths by Firearm
    of Death            Deaths       Number    % of Total
    Suicide              1,049          469        45
    Homicide               521          256        49
    Accidental           1,581           11         0.7
    Undetermined           122            7         6
    Total                3,273          743        23


    (2) Comments on Table 1 Data

    The authors did not carefully explain to the reader that the category of “homicides” did not mean “murders.” That category included unintentional homicides, too.

    Data from Table 1 would support summary statements such as:

    “Less than one-fourth of all violent deaths were caused by firearms.”

    “Less than half of all suicides and homicides were caused by firearms.”

    “Over 99% of all accidental violent deaths were not the result of firearms.”
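
    These summary statements follow directly from the Table 1 counts. The arithmetic can be verified in a few lines (figures as reported in the article):

```python
# Table 1 counts as reported: (all violent deaths, deaths by firearm)
table1 = {
    "Suicide":      (1049, 469),
    "Homicide":     (521, 256),
    "Accidental":   (1581, 11),
    "Undetermined": (122, 7),
}

all_deaths = sum(total for total, _ in table1.values())      # 3,273
firearm_deaths = sum(fire for _, fire in table1.values())    # 743

# "Less than one-fourth of all violent deaths were caused by firearms."
assert firearm_deaths / all_deaths < 0.25                    # about 22.7%

# "Less than half of all suicides and homicides were caused by firearms."
assert table1["Suicide"][1] / table1["Suicide"][0] < 0.5     # about 45%
assert table1["Homicide"][1] / table1["Homicide"][0] < 0.5   # about 49%

# "Over 99% of all accidental violent deaths were not the result of firearms."
assert 1 - table1["Accidental"][1] / table1["Accidental"][0] > 0.99
```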

    Rather than comment on this larger, meaningful picture, however, Kellermann and Reay subdivided the data into smaller categories to sharpen their focus on the firearms. The authors devoted nearly all of the paragraphs of the “Results” and “Discussion” sections of the article to statistics involving firearms and some kind of deaths. One typical paragraph contained this litany of firearms and death:

    Of the 743 deaths from firearms noted during this six-year period, 473 (63.7 percent) occurred inside a house or dwelling, and 398 (53.6 percent) occurred in the home where the firearm involved was kept. Of these 398 firearm deaths, 333 (83.7 percent) were suicides, 50 (12.6 percent) were homicides, and 12 (3 percent) were accidental gunshot deaths….97

    By further tightening the focus on “deaths involving a firearm kept in the home,” the authors were able to generate some apparently frightening statistics in Tables 2 and 3.

    (3) Table 2 Data

    Table 2 in the article presents the statistics from the “Relationship of Victim to Resident in Nonsuicidal Deaths Involving a Firearm Kept in the Home.”98 Here are the complete data from Table 2:




    Relationship of Victim      No. of     % of    Relative
    to Resident                 Deaths     Total     Risk
    Stranger                         2        3       1.0
    Friend or Acquaintance          24       27      12.0
    Nonresident relative             3        5       1.5
    Resident                        36       55      18.0
                   Relative         11       17
                   Spouse            9       14
                   Roommate          6        9
                   Self              7       11
                   Other             3        4


    With the Table 2 data the authors wanted to show that the “relative risk” of being killed by a firearm kept in the home depended upon the victim’s relationship to the resident of the home. In this table, the starting point is the “relative risk” (the chances) of a stranger being shot and killed by the resident. Thus, the relative risk figure for “stranger” is 1.0. The relative risk of a friend or acquaintance being shot and killed by the resident was 12.0 — which means that, in the study group, 12 times as many friends and acquaintances were shot and killed as were strangers.99
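
    The “relative risk” figures in Table 2 are nothing more than ratios of death counts against the “stranger” baseline. A minimal sketch of that calculation, using the counts as reported:

```python
# Nonsuicidal deaths involving a firearm kept in the home,
# by victim's relationship to the resident (Table 2 counts as reported)
deaths = {
    "Stranger": 2,
    "Friend or acquaintance": 24,
    "Nonresident relative": 3,
    "Resident": 36,
}

# The stranger category is the reference point (relative risk = 1.0)
baseline = deaths["Stranger"]

relative_risk = {who: count / baseline for who, count in deaths.items()}

for who, rr in relative_risk.items():
    print(f"{who}: {rr:.1f}")
# Reproduces the reported figures: 1.0, 12.0, 1.5, 18.0
```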

    (4) Comments on Table 2 Data

    The data in Table 2 would seem to justify the statement that “friends and relatives are 12 to 18 times more likely to be shot and killed by armed residents than are strangers.” For the study time, place, and mortality data, that statement would appear to be true for the persons in the study itself.

    The “relative risk” figures, however, were calculations based on fewer than 100 cases out of a total of 743 firearm-related deaths in one six-year period in one county. The sample is surely too small to support conclusions that would apply outside of that county.

    Then, too, the authors used as a base line the number of “strangers” shot by residents. This number did not even include all of the “self-protection homicides” the authors counted in Table 3 (see below).

    The article claimed to provide an “epidemiological” analysis of the data, and reported the “relative risk” figures. The authors, however, did not provide a “confidence interval” to indicate the range of statistical accuracy.100

    Table 2 shows how the choice of categories can shape the newsworthiness of the statistics. The authors attempt to convince the casual reader that the resident’s “relationship” to the victim is relevant to firearm use, rather than inquiring whether the resident was in legitimate fear for his or her life, or acted to protect the life of another person.

    Within the text of the article and in Table 3, however, other facts emerge.

    (5) Table 3 Data

    The authors classify the “398 Gunshot Deaths Involving a Firearm Kept in the Home.”101 Here are their Table 3 data:




    Type of Death                    Number    % of    Relative
                                               Total     Risk
    Self-Protection homicide              9     2.3       1.0
         Justifiable homicide             2     0.5
         Self-defense homicide            7     1.8
    Unintentional Deaths                 12     3.0       1.3
    Criminal homicide                    41    10.3       4.6
    Suicide                             333    83.7      37.0
    Unknown                               3     0.8       0.3


    In this Table the authors compare the number of persons who were justifiably shot and killed, to the number of persons who were killed accidentally, criminally, or by suicide. The “relative risk” value in Table 3 is a statistical way of showing that comparison. The relative risk values in Table 3 show that the researchers counted 4.6 times as many “criminal homicides” as “self-protection homicides.”102 The authors did not provide a “confidence interval” which would describe the statistical accuracy of the “relative risk” figures.103
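
    Although the authors reported no confidence interval, one can be approximated for a ratio of counts. The sketch below uses a standard Poisson approximation (our assumption, not the authors’ method), which puts the standard error of the log ratio at the square root of (1/a + 1/b):

```python
import math

# Table 3 counts as reported
self_protection = 9    # baseline category: self-protection homicides
criminal = 41          # criminal homicides

rr = criminal / self_protection      # about 4.6, matching the reported figure

# Approximate 95% confidence interval on the log scale,
# assuming the counts behave as independent Poisson variables
se_log = math.sqrt(1 / criminal + 1 / self_protection)
low = math.exp(math.log(rr) - 1.96 * se_log)
high = math.exp(math.log(rr) + 1.96 * se_log)

print(f"relative risk {rr:.1f}, 95% CI roughly ({low:.1f}, {high:.1f})")
```

    On these counts the interval runs from roughly 2.2 to 9.4. An interval that wide is exactly the kind of qualifier that a bare “relative risk of 4.6” conceals.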

    (6) Comments on Table 3 Data

    In Table 3 Kellermann and Reay resort to a deceptive word game. They counted the number of deaths and homicides in the “398 gunshot deaths involving a firearm in the home.”104 That figure represents 54% of the total firearm-related deaths. They broke that 54% down into categories of homicide, and then simply calculated the ratios of the different categories to the base line of “self-protection homicide.” The trick was to use the statistical term “relative risk” to report these ratios.

    The word “risk” commonly means “a chance of injury” or “a hazard.”105 By using the word “risk,” Kellermann and Reay could mislead casual readers into thinking, for example, that a person has 4.6 times the “risk” of being wrongfully killed by a firearm in the home than using the firearm for self-protection. In epidemiology, however, “relative risk” has a limited meaning: it is a “ratio of incidence rates.”106

    “Relative risk” is an arithmetic ratio, not a description of a danger or hazard. One could calculate the “relative risk” ratios for various baseball home run hitters: take the worst home run hitter’s record for the year as the base line (1.0), and then compare all of the other hitters’ home run tallies using ratios. You would then have the “relative risk” of a home run for each hitter.
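
    The home-run analogy can be made concrete. The same arithmetic that produced Table 3 yields a “relative risk” of hitting a home run, which obviously describes no hazard at all (the hitters and tallies below are invented for illustration):

```python
# Hypothetical season home-run tallies (invented for illustration)
home_runs = {"Hitter A": 2, "Hitter B": 14, "Hitter C": 40}

# The worst hitter's tally serves as the base line (relative risk = 1.0)
baseline = min(home_runs.values())

relative_risk = {name: tally / baseline for name, tally in home_runs.items()}
# A ratio of past tallies, not a danger: A 1.0, B 7.0, C 20.0
```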

    By using the term “relative risk” rather than just “ratio,” Kellermann and Reay employed an old language trick. In logic, they committed an “equivocation”: they used a word with more than one meaning and allowed the reader to understand it differently from its actual use.107

    Moreover, the relative risk is a ratio or measure of past events in the study population, not a description of the present or future. Using the double meaning of the word “risk” allows the reader to believe that there is a present or future danger or hazard, when the numbers in Table 3 merely report ratios of past events. The readers’ mistake makes for great anti-gun headlines, however.

    E. The Admitted Limitations and Qualifications

    Likely absent from most reporting of the Kellermann and Reay study are the limitations and qualifications contained in the article itself. Here is a brief list of the authors’ admissions:108

    • The study’s observations came from a large urban population, and might not apply to rural areas;
    • Other metropolitan counties have different rates of homicide, suicide, and firearm ownership, all of which might produce different results;
    • The study did not include “cases in which burglars or intruders are wounded or frightened away by the use or display of a firearm;”
    • The study did not identify how often “would-be intruders have purposefully avoided a house known to be armed;” and
    • The study did not report the numbers of nonlethal firearms injuries.

    Importantly, the authors indicate that a “complete determination of firearm risks versus benefits would require that these figures [above] be known.”109 This final admission is decisive because it shows that the doctors cannot properly prescribe a social policy “cure” for firearm-related deaths: they do not have enough information.

    The authors also indicated that they “found the most common form of firearm-related death in the home to be suicide.”110 Indeed, Table 3 data show that there were 8 times as many suicides as criminal homicides. Even so, the authors admit that “the precise nature of the relation between gun availability and suicide is unclear.”111 Citing conflicting studies in Great Britain and Australia, Kellermann and Reay themselves were unable to say that having a firearm in the home was a “risk factor” or “cause” of suicide.112

    The authors also admitted that they could “not determine … whether guns kept for protection were more or less hazardous than guns kept for other reasons.”113 In this admission, however, the authors cleverly inserted the term “hazardous.” Using “hazardous” reinforces the readers’ misimpression that the “relative risk” figures report an actual physical danger rather than a statistical ratio.

    F. The Unadmitted Limitations and Biases

    Hidden in the fine print is a critical fact: Dr. Reay was the medical examiner for King County who officially certified the causes of death.114 For all of the data used in the study, Dr. Reay was the one who had determined whether deaths were suicide or accidental. In other words, the reports of the cause of death came from one of the researchers, and not necessarily from a disinterested third party acting in an official capacity. This fact at least raises a question about the motivations of Dr. Reay when certifying the causes of death.

    One telling sentence further evidences Dr. Reay’s special interest in “gun deaths” while he was acting as medical examiner. The article reports that “[i]n eight cases, the medical examiner’s case files specifically noted that the victim had acquired the firearm within two days of committing suicide.”115 The article does not indicate whether the medical examiner or others inquired about the purchase history of firearms or other weapons in other non-suicide cases. The authors did admit, however, that their “case files rarely identified why the firearm involved [in a death] had been kept in the home.”116

    Also in the fine print was another key fact: “All homicides resulting in criminal charges and all unsolved homicides were considered criminal homicides.”117 Lumping all cases together as “criminal homicide” merely because someone was “charged” with a crime could be a serious defect in the study for several reasons:

    • The article does not specify what kind of “criminal charges” were brought against the gun user — they might not even have been homicide charges;
    • The article does not say whether any of the firearm users were convicted of any charges — the prosecutor or courts might have eventually found the killing justified;
    • The article does not reveal any facts about the firearm user’s mental state at the time of the shooting — the firearm user might have feared for his or her life; and
    • The article does not indicate whether the victim of the shooting was a career criminal, a fact which would objectively support a self-protection classification.

    Another potential problem with mortality studies of this kind is that the populations are not stable. Urban populations change constantly: the absolute numbers, the number of career criminals, the rate of firearm ownership, the reasons people have for owning firearms, and so on. The authors’ conclusions, with their implications of “relative risk,” do not account for changes in the populations that occurred during the study timeframe.118

    This problem relates directly to the Kellermann and Reay study’s failure to address the concept of “exposure.” Epidemiologists typically try to gauge the risk of disease that accrues from a person’s being exposed to certain risk factors or disease carriers.119 The researchers here treated possession of a firearm at the time of the homicide as proof of “exposure.” Whether the person had the firearm for two days or twenty years was not considered. Yet, to estimate the risks related to firearm possession in the home, based on “exposure” to firearms, they would want to know:

    • how long the owner had kept the firearm in the home;
    • if the firearm had ever been used for any purpose before the homicide;
    • if the owner had always known where the firearm was;
    • the owner’s reasons for owning the firearm;
    • whether the owner had other firearms;
    • whether the owner had firearms experience.

    Without these kinds of facts, there is no way to know how and what sorts of “exposure” to firearms might have contributed to the eventual homicides.

    G. Conclusions

    The Kellermann and Reay study provided data showing that firearms in homes were used in homicide, suicide and self-defense in certain proportions in King County over a six-year period. The study data and results do not directly support generalizations about other urban areas or rural areas. The authors used the term “relative risk” to describe what were simply arithmetic ratios of the reported types of deaths in the study records.

    By the authors’ own admission, the study did not consider the crime deterrent effects of firearm ownership in general or by particular individuals. It did not report data about woundings of burglars or intruders. It classified homicides as “criminal” based upon whether “charges” were filed, not upon whether the firearm owner had justifiably used the firearm.

    Accordingly, the study data do not support the authors’ suggestions that firearm ownership in the home is more dangerous than being unarmed.120

    Checklist for Evaluating Study Conclusions

    General things to do to judge the weight and credibility of an epidemiology study:

    1. Read the Actual Study.

    • Do the authors’ conclusions match the headlines?
    • Do the authors’ reported data actually support their stated conclusions?

    2. Evaluate the Underlying Assumptions.

    • Is the “disease” narrowly and specifically defined?
    • Is there an adequate definition of the levels of “exposure” to the risk factor?

    3. Evaluate the Claimed Association.

    • Based on the reported “odds ratio” (or risk ratio), is it a “weak” or a “strong” association?
    • Is the association supported by the 95% confidence interval?
    • Are all of the plausible risk factors assessed?

    4. Evaluate the Causation Claims

    • Do the authors claim that a certain “risk factor” causes a “disease?”
    • Do the authors support their claim with evidence in addition to the statistical association?
    • Does the causation claim make sense?

    5. Evaluate the Data Collection Techniques.

    • Small or unrepresentative sample?
    • Biases in interviewers?
    • Biases in respondents?
    • Are the data themselves accurate?

    6. Evaluate the Logic of the Conclusion.

    • Do the authors assert that their conclusions apply to populations outside of the study population?
    • Do the conclusions improperly attribute characteristics of the group (e.g. averages) to individual members of the group?
    • Do the conclusions include hasty generalizations or prescriptions for general public policy?

    7. Evaluate the Authors’ Admitted Limitations.

    • Do the authors admit potential biases and limitations?
    • Are there some biases or limitations that the authors overlooked?
    • How did the authors try to minimize the potential biases?
    • Do the authors prescribe social policy or political action based only on their study results?

    Using this checklist and the examples in this article, gun rights advocates can learn how to challenge potentially bogus “scientific studies,” debunk the headlines, and disarm the data doctors.
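    Item 3 of the checklist turns on two numbers: the odds ratio and its 95% confidence interval. For readers who want to check such figures themselves, the arithmetic can be sketched in a few lines of Python. The counts below are hypothetical, and Woolf’s logit method shown here is one standard way to approximate the interval; a published study may well use a different technique:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% confidence interval for a
    2x2 case-control table, using Woolf's logit method.

    a = cases exposed to the risk factor, b = controls exposed,
    c = cases not exposed,                d = controls not exposed.
    """
    odds_ratio = (a * d) / (b * c)          # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(45, 30, 55, 70)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

    With these invented counts the odds ratio comes out near 1.9 and the interval runs from roughly 1.1 to 3.4: the interval excludes 1.0, so the association is nominally significant, yet an odds ratio below 3 would still count as a “weak” association under the criteria discussed in this article.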


    ** Adjunct professor, Legal Research & Writing, George Washington University School of Law. I thank my wife Connie for her total support and encouragement, and Jay E. Simkin for his incisive editing. Any mistakes are mine.

    1 S. Patrick Kachur, et al., School-Associated Violent Deaths in the United States, 1992 – 1994, 275 JAMA 1729 (June 12, 1996).

    2 Stephen P. Halbrook, That Every Man Be Armed: The Evolution of a Constitutional Right, 58-72, 80-84 (1984).

    3 Craig Zwerling & James A. Merchant, Firearms Injuries: A Public Health Approach, 9 Am. J. Prev. Med. Supp. 1 (1995); see also Thomas B. Cole, Medical News & Perspectives: Franklin E. Zimring on Law and Firearms, 275 JAMA 1709 (June 12, 1996) (interview with law professor who favors public health approach).

    4 Nicholas Johnson, A Public Health Response to Handgun Injuries: Prescription — Communication and Education, 9 Am. J. Prev. Med. Supp. 47 (1995) (emphasis added).

    5 Zwerling & Merchant, supra note 3, at 1.

    6 Id.

    7 See Stephen P. Teret et al., Gun Deaths and Home Rule: A Case for Local Regulation of a Local Public Health Problem, 9 Am. J. Prev. Med. Supp. 44 (1995).

    8 Alan J. Silman, Epidemiological Studies: A Practical Guide 3 (1995).

    9 Id. at 4 (Table 1.1).

    10 E.g., Arthur L. Kellermann, et al., Gun Ownership as a Risk Factor for Homicide in the Home, 329 New Engl. J. Med. 1084 (Oct. 7, 1993), cited in Mary J. Vassar & Kenneth W. Kizer, Hospitalizations for Firearm-Related Injuries, 275 JAMA 1734 & fn. 23 (June 12, 1996), and cited in David E. Nelson, et al., Population Estimates of Household Firearm Storage Practices and Firearm Carrying in Oregon, 275 JAMA 1744 & fn. 8 (June 12, 1996); Arthur L. Kellermann & Donald T. Reay, Protection or Peril? An Analysis of Firearm-Related Deaths in the Home, 314 New Engl. J. Med. 1557 (June 12, 1986), cited in Roberta K. Lee & Mary J. Harris, Unintentional Firearm Injuries: The Price of Protection, 9 Am. J. Prev. Med. Supp. 16, 18 & n. 31 (1995), cited in David E. Nelson, et al., Population Estimates of Household Firearm Storage Practices and Firearm Carrying in Oregon, 275 JAMA 1744, 1748 & fn. 66 (June 12, 1996), and cited in Arthur L. Kellermann, et al., Gun Ownership as a Risk Factor for Homicide in the Home, 329 New Engl. J. Med. 1084 & fn. 4 (Oct. 7, 1993).

    11 See H. Checkoway et al., Research Methods in Occupational Epidemiology 4 (1989); Harland Austin et al., Limitations in the Application of Case-Control Methodology, 16 Epidemiologic Reviews 65, 66 (1994); see also Sherwin B. Nuland, Doctors: The Biography of Medicine 244-247 (1988).

    12 Silman, supra note 8, at 11.

    13 Id. at 20-23.

    14 Bert Black & David E. Lilienfeld, Epidemiologic Proof in Toxic Tort Litigation, 52 Fordham L. Rev. 732, 752-753 (1984).

    15 Silman, supra note 8, at 3-25; Black & Lilienfeld, supra note 14, at 750-761.

    16 See Robert H. Fletcher, et al., Clinical Epidemiology 102 (2d ed. 1988).

    17 See Arthur L. Kellermann, Preventing Firearm Injuries: A Review of Epidemiologic Research, 9 Am. J. Prev. Med. Supp. 12-15 (1995) (citing other Kellermann studies); see also footnote 10 above.

    18 Arthur L. Kellermann, et al., Gun Ownership as a Risk Factor for Homicide in the Home, 329 New Engl. J. Med. 1084 (Oct. 7, 1993) (referred to below as “Kellermann Study”).

    19 Id. at 1084.

    20 Id. at 1084-85.

    21 Id. at 1088 (condensed from Table 3).

    22 Id. at 1084.

    23 Id. at 1084-85.

    24 Id. at 1088.

    25 Id. at 1084-85.

    26 Id. at 1085.

    27 Id. at 1085.

    28 Id. at 1085.

    29 Id. at 1085.

    30 Id. at 1085.

    31 For brevity and simplicity throughout this article, numbers usually will be rounded to the nearest whole number, and only the percentages are set forth; the actual counts are omitted. Due to rounding, some percentages may not add up to exactly 100%.

    32 Id. at 1086.

    33 To be fair, the researchers did aggregate some of these smaller categories in the text of their article at page 1085.

    34 Id. at 1085.

    35 Charles C. Mann, Press Coverage: Leaving Out the Big Picture, 269 Science 166 (July 14, 1995).

    36 Darrell Huff, How to Lie with Statistics 46-47 (1954).

    37 Kellermann Study, supra note 18, at 1088.

    38 I have edited and abbreviated Kellermann’s Table 3 to make it less cumbersome, and I have added the letters in italics to help the reader refer to the table.

    39 Silman, supra note 8, at 16-18. Generally speaking, the “odds ratio” indicates the likelihood that a person who got the disease was exposed to the risk factor, as compared to the likelihood that a person without the disease was exposed to the risk factor. The odds ratio is used in retrospective studies to estimate the risk of getting the disease from exposure to a factor. There are standard methods of calculating the odds ratio for different types of epidemiological studies. Presumably the Kellermann researchers used the appropriate method in their study.

    40 Silman, supra note 8, at 17-18; Fletcher, supra note 16 at 197.

    41 Melissa Moore Thompson, Causal Inference in Epidemiology: Implications for Toxic Tort Litigation, 71 N. C. L. Rev. 247, 251-252 (1992) (citing authorities); but cf. Harland Austin et al., Limitations in the Application of Case-Control Methodology, 16 Epidemiologic Reviews 65, 66 (1994) (association is “weak” if the odds ratio is 1.5 or less).

    42 Gary Taubes, Epidemiology Faces Its Limits, 269 Science 164, 165 (July 14, 1995) (original italics).

    43 Id.

    44 Thompson, supra note 41, at 252 (citing Ernst L. Wynder, Guidelines to the Epidemiology of Weak Associations, 16 Preventive Med. 139 (1987)).

    45 Silman, supra note 8, at 111-112. The 95% confidence interval is calculated using standard mathematical techniques. The technique used in a study depends upon the type of study and data. Presumably, the Kellermann researchers used the appropriate technique for their study.

    46 Silman, supra note 8, at 19.

    47 Silman, supra note 8, at 111-112.

    48 Silman, supra note 8, at 111-112.

    49 Kellermann Study, supra note 18, at 1089 (Table 4: “Variables Included in the Final Conditional Logistic-Regression Model Derived from Data on 316 Matched Pairs of Case Subjects and Controls.”)

    50 See Huff, supra note 36, at 45-49.

    51 Kellermann Study, supra note 18, at 1088-89. The article also indicated the researchers’ attempts to limit the effects of these biases, although it admitted several biases were difficult to counteract.

    52 Id. at 1089.

    53 Id.

    54 Id.

    55 Id.

    56 “Confounding” occurs when the study results are affected by a third factor which might independently influence the cause and effect relationship. A confounding factor can mask, diminish, reverse, or exaggerate an association. Joseph H. Abramson, Making Sense of Data 44-46, 61-62 (2d ed. 1994).

    57 Kellermann Study, supra note 18, at 1089.

    58 G. Taubes, Epidemiology Faces Its Limits, 269 Science 164-169 (July 14, 1995); C. C. Mann, Press Coverage: Leaving Out the Big Picture, 269 Science 166 (July 14, 1995).

    59 Abramson, supra note 56, at 301-302 (some of the definition language in this section comes directly from the Abramson book); see also Thompson, supra note 41, at 268-274; Richard W. Stevens, Attacking Causation Evidence in Power Line EMF Injury Claims, 44 Fed’n Ins. & Corp. Couns. Qtly 199-209 (Winter 1994).

    60 See note 14 supra and associated text.

    61 Instead of being risk factors, these items might be “risk markers.” They might be indicators that there is a risk of eventual homicide, without actually being causal agents themselves. Abramson, supra note 56, at 221-223.

    62 See Lynne Lamberg, Medical News & Perspectives: Prediction of Violence Both Art and Science, 275 JAMA 1712 (June 12, 1996) (recommending that physicians should “screen all patients for such personality traits associated with violence as impulsivity, low frustration tolerance, and an inability to tolerate criticism.”); Lynne Lamberg, Medical News & Perspectives: Kids Who Kill: Nature Plus (Lack of) Nurture, 275 JAMA 1712-13 (June 12, 1996) (innate and early behavioral characteristics of children who later become violent offenders).

    63 Thomas B. Cole, Medical News & Perspectives: Franklin E. Zimring on Law and Firearms, 275 JAMA 1709 (June 12, 1996) (“[T]here are conflict-prone people, just as there are accident-prone people”).

    64 Silman, supra note 8, at 19-20.

    65 Abramson, supra note 56, at 95, 164-65 (one variant is known formally as “Berksonian bias”).

    66 Id. at 95, 277-278.

    67 Kellermann Study, supra note 18, at 1085: “[A]n adult (a person 18 years or older) in the first household with a member who met the matching criteria was offered a $10 incentive and asked to provide an interview.”

    68 Id. at 1086.

    69 Id. at 1089.

    70 Abramson, supra note 56, at 111, 278.

    71 Id. at 110, 164.

    72 Id. at 277.

    73 Kellermann Study, supra note 18, at 1085.

    74 Ernst L. Wynder, et al., The Wish Bias, 43 J. Clin. Epidemiology 619 (1990).

    75 Id.

    76 Austin, supra note 11, at 71.

    77 “[U]nder conditions of high motivation, we often commit a memory error called confabulation; if unable to retrieve a certain item from memory, we manufacture something else that seems appropriate.” Psychology Today: An Introduction 178 (2d ed. 1972) (original emphasis).

    78 Huff, supra note 36, at 21-23; Abramson, supra note 56, at 94.

    79 Kellermann Study, supra note 18, at 1088.

    80 See Winston W. Little, et al., Applied Logic 210, 237 (1955); Huff, supra note 36, at 37-41.

    81 David E. Lilienfeld & Paul D. Stolley, Foundations of Epidemiology 288 (3d ed. 1994).

    82 Little, supra note 80, at 237.

    83 Little, supra note 80, at 6, 8, 238. For a detailed discussion of statistical fallacies of this sort, see Mervyn Susser, Causal Thinking in the Health Sciences 60 (1973).

    84 Kellermann Study, supra note 18, at 1090.

    85 Id. at 1084.

    86 Arthur L. Kellermann and Donald T. Reay, Protection or Peril? An Analysis of Firearm-Related Deaths in the Home, 314 New Engl. J. Med. 1557 (1986) (the “Kellermann & Reay study”).

    87 Id. at 1560.

    88 Id. at 1557.

    89 Id. at 1557.

    90 Id. at 1557.

    91 Id. at 1557.

    92 Id. at 1557.

    93 Id. at 1557-58.

    94 Id. at 1559.

    95 Id. at 1558-1559.

    96 Id. at 1558.

    97 Id. at 1558.

    98 Id. at 1558.

    99 Id. at 1558.

    100 See notes 45-48, supra, and accompanying text.

    101 Kellermann & Reay study, supra note 86, at 1559.

    102 Id. at 1559 (first paragraph).

    103 See notes 45-48, supra, and accompanying text.

    104 Kellermann & Reay study, supra note 86, at 1559.

    105 Webster’s New Universal Unabridged Dictionary 1565 (2d ed. 1983).

    106 Abramson, supra note 56, at 199.

    107 See S. Morris Engel, With Good Reason: An Introduction to Informal Fallacies 59-62 (1976).

    108 Kellermann & Reay study, supra note 86, at 1559.

    109 Id.

    110 Id.

    111 Id.

    112 Id.

    113 Id. at 1559.

    114 Id. at 1558 (small print, left column).

    115 Id. at 1558 (emphasis added).

    116 Id. at 1559 (emphasis added).

    117 Id. at 1558 (small print, left column).

    118 See Abramson, supra note 56, at 100-102; see also Thomas Sowell, The Vision of the Anointed 48-51, 61 (1995) (statistical and logical errors arising from ignoring changes in study population over time).

    119 Silman, supra note 8, at 19.

    120 For a discussion, on a global scale, of how unarmed citizens face a greater risk of death by genocide than do armed citizens, see Jay Simkin, Aaron Zelman & Alan M. Rice, Lethal Laws: Gun Control is the Key to Genocide (1994).

    Reprinted with permission of Jews For The Preservation of Firearms Ownership.