Issues and Implications

The Uses and Abuses of Science in Sexual and Reproductive Health Policy Debates

Adam Sonfield, Guttmacher Institute

In making the case for particular policies, advocates and policymakers in decades past were often content to ignore, or even denigrate, science. Today, however, research findings are cited by almost everyone to buttress a political position. Yet, this has led to new problems. Whether the subject is the teaching of evolution in public schools, the public health consequences of pollution or the viability of missile defense systems, polarization over what a given study says and controversy over whether research is being applied appropriately to the policy-making process have become commonplace. Making one’s way through the resulting landscape of information, and deciding which findings are trustworthy, is becoming increasingly difficult.

On issues related to sexual and reproductive health, particularly around questions about abortion and teen or nonmarital sex, examples of such polarization and controversy abound.

In August 2005, for instance, the Journal of the American Medical Association (JAMA) published an article concluding, after a review of dozens of studies, that fetuses are unlikely to have developed the neurological connections and consciousness necessary to perceive pain before 29 or 30 weeks' gestation, well after all but an infinitesimal proportion of abortions are performed in the United States. The authors determined that signs of activity in fetuses and premature babies often cited as evidence that they perceive pain sooner are more likely to be reflex motions and hormonal responses that have also been seen among babies born without a brain and among adults in a vegetative state. Their recommendation was that discussions of fetal pain with women obtaining abortions before the third trimester of pregnancy should not be mandatory and that fetal anesthesia should not be routinely offered, because the anesthesia itself may pose risks for the woman. Antiabortion advocates swiftly denounced the study and vowed to continue their push for legislation in Congress and in the states that would do exactly the opposite; four states have already enacted such legislation.

Another recent example involved competing studies on the effectiveness of virginity pledges in protecting young adults against sexually transmitted infections (STIs). Yale University's Hannah Brückner and Columbia University's Peter Bearman, in an article published in March in the Journal of Adolescent Health, looked at data from urine tests for several STIs and found that adolescents who had pledged to abstain from sex until marriage had STI rates as young adults that were no different, statistically speaking, from those of nonpledgers. The Heritage Foundation's Robert Rector and Kirk Johnson countered in June with two papers, presented at a conference on welfare policy, that argued otherwise, citing several other measures based on the young adults' own reports of whether they had ever been infected with or diagnosed with an STI. The research from both teams has been cited in policy debates over sex education programs and funding, because virginity pledge programs are seen as an important example of an approach that emphasizes abstinence as the only acceptable behavior for unmarried people.

One of the most familiar examples of the intersection of science and abortion politics is the theoretical link between abortion and breast cancer. The possibility of a connection has been studied extensively for several decades, but until the mid-1990s, the evidence had been inconsistent. Abortion opponents seized upon a 1996 analysis that, by combining the results of multiple studies, concluded that abortion increased a woman's risk of breast cancer by 30%. Other researchers and major medical groups emphasized that these studies all suffered from the same key flaw and cited a new generation of studies that have consistently failed to find a link. Exhaustive reviews published in 2003 and 2004 by panels convened by the U.S. and British governments reconfirmed that the evidence does not support such a connection. Nevertheless, many abortion opponents continue to rely on the discredited studies to support public education campaigns and to justify legislation—already law in three states—requiring that women be told about the supposed link when seeking an abortion.

Why Research Is Complicated

In each of these examples, it may be understandable that policymakers, the media and the general public have some difficulty sorting out what they should believe. The methodologies used by many researchers are complex—and necessarily so. Over hundreds of years, scientists have developed methods for appropriately asking and answering important questions, methods designed to overcome substantial difficulties that have the potential to lead to incorrect answers.

In the first place, scientists need to be assured that their research is measuring what they intend to measure. That can be far more difficult than it seems. A typical approach when studying human behavior is to question people directly. Yet, people may not know the answer, may be misinformed or may even lie, especially when asked about issues as sensitive as abortion or STIs. In fact, close to half of women do not report abortions on surveys, making it difficult to draw valid associations between abortion and any outcome, positive or negative. In other cases, such as the issue of fetal pain, the questions cannot be asked directly, and researchers have instead relied upon observation and their ever-improving knowledge of fetal development.

Moreover, some groups of people may be particularly likely to answer incorrectly. People who have taken a virginity pledge may be unwilling to admit to having an STI, or even to be tested for one. That is why Brückner and Bearman relied upon urine tests to measure STI rates, rather than the self-report measures used by Rector and Johnson. In contrast, women who have breast cancer—and may be searching for an explanation for their illness—may be unusually likely to report a previous abortion. The central flaw of the earlier studies on abortion and breast cancer was comparing the self-reported abortion histories of healthy women with those of women with cancer. The more recent studies addressed that flaw by using data about women's abortion history that could not be biased by knowledge of cancer—for example, data taken directly from their medical records at the time of the abortion.
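The mechanics of that flaw are worth spelling out. The sketch below, in Python, uses invented numbers (they are drawn from no study discussed here) to show how differential reporting alone can manufacture an association where none exists:

    # Hypothetical illustration of recall bias in a case-control comparison.
    # The true prevalence of prior abortion is identical in both groups, so
    # there is no real association; only the reporting rates differ.
    true_rate = 0.25             # prior-abortion prevalence in both groups (invented)
    report_rate_cases = 0.90     # women with cancer report prior abortions more fully
    report_rate_controls = 0.60  # healthy women underreport on surveys

    def reported_odds(prevalence, report_rate):
        """Odds of *reporting* a prior abortion; only reported abortions are observed."""
        p = prevalence * report_rate
        return p / (1 - p)

    odds_ratio = (reported_odds(true_rate, report_rate_cases)
                  / reported_odds(true_rate, report_rate_controls))
    print(f"apparent odds ratio: {odds_ratio:.2f}")  # ~1.65 despite no true difference

Drawing abortion histories from records made at the time of the procedure removes the reporting step that creates this artifact, which is why the newer studies are considered more trustworthy.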

An even higher hurdle for researchers is making the case that one thing actually causes another. The "gold standard" for research seeking to prove causality is the randomized controlled trial: Researchers randomly assign some patients to receive a treatment and others a placebo; the randomness provides the best assurance that differences in outcomes between the two groups are the result of the treatment. This type of research is often impossible in the realm of reproductive health. Researchers, for example, cannot ethically assign some women to have an abortion and others to carry an unintended pregnancy to term, a process that would most effectively gauge which option carries the lesser risk of future physical or mental health problems.
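The logic of randomization is easy to demonstrate with a small, purely hypothetical simulation (no real trial is being modeled here):

    # Hypothetical illustration of why randomization supports causal claims:
    # random assignment tends to balance traits, measured or not, across arms.
    import random

    random.seed(1)
    patients = [{"baseline_risk": random.random()} for _ in range(10_000)]

    treatment, placebo = [], []
    for patient in patients:
        (treatment if random.random() < 0.5 else placebo).append(patient)

    def mean_risk(arm):
        return sum(p["baseline_risk"] for p in arm) / len(arm)

    # With enough patients, the two arms end up nearly identical on the trait,
    # so any difference in outcomes can be credited to the treatment itself.
    print(f"treatment arm mean baseline risk: {mean_risk(treatment):.3f}")
    print(f"placebo arm mean baseline risk:   {mean_risk(placebo):.3f}")

Without random assignment, nothing guarantees that the groups start out alike.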

Instead, researchers must rely on observational studies, matching up, for example, women who have had an abortion with women who gave birth. These studies must account for a range of risk factors (called "confounding factors") that may be more common among one group than the other and may be difficult or impossible to measure. A study may find that women with a history of abortion have higher rates of mental health problems or drug abuse later in life, for example, but that may be the case only because, collectively, those women have higher rates of preexisting health problems, childhood exposure to sexual abuse or a history of risk-taking behavior.
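A companion sketch, again with invented numbers, shows how such a confounder plays out. The outcome below depends only on a preexisting risk factor, never on which group a woman is in, yet a naive comparison makes one group look worse:

    # Hypothetical illustration of confounding: the outcome depends only on a
    # preexisting risk factor, never on group membership, yet a naive
    # comparison suggests that one group fares worse.
    import random

    random.seed(2)

    def simulate(n, p_risk_factor):
        """Return (at_risk, had_problem) pairs; group membership has no effect."""
        rows = []
        for _ in range(n):
            at_risk = random.random() < p_risk_factor
            had_problem = random.random() < (0.30 if at_risk else 0.10)
            rows.append((at_risk, had_problem))
        return rows

    abortion_group = simulate(50_000, p_risk_factor=0.40)  # more preexisting risk (invented)
    birth_group = simulate(50_000, p_risk_factor=0.20)

    def rate(rows):
        return sum(had_problem for _, had_problem in rows) / len(rows)

    print(f"naive comparison: {rate(abortion_group):.3f} vs {rate(birth_group):.3f}")

    # Stratifying on the confounder makes the apparent difference vanish:
    for flag in (True, False):
        a = rate([r for r in abortion_group if r[0] == flag])
        b = rate([r for r in birth_group if r[0] == flag])
        print(f"at_risk={flag}: {a:.3f} vs {b:.3f}")

Careful observational studies measure and adjust for such factors, but a factor that goes unmeasured cannot be adjusted away.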

The Importance of Process

Good science has numerous built-in protections that help ensure the accuracy of researchers' findings and conclusions. These protections, which enable scientists to conduct research that can be fairly evaluated by their peers, also serve to reduce the chances that scientists' personal biases distort their findings.

Published scientific research, when reputable, reflects these protections. It includes detailed descriptions of the research methodology, a transparency that enables other scientists to attempt to replicate the study and that allows readers to assess the study's design. It shows most or all of the data used to arrive at key conclusions, typically in tables and charts conforming to widely used standards. It provides sources for facts, ideas and studies that are used, and attempts to account for conclusions that differ from prior research.

One particularly important sign of a study's quality is where it is published. Most prestigious are scholarly journals, which are often run by professional associations that establish research standards in their field. These journals rely both on the professional judgment of their editorial staff and on a peer-review process that, in some cases, is "blinded"—neither the reviewers nor the authors are identified to each other. This process allows several independent reviewers to gauge the quality of a study based on its methodology and logic. Many other reports are published in the form of "white papers" or monographs directly by the institutions that conducted or supported the research; if reputable, these reports provide readers with ample detail to evaluate the research, and the more rigorous rely upon and acknowledge the aid of external reviewers.

Notably, Rector and Johnson's papers on virginity pledges were released with great fanfare without being published in a peer-reviewed journal or disclosing any form of outside review. Releasing a preliminary study at a conference is nothing unusual; it is a standard way for researchers to get feedback that can help make a study better suited for publication. Yet, the Heritage researchers have not deemed their studies preliminary and have sought and garnered substantial media attention for their findings. A number of independent researchers (along with Bearman) urged the authors to submit their papers for peer review so that the papers could be revised to a standard suitable for publication in a respected journal.

Far too often in the uproar over sexual and reproductive health issues, the protections built into the scientific process are simply ignored by advocates opposed to a given study's findings. In the case of the JAMA fetal pain study, antiabortion activists focused almost exclusively on the "bias" of two of the study's five authors, asserting that the study was inherently tainted. The lead author, now a medical student, reportedly worked for eight months in 1999-2000 as a lawyer for NARAL Pro-Choice America. Another author, an academic and obstetrician-gynecologist, serves as medical director of a clinic that provides abortion services, and has performed abortions herself. The most extreme critics used this "evidence" to attack JAMA and its editor-in-chief as well.

Without question, reputable published science should tell readers about potential conflicts of interest. That obligation is generally viewed narrowly, however—encompassing an author's employer and financial ties, including funding for the research, but not political affiliation. In response to the fetal-pain controversy, some researchers and journal editors asserted that—at least for research tied to an issue as explosive as abortion—disclosing these other ties would have been prudent, if only to help fend off a predictable controversy. Indeed, JAMA's editor asserted that she would have disclosed the NARAL affiliation had she known about it. Her primary response to the controversy, however, was to defend the integrity of the scientific process, and to emphasize that because the review met the journal's standards for quality, it would have been published regardless.

In the virginity pledge debate, Rector and Johnson acted in some ways more like advocates than researchers, attributing differences in research methodology to a perceived ideological agenda. They went so far as to say that Brückner and Bearman "mislead the press and public" and had conducted "junk science." In doing so, they failed to give due deference to the stated, scientific reasons for why Brückner and Bearman did things differently, such as relying on urine tests rather than questions likely to elicit biased and inaccurate answers.

The Weight of the Evidence

There are no guarantees, of course, that even the most rigorous study in the most prestigious journal is correct in its conclusions. Science progresses by accumulating evidence from multiple studies, a key reason why transparency and replicability are vital. Moreover, science advances: Over time, scientists develop more refined methods, acquire more appropriate data and explore new explanations for old mysteries.

As a community, scientists look at the available evidence, evaluate its quality and come to a consensus about what is most likely to be true. They do this by submitting letters to journals, discussing research at conferences, testing competing theories, conducting literature reviews (such as the fetal pain study), combining and reanalyzing data from multiple studies (in so-called meta-analyses) and participating in consensus panels organized by professional associations, research institutions and government agencies (as in the debate over abortion and breast cancer). Even then, a new study could come along to shake that consensus, and conclusive proof that one thing causes another is rare.
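To illustrate just one of those tools: a basic fixed-effect meta-analysis pools studies by weighting each result by its precision. The sketch below uses invented studies and the standard inverse-variance method; it is not a reconstruction of any analysis cited above:

    # Hypothetical illustration of a fixed-effect meta-analysis: pooling
    # invented odds ratios by inverse-variance weighting of their logarithms.
    import math

    # (odds_ratio, standard_error_of_log_odds_ratio) for each invented study
    studies = [(1.5, 0.30), (0.9, 0.15), (1.1, 0.10), (1.3, 0.25)]

    weights = [1 / se ** 2 for _, se in studies]  # more precise studies count more
    pooled_log = sum(w * math.log(o) for (o, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    low = math.exp(pooled_log - 1.96 * pooled_se)
    high = math.exp(pooled_log + 1.96 * pooled_se)
    print(f"pooled odds ratio: {math.exp(pooled_log):.2f} (95% CI {low:.2f}-{high:.2f})")

Real meta-analyses add checks for heterogeneity and publication bias, and even then a pooled estimate inherits the limitations of its inputs, so consensus remains provisional.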

Nevertheless, the conduct and widespread dissemination of policy-relevant research—at least potentially—permits a society to benefit from "evidence-based" policy making, which can be a momentous improvement over making important decisions based solely on ideology or emotion. It also can present serious problems, however, if the various key actors do not uphold the distinctive obligations of their given professions. To be sure, researchers provide valuable assistance to the policy-making process when they provide evidence that they and others can use to advocate for specific policies; yet, they have an obligation to act first as scientists, cognizant of the limitations of the data with which they are working and unwilling to interpret those data in ways that are influenced unduly by their personal ideology. Journalists who write about research findings have an obligation to educate themselves on the subjects they cover, to be careful about how they cover preliminary and unpublished research and to recognize that science works through consensus, not polarization; in contrast to what might be considered traditional conventions of journalistic balance, when covering research, there are not always two points of view worthy of equal deference.

For their part, most advocates and policymakers may never be able to understand all of the intricacies of a particular research study. They, too, can increase their scientific savvy, and they can take care not to exaggerate the implications of research—a temptation for partisans of any position. Yet, they generally must rely upon sources they trust to help them know what findings to believe. What may be most important for them is to understand that they must put their trust in individuals and organizations not primarily because they have a particular ideology, but because they have a track record of responsible research and analysis.