Securing Life Sciences Research in an Age of Terrorism

Bioscientists must work closely with the government to minimize the possibility that vitally important research could be misused to threaten public health or national security.

The anthrax attacks that closely followed the 9/11 terrorist attacks helped create a sense that danger was everywhere. They also precipitated a crisis for science. Government statements that the anthrax attacks had most likely been carried out by a U.S.-based microbiologist who had obtained the deadly Ames strain of Bacillus anthracis from a culture collection raised serious concerns about the security of potentially dangerous biological agents, as well as about the trustworthiness of scientists. The U.S. public began to take a dual view of the scientific community: capable of doing both great good (lifesaving medical treatments) and great harm (research that could be abused by terrorists).

In the five years since the attacks, the government has taken a number of sound steps to prevent terrorists from acquiring dangerous pathogens and toxins that could be used against civilian populations. It has also begun to deal with the more complex issue of how to secure new knowledge that could be misused to cause harm. In taking these actions, however, the government has raised fears among scientists that national security concerns will undermine scientific progress. So far, these fears have proven unfounded. Instead of taking a heavy-handed approach, the government has in large part supported a system of voluntary self-governance by scientists. Even so, a significant part of the scientific community continues to oppose the additional government scrutiny, for reasons ranging from a belief that it will not be effective to the larger concern that it could inhibit or even preclude research that could lead to effective countermeasures against potentially dangerous bioagents.

But the potential for a more intrusive government is not going away; indeed, there is a possibility, at least in the United States, that the government could become much more actively involved. We in the scientific community must work to ensure that reasonable laws are implemented and that the regulations imposed to date are fully complied with, so that the government does not enact a regulatory system that isolates the U.S. life sciences community from its critical international collaborators. Most importantly, we must ensure that the self-governance structure works globally, so that everything possible is done to minimize the risk that vitally important biological work could be misused to threaten public health or national security.

In the wake of the anthrax attacks, the government greatly increased funding for biodefense programs in a number of agencies. For example, the biodefense research budget at the National Institutes of Health (NIH) rose from $25 million in 2001 to about $1.5 billion in 2002 and is now being sustained at about $1.7 billion per year. Some of the funding is being spent to build the infrastructure needed to study infectious diseases, including highly secure laboratories. Other funds are being used to create a stockpile of vaccines and therapeutics.

Some critics have called the increased spending excessive, arguing that a biological attack is unlikely. Even if these critics were correct, the spending would still not be wasteful. In fact, it is likely to be highly beneficial, because much biodefense research complements the research being conducted to defend against naturally occurring infectious diseases. Biodefense research will help advance overall understanding of pathogenesis and the immune response, leading to the dual benefits of improved national security and enhanced global health. It is also highly likely that investments in biodefense research will leave us better prepared to deal with pandemic influenza. Because of the susceptibility of the world’s populations to infectious diseases and the relative ease of bioterrorism, continued investment in biodefense R&D is sound national policy.

Government faces a fundamental problem in trying to prevent dangerous biological agents from being acquired by terrorists: With the exception of the smallpox virus, every pathogenic microorganism that is a potential threat occurs in nature. For example, anthrax occurs in African wildlife, and plague-causing bacteria are endemic in many animal populations, occasionally causing human infections. Cultures of these organisms are stored for research purposes in laboratories around the world, and there is always the possibility that they could be diverted for harmful purposes. Because it is impossible to eliminate these potential threats, the United States has sought to enhance the security of facilities that store them and to carefully control who has access to agents in quantities that could be used for bioterrorism.

Even before the anthrax attacks, there was a clear need to control access to dangerous human pathogens. In 1995, a member of the radical group Aryan Nations tried to obtain Yersinia pestis, the bacterium that causes plague, from the American Type Culture Collection. At that time, there were no laws in the United States controlling the possession or domestic acquisition of dangerous pathogens, and for the most part there was an openness and an unquestioning willingness to share biological agents. As a result of the 1995 incident, practices for supplying microbial cultures to the research community began to change, with some collections refusing to supply potentially dangerous agents to anyone.

After the 2001 anthrax attacks, the federal government sought to establish a formal oversight system for the possession of microorganisms and toxins that could be used as weapons. Appropriately designed, such a system would ensure that pathogens and toxins remain available to legitimate researchers while being denied to those who could use them to do mass harm. Initially, proposals surfaced that would have banned all foreigners from possessing the most dangerous pathogens in laboratories within the United States. Fortunately, such proposals were abandoned when it was recognized that they would isolate U.S. life scientists and impede collaborative research aimed at controlling infectious diseases and reducing the threat of bioterrorism. Material-control efforts for the life sciences soon focused on ways to ensure that any individual given access to select agents was trustworthy and that the agents would be secure within each facility that housed them.

The USA Patriot Act of 2001 restricted the possession of select agents only for people from countries designated as supporting terrorism and for individuals who are not permitted to purchase handguns. The act also made it a crime to knowingly possess any biological agent, toxin, or delivery system of a type or in a quantity that, under the circumstances, is not reasonably justified by a prophylactic, protective, bona fide research, or other peaceful purpose. Even though terrorists may still be able to obtain biothreat agents from nature and from sources outside the United States, the prohibitions of the Patriot Act are appropriate measures that contribute to a web of protection. They rest, however, on subjective determinations (for example, what constitutes a bona fide reason for possessing a select agent) and hence are viewed with concern by some who worry about government intrusion into civil liberties and freedom of inquiry.

The provisions of the Patriot Act were subsequently incorporated into the Public Health Security and Bioterrorism Preparedness and Response Act, known as the Bioterrorism Act of 2002. This act added requirements for regulations governing the possession of select agents, including clearances by law enforcement. Clinical laboratories were granted a critical exemption that permits them to legally isolate and identify (and thereby possess) select agents cultured from patients as part of the medical diagnostic process, even if they are not registered to possess select agents. This exemption is essential because there is no way to predict what disease a patient might have and thus no way to register in advance for the specific select agents that might be encountered. However, within a few days of isolating a select agent, a clinical laboratory must destroy it or transfer it to a laboratory registered to possess that agent, and it must notify public health authorities whenever a select agent has been isolated and identified.

TO PREVENT FUTURE BIOTERRORISM ATTACKS, WE WILL NEED TO GO BEYOND THE ISSUE OF HOW TO PREVENT TERRORISTS FROM OBTAINING DANGEROUS PATHOGENS AND TOXINS TO THE FAR MORE COMPLEX ISSUE OF THE SECURITY OF NEW KNOWLEDGE THAT COULD BE USED TO CAUSE HARM.

Because of the global distribution of pathogens that could be used for bioterrorism or biowarfare, it is also critical that other countries adopt appropriate regulatory oversight measures to ensure that individuals with access to agents that could be used for acts of bioterrorism are deemed trustworthy and that the agents are protected from potential misuse. The World Health Organization (WHO) has expanded its guidance for nations around the world to include biological security issues: “Security precautions should become a routine part of laboratory work, just as have aseptic techniques and other safe microbiological practices,” according to the WHO statement. “Laboratory biosecurity measures should not hinder the efficient sharing of reference materials, clinical and epidemiological specimens and related information necessary for clinical or public health investigations. Competent security management should not unduly interfere with the day-to-day activities of scientific personnel or be an impediment to conducting research. Legitimate access to important research and clinical materials must be preserved. Assessment of the suitability of personnel, security-specific training and rigorous adherence to pathogen protection procedures are reasonable means of enhancing laboratory biosecurity. Checks for compliance with these procedures, with clear instructions on roles, responsibilities and remedial actions, should be integral to laboratory biosecurity programs and national standards for laboratory biosecurity.” In essence, WHO has properly declared that the world’s culture collections (biological resource centers) must not become mail-order sources for biothreat agents and that diagnostic laboratories must not carelessly become a source of bioweapons for terrorists.

As a result of the new U.S. laws, biological resource centers and individual scientists now face new scrutiny and regulatory requirements that limit their ability to supply certain microorganisms to research, educational, and domestic laboratories. Many members of the public, as well as many in the security community, see this oversight as a prudent and critical national security measure. Many in the scientific community agree that it is necessary to retain the public trust. Others, though, feel that this scrutiny is ineffective and of little value, given that most of the pathogens of concern can be isolated from natural sources or are found in laboratories in developing countries that cannot afford biosecurity measures. Some in the scientific community believe that the imposition of biosecurity measures and the legal requirements governing the possession of select agents will inhibit the biomedical research needed for true biodefense.

In particular, many scientists object to the FBI screening that researchers in the United States must now undergo in order to possess select agents. A few researchers have chosen to destroy their cultures of select agents rather than submit to the screening. There are also concerns that the restrictions will limit international collaboration in areas of critical biomedical research. NIH now requires that collaborative international projects meet the requirements of the select-agent regulations, including personnel screening and laboratory physical security and inspections.

The concern about potentially excessive regulation is heightened by the recognition that attempts to contain cultures of dangerous pathogens within biological resource centers will at best be imperfect, especially as biotechnology advances. With synthetic biology, it is now at least theoretically possible to create virulent pathogenic viruses, including Marburg and Ebola. Although the select-agent regulations should apply to such genetic constructs, synthetic biology increases the possibility that terrorists could obtain bioweapons. According to an NIH advisory committee, “Individuals versed in, and equipped for routine methods in molecular biology can use readily available starting material and procedures to derive some select agents de novo. It is now feasible to produce synthetic genomes that encode novel and taxonomically unclassified agents with properties equivalent to, or ‘worse’ than, current select agents. The rapid rate of scientific and technological advancements outpaces the development of list-based regulations, whereas policies and best practices can more readily accommodate new breakthroughs.” Recognizing the danger, scientists, including pioneers in the field of synthetic biology, have been working to develop a system of voluntary constraints.

Such voluntary systems of responsible conduct within the scientific community and commercial sector are critical components of the protective web against bioterrorism. This is especially true as the biothreats move beyond those pathogens and toxins that can be listed and subjected to regulatory controls. As pointed out in the 2006 National Research Council (NRC) report Globalization, Biosecurity, and the Future of the Life Sciences, “[We need to] recognize the limitations inherent in any agent-specific threat list and consider instead the intrinsic properties of pathogens and toxins that render them a threat and how such properties have been or could be manipulated by evolving technologies.” A broad array of mutually reinforcing actions, implemented in a manner that engages multiple stakeholder communities, will be required to reduce the biological threats facing society. To stay ahead of future biothreats, we need to enhance the scientific and technical expertise within the intelligence community and position the scientific and public health communities to respond appropriately.

To prevent future bioterrorism attacks, we will need to go beyond the issue of how to prevent terrorists from acquiring dangerous pathogens and toxins to the far more complex issue of securing new knowledge that could be misused to cause harm. The fear that information from life sciences research might fall into the wrong hands has raised a great deal of anxiety within the scientific community and has left the public and policymakers uncertain about how to balance national security with traditional scientific openness. We now know, for example, that the former biowarfare programs in Iraq and South Africa relied heavily on open-source literature to develop biological weapons. Both programs recruited some of the brightest graduate students in several scientific fields and sent them to Western universities for advanced studies, thereby subverting the graduate educational enterprise. There is legitimate concern that al Qaeda and other terrorist organizations or individuals could do the same, using openly available scientific information to plan and carry out a major bioterrorism attack.

Since the anthrax attacks, the most contentious debate has been over possible constraints on the use of unclassified information in the life sciences. Concerns were raised in the media about whether scientists were acting responsibly by openly providing information, especially in electronic formats, that could be misused. Some within the scientific community sought to eliminate the methods sections from published papers so that experiments could not be repeated. Some openly asked whether there were areas of life sciences research that should be prohibited or specific scientific information that should not be communicated.

Meanwhile, the government has increasingly been restricting what it calls “sensitive but unclassified information.” Information has been removed from Web sites and from public access in libraries. Although the government still supports National Security Decision Directive 189, which states that research at academic institutions should be openly available to the maximum extent possible, some research contracts now permit the government to restrict the dissemination of information. Section 892 of the Homeland Security Act of 2002 requires the president to “prescribe and implement procedures under which relevant Federal agencies … identify and safeguard homeland security information that is sensitive but unclassified.”

To weigh the pros and cons of changing the nature of scientific publication, along with related issues of communication within the life sciences, the National Academies held a workshop in January 2003 that was attended by members of the academic, security, and publishing communities. At a meeting of journal editors and publishers organized by the American Society for Microbiology that immediately followed, a statement was drafted in which the participants pledged to deal “responsibly and effectively with safety and security issues that may be raised by papers submitted for publication.” It further stated: “We recognize that on occasions an editor may conclude that the potential harm of publication outweighs the potential societal benefits. Under such circumstances, the paper should be modified, or not be published.” Several journals went on to adopt formal review processes. Some viewed such processes as self-censorship; others considered them responsible citizenship. The controversy, however, did not disappear, perhaps because of the lack of guidance as to what constituted “forbidden knowledge.”

The current situation is not the first time that life scientists have faced the challenge of assessing potential harm and protecting the public. The 1975 Asilomar conference, which was convened to deal with concerns about recombinant DNA research, established the willingness of the scientific community to act responsibly to ensure that the emerging field of biotechnology did no harm. Responsibility for providing guidance on the safe conduct of recombinant DNA research was subsequently assumed by NIH’s Recombinant DNA Advisory Committee (RAC). The NIH guidelines were designed to ensure that any danger associated with a recombinant molecule or organism was contained within the laboratory. Knowledge generated by such research was to be freely disseminated. In contrast, in the current case of the threat of bioterrorism, the concern is focused largely on whether the knowledge itself might be dangerous and subject to misuse.

A 2004 NRC report, Biotechnology Research in an Age of Terrorism, tackled the critical question of how to define what is dangerous and how to design a system that contains that danger while allowing legitimate biomedical research to proceed in a manner acceptable to society. The NRC committee, chaired by Gerald Fink of the Massachusetts Institute of Technology, concluded that the release of some information could indeed be dangerous but that the United States should rely on self-governance by the scientific community to reduce the potential misuse of legitimate scientific inquiry and communication. The committee proposed a multistage system under which scientists would review experiments and their results to provide public reassurance that advances in biotechnology with potential applications for bioterrorism or bioweapons development were receiving responsible oversight. In essence, the committee adopted a bottom-up approach aimed at helping reduce the threat of misuse of the life sciences by mobilizing the scientific community to police itself.

PEER PRESSURE WITHIN THE SCIENTIFIC COMMUNITY WILL BE NEEDED TO ENSURE FULL COMPLIANCE WITH ANTITERRORISM LAWS AND BIOSAFETY/BIOSECURITY REGULATIONS THAT ARE VIEWED BY SOME AS EXCESSIVELY RESTRICTIVE AND AN IMPEDIMENT TO LEGITIMATE SCIENCE.

In taking its self-governance approach, the Fink committee argued that government regulations work best when areas of concern can be objectively defined. Self-regulation was needed because one of the defining characteristics of potential dual-use research in biotechnology is the inability to set clear boundaries around the areas of concern. Because of this, the committee said, there is a need for extensive consultation between those trying to define the sphere of concern and those in the research community who can help determine what should be constrained. The committee recommended restrictions on only a very limited sphere of information in the life sciences and proposed a system modeled after the RAC to help guide the life sciences community.

The Fink committee concluded that research that would do any of the following would be of concern:

  • Demonstrate how to render a vaccine ineffective
  • Confer resistance to therapeutically useful antibiotics or antiviral agents
  • Enhance the virulence of a pathogen or render a non-pathogen virulent
  • Increase the transmissibility of a pathogen
  • Alter the host range of a pathogen
  • Enable evasion of diagnostic and detection modalities
  • Enable the weaponization of a biological agent or toxin

The report did not suggest that research in these areas should not be conducted. Rather, it concluded that most proposed studies should go forward in an open environment because they likely would lead to the creation of beneficial knowledge. The report recognized, however, that some experiments might be readily misused and that those that might provide a roadmap for bioterrorists should be conducted in a classified environment or not at all.

The committee’s proposed oversight system within the scientific community would have layers of filters to help determine whether a particular study should be conducted and, if so, how the findings might best be published to avoid potential misuse. Journal editors and editorial boards would play a role in helping determine what information, if any, should not be published. Ultimately, though, the decision to keep information secret would rest with the individual scientists who generated the data. Work in any of the areas where experiments might be of special concern would be subject to examination by institutional oversight committees, to provide assurance that the benefits of the research would likely outweigh the dangers.

The type of system envisioned by the Fink committee has now been established at NIH. The National Science Advisory Board for Biosecurity (NSABB) will provide advice to federal departments and agencies on ways to minimize the possibility that knowledge and technologies emanating from vitally important biological research will be misused to threaten public health or national security. At the most general (strategic) level, the NSABB will serve as a point of continuing dialogue between the scientific and national security communities and as a forum for addressing issues of interest or concern. At the operational (tactical) level, the NSABB will provide case-specific advice on the oversight of research and the communication and dissemination of life sciences research information that is relevant for national security and biodefense purposes. The NSABB has embraced both of these roles by developing generic guidelines for the definition of research of concern and offering case-specific advice (for example, on whether the papers on the reconstruction of the 1918 influenza virus should be published).

An important step taken by the NSABB has been to establish a committee that will refine the definition of “dual-use research of concern.” This designation simply means that the research may warrant special consideration regarding its conduct and oversight; it does not mean, a priori, that the work should not be performed or that the results should not be published. The committee has indicated that particular concern should be focused on research areas very similar to those on the Fink committee’s list. The NSABB’s dual-use criteria are process-based rather than tied to categories of specific organisms. The board specified that agricultural as well as human pathogens must be of concern. It included a category on enhancing the susceptibility of a host population. Because of concerns about the development of synthetic biology, it also added the generation of a novel pathogenic agent or toxin, or the reconstitution of an eradicated or extinct biological agent, as a dual-use category. Interestingly, though, it chose to omit as a concern enabling the weaponization of a biological agent or toxin, presumably because it defined dual use as legitimate science that could be misused and assumed that legitimate scientists would not be involved in weaponization activities.

Recognizing that global security is at stake, scientists and policymakers are now working to establish worldwide voluntary measures to prevent the misuse of life science research, including the adoption of a code of ethics. These steps would contribute to the web of deterrence against bioterrorism and avert government-imposed restrictions on information exchange that could be draconian in their impact.

A common theme of the discussions to develop an ethics code is: First, do no harm. Beyond that, it is proving difficult to achieve consensus. However, a statement on biosecurity by the InterAcademy Panel (IAP), a global network of science academies, provides key principles:

Awareness. Scientists have an obligation to do no harm. They should always take into consideration the reasonably foreseeable consequences of their own activities. They should therefore always bear in mind the potential consequences—possibly harmful—of their research and recognize that individual good conscience does not justify ignoring the possible misuse of their scientific endeavor, and refuse to undertake research that has only harmful consequences for humankind.

Safety and security. Scientists working with agents such as pathogenic organisms or dangerous toxins have a responsibility to use good, safe, and secure laboratory procedures, whether codified by law or common practice.

Education and information. Scientists should be aware of, disseminate information about, and teach national and international laws and regulations, as well as policies and principles aimed at preventing the misuse of biological research.

Accountability. Scientists who become aware of activities that violate the Biological and Toxin Weapons Convention or international customary law should raise their concerns with appropriate people, authorities, and agencies.

Oversight. Scientists with responsibility for oversight of research or for the evaluation of projects or publications should promote adherence to these principles by those under their control, supervision, or evaluation and act as role models in this regard.

As recognized by the IAP, it is important that the scientific community assume responsibility for preventing the misuse of science and that it work with legal authorities, when appropriate, to achieve this end. Whistleblowing—the exposing of potential harms to authorities and/or the public—is an important ethical responsibility. Establishing a system of responsible authorities to whom concerns can be revealed and ensuring that whistleblowers are protected from retribution, however, remain major challenges. Peer pressure within the scientific community will be needed to ensure full compliance with antiterrorism laws and biosafety/biosecurity regulations that are viewed by some as excessively restrictive and an impediment to legitimate science. Finally, further efforts to establish a culture of responsibility are needed to fulfill the public trust and the fiduciary obligations it engenders, so that life sciences research is not used for bioterrorism or biowarfare.

Recommended Reading

  • National Research Council, Biotechnology Research in an Age of Terrorism (Washington, DC: National Academies Press, 2004) (http://darwin.nap.edu/books/0309089778/html).
  • National Research Council, Globalization, Biosecurity, and the Future of the Life Sciences (Washington, DC: National Academies Press, 2006) (http://darwin.nap.edu/books/0309100321/html/).
  • National Research Council, Making the Nation Safer: The Role of Science and Technology in Countering Terrorism (Washington, DC: National Academies Press, 2002) (http://darwin.nap.edu/books/0309084814/html/).
  • Office of Biotechnology Activities, National Institutes of Health, National Science Advisory Board for Biosecurity (2006).
  • Royal Society and Wellcome Trust, Do No Harm: Reducing the Potential for the Misuse of Life Science Research (London: Royal Society, 2004).
  • D. Shea, Oversight of Dual-Use Biological Research: The National Science Advisory Board for Biosecurity (Washington, DC: Congressional Research Service, The Library of Congress, 2006) (http://www.fas.org/sgp/crs/natsec/RL33342.pdf).
  • M. Somerville and R.M. Atlas, “Ethics: A Weapon to Counter Bioterrorism,” Science 307 (2005): 1881–1882.
  • World Health Organization, Laboratory Biosafety Manual, Third Edition (Geneva: World Health Organization, 2004) (http://www.who.int/csr/resources/publications/biosafety/en/Biosafety7.pdf).
