Using University Knowledge to Defend the Country

Everyone understands that the United States will need new ideas to meet the threat of terrorism, and indeed, history shows the way. Seventy years ago, the country’s scholars ransacked their respective disciplines for the ideas that won World War II. Academic ideas continued to produce key technologies, including hydrogen bombs and intercontinental ballistic missiles, well into the Cold War. Much work was done through the National Academies, most notably when a National Research Council report persuaded the Navy to launch its massive Polaris submarine program.

Still, that was a long time ago. How well is government using today’s academic insights to fight terrorism? Three years ago I asked 30 specialists to review what academic research has to say for a comprehensive volume entitled WMD Terrorism: Science and Policy Choices (MIT Press, 2009). As expected, we found a large, insightful body of literature, much of which drew on disciplines, from nuclear physics to game theory, that government could never have sorted out for itself. The really striking thing, however, was how often U.S. policy failed to reflect mainstream academic insights.

There are a variety of instances in which homeland security policy seems to be at odds with mainstream academic research. I will focus on three: managing public behavior after an attack with weapons of mass destruction (WMD), mitigating economic damage from a so-called dirty bomb, and designing cost-effective R&D incentives. Although these examples are important in their own right, they also point to a more systemic problem in the way federal agencies fund and use homeland security research.

Managing public behavior after a WMD attack. Washington insiders routinely claim that certain attacks on the U.S. homeland—the setting off of improvised explosive devices, assaults by suicide bombers, or the release of radioactive isotopes from a dirty bomb—would cause psychological damage out of all proportion to any physical impact. For example, Washington counterterrorism consultant Steven Emerson told CNBC in 2002 that a dirty bomb that killed no one at all would trigger “absolutely enormous” panic, deal “a devastating blow to the economy,” and “supersede even the 9/11 attacks.” Training scenarios such as the U.S. government’s Topoff exercises are similarly preoccupied with containing public panic. According to an article in the March 14, 2003, Chronicle of Higher Education, the first Topoff exercise featured widespread civil unrest after a biological attack and National Guard troops shooting desperate civilians at an antidote distribution center. As Monica Schoch-Spana of Johns Hopkins University remarked, government exercises frequently stress this image of a “hysterical, prone-to-violence” public.

The cultural roots of this expectation are deep. Indeed, scenes of rioting and chaos go back to the world-ending plague depicted in Mary Shelley’s 1826 novel The Last Man. Since then, the theme has become a stock ingredient of 20th-century science fiction, from Philip Wylie’s Cold War–era nuclear holocaust novel Tomorrow! to Richard Preston’s 1997 bioterrorism novel The Cobra Event. From the beginning, governments have shown a distinct readiness to believe such scenarios. Indeed, British Prime Minister Stanley Baldwin was already persuaded in the 1930s that public panic after air raids would deal a “knockout blow” to modern societies.

But is any of this true? Mary Shelley’s predictions notwithstanding, academics who study real disasters have repeatedly found that the public almost never panics. Confronted with this evidence, today’s homeland security officials usually argue that WMD are different. But this too is wrong. Social scientists know a great deal about how civilian populations have responded to the use of WMD, including, for example, atomic weapons at Hiroshima, wartime firestorms in Germany, and the 1918 influenza pandemic. In all of these cases, victims were overwhelmingly calm and orderly and even risked their lives for strangers. Indeed, antisocial behaviors such as crime often declined after attacks. Furthermore, social psychology studies conducted since the 1970s on public attitudes toward various WMD agents, particularly radioactivity, have reinforced the conclusion that the public would remain calm. The problem, so far, has been persuading authorities to listen. When researchers such as Carnegie Mellon University’s Baruch Fischhoff complain that U.S. policymakers are “deliberately ignoring behavioral research” and “preferring hunches to science,” their frustration seems palpable.

Facts, as Ronald Reagan (and John Adams before him) used to say, “are stubborn things.” Here, alas, facts have not been stubborn enough. It is bad enough for government officials to waste time and money on exercises that simulate a myth. The deeper problem is that such myths can become self-fulfilling. Cooperation, after all, requires a certain faith in others. Without that, even rational people will eventually decide that every man for himself is the right strategy. U.S. leaders have traditionally fought such outcomes by reminding Americans that the real enemy is fear itself, that communities are strong and can be trusted to work together. The problem is that too many officials don’t really believe this. As University of Colorado sociologist Kathleen Tierney has remarked, New Orleans’ citizens met Hurricane Katrina with courage and goodwill. This, however, did not prevent fragmentary and mostly wrong initial reports of looting from prompting a debilitating “elite panic.” The result was a hasty shoot-to-kill policy and, even worse, a systematic reallocation of government effort away from badly needed rescue operations to patrolling quiet neighborhoods. As things stand, federal leaders could easily end up repeating this error on a larger scale.

Mitigating the economic impact of dirty bombs. Almost by definition, terrorism requires weapons that have consequences far beyond their physical effects. Here, the archetypal example is a dirty bomb that spreads small amounts of radioactivity over a wide area. In this situation, medical casualties would probably be minimal. Instead, the main repercussions would involve cleanup costs and, especially, the economic ripple effects generated by government-ordered evacuations.

Eight years after September 11, economists know a great deal about how a dirty bomb would affect the economy. For example, University of Southern California Professors James Moore, Peter Gordon, and Harry Richardson and their collaborators have performed detailed simulations of dirty bomb attacks on Los Angeles’ ports and downtown office buildings. Depending on the scenario, they found that losses from business interruptions, declining property values, and so forth would range from $5 billion to $40 billion. Their models, however, are extremely sensitive to estimates of how long specific assets, notably freeways, remain closed. In practice, Moore et al. assume that first responders will ignore these economic effects and indiscriminately close all facilities within a predefined radius. Politically, this seems like a sensible prediction. But is it good policy? Government security experts frequently joke that dirty bombs are mainly effective as “weapons of mass disruption.” If so, it would make sense to use Moore et al.’s models to evaluate different evacuation plans that balance economic damage against increased medical risk. This could be done, for example, by allowing port workers and truck drivers to operate critical assets for limited periods so that essential goods continue flowing to distant factories. Ten years ago, authorities had no practical way to know which assets to keep open. Moore et al.’s work potentially changes this. The problem, for now, is that most public officials don’t know about it.
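
To see what such an evaluation might look like, here is a minimal sketch of the underlying arithmetic. Every number in it is a hypothetical placeholder rather than a figure from Moore et al.’s models; the point is only the structure of the tradeoff between closure time and monetized medical risk.

```python
# Toy comparison of dirty-bomb closure policies.
# All numbers are illustrative placeholders, NOT estimates from Moore et al.

DAILY_ECONOMIC_LOSS = 150e6        # hypothetical output lost per day of closure ($)
VALUE_PER_STATISTICAL_LIFE = 9e6   # standard regulatory figure, used for illustration ($)

def total_cost(closure_days: float, extra_deaths: float) -> float:
    """Economic losses from closure plus monetized health risk of reopening early."""
    return closure_days * DAILY_ECONOMIC_LOSS + extra_deaths * VALUE_PER_STATISTICAL_LIFE

# Policy A: blanket 90-day closure of every facility in the radius (no added risk).
# Policy B: reopen critical freight corridors after 7 days, accepting a small added risk.
policies = {
    "blanket closure": total_cost(closure_days=90, extra_deaths=0.0),
    "critical corridors reopened": total_cost(closure_days=7, extra_deaths=2.0),
}
for name, cost in policies.items():
    print(f"{name}: ${cost / 1e9:.2f} billion")
```

Even this crude model reproduces the key sensitivity: total cost is dominated by how long assets stay closed, which is why evaluating targeted reopening plans against blanket closures could matter so much.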

There is also a deeper problem: The power of dirty bombs comes from Americans’ fear of radiation. Yet many scholars argue that this fear is exaggerated. If so, government should be able to educate citizens so that dirty bombs become less frightening and also less appealing to terrorists in the first place. This, of course, is a notoriously hard problem. Thirty years ago, the nuclear power industry hired the country’s best lawyers and public relations firms to explain radiation risk and failed miserably. But that was before social psychologists developed sophisticated new insights into the public’s often subconscious response to nuclear fear. Many scholars, notably including University of Oregon psychologist Paul Slovic, think that new communication campaigns targeting these “mental models” could significantly reduce public anxiety. So far, however, homeland security agencies have shown only minimal interest.

Designing government R&D incentives. Everyone recognizes that defending the United States will require dozens of new technologies. So far, though, the record has been discouraging. Five years ago, Congress appropriated $5.6 billion for a “Bioshield” program aimed at developing new vaccines to fight biological weapons. Despite this, almost nothing has happened. Beltway pundits like to explain this failure by saying that drug discovery is expensive and more money is needed. This, however, cannot be squared with reliable estimates that put per-drug discovery costs at just $800 million; by that yardstick, $5.6 billion should have been ample for several new drugs. Instead, the real problem seems to have been an attempt by Congress to micromanage how the money was spent by, for example, specifying that procurement agencies could not sign contracts to buy drugs that were still under development. This, however, meant asking drug companies to invest in R&D without any assurance that the government would offer a per-unit price that covered their investment. Economists have long known that such arrangements can and do give government negotiators enormous bargaining leverage. Indeed, drug companies can even end up manufacturing drugs at prices that fail to cover their R&D costs. Although this might sound like a good deal for the taxpayer, drug companies were too smart to invest in the first place.
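
A stylized calculation, offered here purely as illustration rather than as anything drawn from the Bioshield record, makes the hold-up logic explicit. Suppose a firm must sink R&D cost I before any contract exists and can then manufacture q doses at marginal cost c per dose. Once I is spent it cannot be recovered, so the firm will accept any price p ≥ c; a government that bargains only after development is complete can therefore offer p = c, leaving the firm with profit

\[
\pi \;=\; q\,(p - c) \;-\; I \;=\; -I \;<\; 0 .
\]

Anticipating a loss equal to its entire R&D outlay, a rational firm never invests, which is the outcome Bioshield in fact produced.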

Predictably, Congress has since passed a second statute (originally Bioshield II, now the Pandemic and All-Hazards Preparedness Act of 2006) that offers “milestone” payments so that companies are no longer required to make all of their R&D investments up front. This, however, exposes a more fundamental problem. When Bioshield II was first being debated, Sen. Joe Lieberman (I-CT) told his colleagues that even the new incentives might not be sufficient. Instead, Lieberman warned that “only industry can give us a clear answer to these questions,” and this would require a process of “government listening and industry speaking.” Of course, Lieberman must have known that such a process would be like asking a used car salesman how much money he “needed” to close a deal. The problem was that neither Lieberman nor anyone else in Congress knew much about drug development costs. If you don’t know how much something costs, what price should you offer?

Congress never really answered this question. Instead, it outsourced Lieberman’s dilemma to the executive branch by creating a new expert body, the Biomedical Advanced Research and Development Agency (BARDA), with broad powers to design whatever incentives seemed best under the circumstances. What Congress did not explain, of course, is exactly how BARDA should go about choosing between, say, offering drugmakers a prize for the best vaccine and hiring them to do contract research. The obvious Washington temptation would be for BARDA to treat the decision as an ideological fight with, say, Reaganites favoring prizes and Obamaites calling for government-funded research. No private-sector firm would do business this way. Its shareholders could and would demand that their CEO develop a sharp-pencil business case for choosing one strategy over the other. Taxpayers should expect no less.

We already know what our hypothetical CEO would do: hire a topflight academic economist to consult. After all, economists who study “mechanism design” problems have written literally thousands of mathematically rigorous papers about how to design incentives when seller costs are unknown. Government should similarly seek out and listen to these insights.
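
One textbook screening example, again mine rather than anything BARDA has adopted, shows how this literature turns ignorance of costs into a concrete rule. Suppose the government values a new vaccine at V and believes the developer’s cost c is equally likely to lie anywhere between zero and some ceiling c̄, with V no larger than 2c̄. A posted price p is accepted whenever c ≤ p, so the expected taxpayer surplus and its maximizing price are

\[
S(p) \;=\; (V - p)\,\frac{p}{\bar c},
\qquad
S'(p) \;=\; \frac{V - 2p}{\bar c} \;=\; 0
\;\;\Longrightarrow\;\;
p^{*} \;=\; \frac{V}{2}.
\]

The particular formula matters less than the method: the offered price follows from explicit assumptions about the cost distribution rather than from hunches, and richer mechanisms such as auctions, milestone contracts, and prizes can be compared on the same footing.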

A systemic problem

The foregoing examples are just that: examples. In truth, many more insights could be mined. Possible places to look include the extensive survey literature on terrorist movements around the world; the literature on why small fringe groups persist and, especially, how they fail; and the largely forgotten history of civil defense and firefighting operations in World War II. In at least a few cases, government’s need is urgent. Nature reported on September 3, 2009, that companies that manufacture artificial DNA, which can be used to recreate the smallpox virus, were locked in a standards war over how closely to scrutinize incoming orders. Political scientists and economists have been studying when and how government can intervene in this type of industrial controversy since the 1950s. At least potentially, government could use these insights to tip the private sector to whichever outcomes it deems best for the nation. For now, though, many agencies do not seem to understand that intervention is even an option.

What has gone wrong? The basic problem is that homeland security agencies have fallen into the habit of framing research questions first and only later asking for advice. Here, the underlying rationale seems to be that university faculty are smart people who can correctly answer any question posed to them. This approach can be useful, but it scales poorly. Indeed, universities have famously rejected such methods of establishing truth—variously labeled “arguments from authority” and “scholasticism”—since the 11th century. And historically, government experiments with telling academics what to study have usually failed. During World War I, U.S. agencies inducted thousands of scientists into the Army and told them what problems to work on. Despite Thomas Edison’s leadership, the system produced few results and was widely seen as a waste of money.

There is, of course, a better way. Although no one denies the role of individual brilliance, the real power of university research comes from community. The discovery of atomic fission, for instance, would never have been possible without 40 years of sustained effort by hundreds of scientists. Similar stories could also be told for radar, operations research, acoustic torpedoes, and most other wartime triumphs. In each case, the key step was realizing that relevant academic knowledge existed and was ripe for exploitation. Crucially, however, no government R&D czar could know this in advance. In the first instance, at least, the decision to pursue a particular research question had to originate with the researchers themselves. Government’s crucial contribution was in knowing how to listen.

Notoriously, this is more easily said than done. In the end, success will depend on government R&D managers’ readiness to become better consumers of university knowledge. Still, talent isn’t everything. Institutional arrangements can markedly improve the chances of success. During World War II, individual faculty members were repeatedly urged to submit research ideas. The resulting proposals were then allowed to percolate upward through a series of peer-review committees until only the best survived. Today, something similar could be accomplished by creating faculty advisory boards that collectively represent a wide assortment of disciplines. In addition to contributing their own ideas, members would monitor their respective disciplines for new opportunities and emerging ideas. (Practically all biotechnology companies maintain academic advisory boards for exactly this reason.) Finally, government could offer “blue sky” grants that give applicants broad discretion to pick their own research topics. Once again, however, peer review would be needed to reject the inevitable bad ideas and identify gems.

Agencies, of course, can and should feel free to ask academics specific questions. The problem, for now, is that the pendulum has swung too far. During the past four years, I have been asked several times to organize “tabletop exercises” in which Coast Guard officers, say, are asked to respond to a simulated terrorist attack. The question is what I or any academic could bring to such an exercise. Does the Coast Guard really think that we can tell it where to put its machine guns? Such examples suggest that homeland security’s current emphasis on predefined, focused topics is becoming wasteful. At the same time, government’s record of eliciting the kinds of really big ideas that won World War II has been spotty.

Until recently, this analysis might have been seen as a criticism of the Bush administration. Today, however, Congress has increasingly delegated R&D incentives to new and presumably more sophisticated specialist agencies like BARDA. Furthermore, the White House is now in the hands of self-described “smart power” advocates. However tentative, these are encouraging signs. Americans have a long history of using university knowledge to defend the country. It is time that the nation reconnect with that tradition.

Cite this Article

Maurer, Stephen M. “Using University Knowledge to Defend the Country.” Issues in Science and Technology 26, no. 2 (Winter 2010).
