The Dilemma of Environmental Democracy

Scientific rigor and public participation can coexist peacefully only in the catalytic presence of trust and community.

We live in a world of manifest promise and still more manifest fear, both inseparably linked to developments in science and technology. Our faith in technological progress is solidly grounded in a century of historical achievements. In just one generation, we have seen the space program roll back our geographical frontiers, just as the biological sciences have revolutionized our ability to manipulate the basic processes of life. At every turn of contemporary living we see material signs that we human beings have done very well for ourselves as a species and may soon be doing even better: computers and electronic mail, fax machines, bank cards, heart transplants, laser surgery, genetic screening, in vitro fertilization, and, of course, the siren song of Prozac–an ever-growing roster of human ingenuity that suggests that we can overcome, with a little bit of luck and effort, just about any imaginable constraints on our minds and bodies, except death itself.

Increasing knowledge, however, has also reinforced some archetypal fears about science and technology that overshadow the promises of healing, regeneration, material well-being, and unbroken progress. Rachel Carson’s Silent Spring became a global best seller in the 1960s with warnings of a chemically contaminated future in which animals sicken, vegetation withers, and, as in John Keats’s desolate landscape, no birds sing. Genetic engineering, perhaps the greatest technological breakthrough of our age, is etched on our consciousness as the means by which ungovernable science may fatally tamper with the balance of nature or destroy forever the meaning of human dignity. Communication technologies speed up the process of globalization, but they also threaten to dissolve the fragile ties that bind individuals to their local communities.

Opinion polls and the popular media reflect the duality of public expectations concerning science and technology. A 1992 Harris poll showed that 50 percent or more of Americans considered science and medicine to be occupations of “very great prestige,” but these ratings had fallen by 9 and 11 percentage points respectively since 1977. Another survey in 1993 showed that more than 45 percent of the public felt that there would be a nuclear power plant accident and significant environmental deterioration in the next 25 years; slightly smaller percentages expected a cure for cancer and a rise in average life expectancy. And while virtually real screen dinosaurs went on the rampage in Jurassic Park, pulling in record audiences worldwide, a science reporter for the New York Times wryly commented that in the eyes of the popular media, “drug companies, geneticists, and other medical scientists–wonder-workers of yesteryear–[were] now the villains.”

But it is not only these black-and-white images of alternative technological futures that make modern living seem so dangerously uncertain. While we worry about the global impacts of human depredation–endangered species, encroaching deserts, polluted oceans, climate change, the ozone hole–we are also forced to ask questions about who we are, what places we belong to, and what institutions and communities govern our basic social allegiances. There is a mix-and-match quality to our cultural and political identities in the late 20th century. With ties to everywhere, we risk being connected to nowhere. Benedict Anderson, the noted Cornell University political scientist, alludes to this phenomenon in his acclaimed monograph Imagined Communities. Modern nationhood assumes for Anderson the aspect of the “lonely Peloponnesian Gastarbeiter [guest worker] sitting in his dingy room in, say, Frankfurt. The solitary decoration on his wall is a resplendent Lufthansa travel poster of the Parthenon, which invites him, in German, to take a ‘sun-drenched holiday’ in Greece….[F]ramed by Lufthansa the poster confirms for him…a Greek identity that perhaps only Frankfurt has encouraged him to assume.” On the other side of the Atlantic, the gifted Czech playwright, political leader, and visionary Vaclav Havel gives us his version of the modern, or perhaps more accurately the postmodern, human condition: “a Bedouin mounted on a camel and clad in traditional robes under which he is wearing jeans, with a transistor radio in his hands and an ad for Coca-Cola on the camel’s back.” The Greek and the Bedouin are citizens of a shrinking world, but their identities have been Balkanized and only imperfectly reformed through the forces of the airplane, the transistor, the Coca-Cola can, and the entire global network of technology.

In this futuristic present of ours, there are only two languages, one cognitive and the other political, that aspire to something like universal validity. The first is science. We may not understand many facts about the environment: why frog species are suddenly disappearing around the world; whether tree-cutting or dam-building is causing floods in the plains of northern India; whether five warmer-than-average years in a decade are a statistical blip or a portent of global warming; or why an earthquake’s ragged path throws down a superhighway but leaves a frail wooden structure standing intact by its side. But we do believe that scientists who are asked to examine these problems will agree, eventually, on ways to study them and will come in time to similar answers to the same starting questions. We expect scientists to see the world the same way whether they live in Japan, India, Brazil, or the United States. This is a comfort in an unstable world. As our uncertainties increase in scope and variety, we turn for answers, not surprisingly, to the authoritative voice of science.

In the domain of politics, democratic participation is the principle we have come to regard as a near universal. The end of the Cold War signaled to many the end of the repressive state and a vindication of the idea that no society can survive that systematically closes its doors to the voices and ideas of its citizens. America’s strength has been in plurality. We now see the idea of pluralism taking hold around the globe, with the attendant notion that each culture or voice has an equal right to be heard. Participatory democracy, moreover, seems at first glance to be wholly congenial with the spirit of science, which places its emphasis on free inquiry, open access to information, and informed critical debate. Historians of the scientific revolution in fact have speculated that the overthrow of esoteric and scholastic traditions of medieval knowledge made possible the rise of modern liberal democracies. Public and demonstrable knowledge displaced the authority of secret, closely held expertise. Similarly, states that could publicly display the benefits of collective action to their citizens grew in legitimacy against alternative models of states that could not be held publicly accountable for their activities.

What I want to do here is complicate the notion that the two reassuringly universal principles of science and democratic participation complement each other easily and productively in the management of environmental risks. I will argue that increasing knowledge and increasing participation–in the sense of larger numbers of voices at the table–do not by themselves automatically tell us how to act or how to make good decisions. Participation and science together often produce irreducible discord and confusion. I will suggest that two other ingredients–trust and community–are equally necessary if we are to come to grips with environmental problems of terrifying complexity. Building institutions that foster both knowledge and trust, both participation and community, is one of the greatest challenges confronting today’s human societies.

The many accents of participation

For more than two centuries, we in the United States have tirelessly cultivated the notion that government decisions are best when they are most open to many voices, no matter how technical the subject matter being considered. A commitment to broad public participation remains a core principle of U.S. environmentalism. Most recently, the environmental justice movement has reprised the belief that autocratic government produces ill-considered decisions, with little chance of public satisfaction, even when decisions are made in the name of expertise. An example from California neatly makes this point. A dispute arose over siting a toxic waste incinerator in Kettleman City, a small farming community in California’s Central Valley with a population of 1,100 that is 95 percent Latino and 70 percent Spanish-speaking. The county prepared a 1,000-page environmental impact report on the proposed incinerator, but it refused to translate the document into Spanish, claiming that it had no legal responsibility to do so.

County officials presumably felt that, having conducted such a thorough inquiry, they would gain nothing more by soliciting the views of 1,100 additional citizens possessing no particular technical expertise. But the Kettleman citizens exercised their all-American right to sue, and a court eventually overruled the county’s decision, saying that the failure to translate the document had prevented meaningful participation. In its input to California’s pathbreaking Comparative Risk Project, the state’s Environmental Justice Committee approvingly cited the judge’s decision, saying, “A strategy that sought to maximize, rather than stifle, public participation would lead to the inclusion of more voices in environmental policymaking.”

But not everybody shares our conviction that more voices necessarily make for more sense in decisions involving science and technology. At about the same time as the Kettleman dispute, public authorities in Germany were concluding that the inclusion of many voices most definitely would not lead to better regulation of the risks of environmental biotechnology. Citizen protests and strong leadership from the Green Party led Germany in 1990 to enact the Genetic Engineering Law, which provided a framework for controlling previously unregulated industrial activity in biotechnology. Responding to citizen pressure, it also opened up participation on the government’s key biotechnology advisory committee and created a new public hearing process for releasing genetically engineered organisms into the environment. These procedural innovations seemed consistent with the European public’s growing interest in participation. They were taken by some as a sign that all liberal societies were converging toward common standards of legitimacy in making decisions about environmental risk.

When put into operation, however, the biotechnology hearing requirement set up far different political resonances in Germany than in the United States. German scientists and bureaucrats were appalled when citizen participants at the first-ever deliberate-release hearing in Cologne demanded that scientific papers be translated into German to facilitate review and comment. Many of these papers were in English, the nearly universal language of science. Officials could not believe that the citizens meant their demand for translation in good faith. No court stepped in, as at Kettleman City, California, to declare that the public had a right to be addressed in its language of choice. Instead, critics of citizen-group involvement denounced the request for German translation as a diversionary tactic that emphasized “procedure” and “administration” at the expense of “substance.” The German government concluded that the hearing requirement could not advance the goal of informed and rational risk management. An amendment to the law in 1993 eliminated the hearing requirement just three years after its original enactment.

It would be easy at this point to draw the conclusion that the Germans were wrong and that full participation, in the sense of including more citizens on their own terms in technical debate, would simply have been the right answer. Indeed, this was the position taken by a thoughtful German ecologist I met in Berlin. A member of the state’s biotechnology advisory committee and a participant in the unprecedented Cologne hearing, he was not much worried by the unruly character of those proceedings. He commented that in matters of democracy Germany was still a novice, with ten years of lessons to learn from the United States. In time, he suggested, people would lose their discomfort with the untidiness of democracy, and more public hearings with multiple voices would prevail on the German regulatory scene.

But questions about science and governance seldom conform to straightforward linear ideas of progress toward common social and cultural goals. A different point of view guides participatory traditions in the Netherlands, a country that cannot be accused of lacking either environmental leadership or democratic participation. In the Netherlands, as in most of Europe, the concept of public information concerning environmental risks has evolved quite differently from the way we know it in the United States. Americans demand full disclosure of all the facts, whereas Europeans are often content with more targeted access to information. The contrast is quite striking in the context of providing information about hazardous facilities. We in the United States have opted for a right-to-know model, which declares that all relevant information should be made available in the public domain, regardless of its complexity and accessibility to lay people. The Europeans, by contrast, prefer a so-called “need to know” approach, under which it is the government’s responsibility to provide as much information as citizens really need in order to make prudent decisions concerning their health and safety.

To an American observer, the European approach looks paternalistic and excessively state-centered. It delegates to an impersonal and possibly uncaring state the responsibility for deciding which and how much information to disclose. To a European, the American approach seems both costly and unpredictable, because it assumes (wrongly, many Europeans think) that people will have the resources and capacity to interpret complex information. Empirically, the American assumption is clearly not always consistent with reality. The Spanish speakers in Kettleman City were fortunate in receiving help and support from California Rural Legal Assistance, but others, less well situated, could well have failed to access the information in the environmental impact report. Europeans also see the U.S. position as privatizing the response to risk by making individuals responsible for acquiring and acting on information. This approach, to them, threatens the ideal of community-based approaches to solving communal problems. It also potentially overestimates the individual’s capacity to respond to highly technical information.

The idea of participation, then, comes in many flavors and accents. What passes as legitimate and inclusive in one country may look destabilizing and anarchical in another, especially when the subject matter is extremely technical. Different models of participation entail different costs and benefits. Inclusion in the American mode, for example, is expensive, not only because resources are needed to make information widely available (the Kettleman environmental report illustrates this point) but also because, as we shall see, including more perspectives can increase rather than decrease the opaqueness of decisions. It can add to our uncertainties about how to proceed in the face of scientific and social disagreement without offering guidance on how to manage those uncertainties.

Openness and transparency

Risk controversies challenge the intuitive notion that the most open decisions–that is, those with the most opportunities for participation–necessarily lead to the greatest transparency–that is, to maximum public access and accountability. Let us take as an example the case of risk assessment of chemical carcinogens as it has developed in the United States. Back in the early 1970s, Congress first adopted the principle that government should regulate risks rather than harms. This approach represented an obvious and appealing change from earlier harm-based approaches to environmental management. It explicitly recognized that government’s job was to protect people against harms that had not yet occurred. In an age of increasing scientific knowledge and heightened capacity to forecast the future, compensating people for past harms no longer seemed sufficient.

The principles that federal agencies originally used to regulate carcinogens in the environment were relatively simple and easy to defend. They were rooted in the proposition that human beings could not ethically be exposed to suspected carcinogens for experimental purposes. Hence, indirect evidence was needed, and results from animal tests began to substitute for observations on humans. As initially construed by the Environmental Protection Agency (EPA), only about seven key principles were needed to make the necessary extrapolations from animal to human data. These were easily understandable and could be stated in fairly nonmathematical descriptive language. For example, EPA decided that positive results obtained in animal studies at high doses were reliable and that benign and malignant tumors were to be counted equally in determining risk.
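To convey the flavor of those original principles, here is a minimal sketch of the kind of linear, no-threshold extrapolation they imply. Every function name and number below is a hypothetical illustration, not an actual EPA value or procedure.

```python
# Minimal sketch of a linear, no-threshold extrapolation from animal
# bioassay data to a human risk estimate. All names and numbers are
# hypothetical illustrations, not actual EPA values or procedures.

def potency_slope(tumor_fraction: float, dose_mg_per_kg_day: float) -> float:
    """Crude potency estimate: excess tumor incidence per unit dose,
    assuming the dose-response curve is linear down to zero (no threshold)."""
    return tumor_fraction / dose_mg_per_kg_day

def lifetime_risk(slope: float, human_dose_mg_per_kg_day: float,
                  interspecies_factor: float = 1.0) -> float:
    """Lifetime excess cancer risk at a low human dose. The
    interspecies_factor stands in for an animal-to-human adjustment
    (e.g., one based on body weight or surface area), itself a
    judgment call rather than a settled scientific fact."""
    return slope * interspecies_factor * human_dose_mg_per_kg_day

# Suppose 20 percent of test animals develop tumors (benign and
# malignant counted equally) at a high dose of 50 mg/kg-day:
slope = potency_slope(tumor_fraction=0.20, dose_mg_per_kg_day=50.0)
risk = lifetime_risk(slope, human_dose_mg_per_kg_day=0.001)
print(f"estimated lifetime excess risk: {risk:.0e}")  # 4e-06, about 4 in a million
```

The arithmetic is trivial, and that was the point: each step of the early framework could be stated and defended in plain language.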

Very quickly, however, EPA had to start refining and further developing these principles as it came under pressure from industry and environmentalists to explain its scientific judgments more completely. The principles grew in number and complexity. Qualitative assessments about whether or not a substance was a carcinogen gradually yielded to more probabilistic statements about the likelihood that something could cause cancer under particular circumstances and in particular populations. Paradoxically, as risk assessment became more responsive to its multiple political constituencies, it became less able to attract and hold their allegiance.

In the 20 years or so since its widespread introduction as a regulatory tool, cancer risk assessment has evolved into an immensely complex exercise, requiring sensitive calculations of uncertainty at many different points in the analysis. Agencies have committed ever more technical and administrative resources to carrying out risk assessments. Industry and academia have followed suit. New societies and journals of risk analysis have sprung up as the topic has become a focal point of professional debate. But as the methods have grown in sophistication, the process of making decisions on risk has arguably grown both less meaningful to people who lack technical training and less responsive to policy needs.
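A toy Monte Carlo exercise, of the sort widely used in quantitative uncertainty analysis, suggests why the refinements compounded rather than tamed uncertainty. The distributions below are invented for illustration; the point is only that modest uncertainty at each of several steps multiplies into a wide spread of final risk estimates.

```python
# Toy Monte Carlo illustration of how uncertainties at several points
# in a risk assessment compound. All distributions are invented for
# illustration; none reflect any real assessment.
import random

def one_risk_estimate() -> float:
    # Each input is a draw from a hypothetical uncertainty distribution
    # rather than a single agreed-upon number:
    potency = random.lognormvariate(-5.5, 1.0)    # slope factor, (mg/kg-day)^-1
    exposure = random.lognormvariate(-7.0, 0.8)   # human dose, mg/kg-day
    scaling = random.uniform(0.5, 2.0)            # interspecies adjustment
    return potency * exposure * scaling

random.seed(0)  # reproducible draws
estimates = sorted(one_risk_estimate() for _ in range(10_000))
print(f"median risk:          {estimates[5_000]:.1e}")
print(f"95th-percentile risk: {estimates[9_500]:.1e}")
# The two summary numbers typically differ by roughly an order of
# magnitude, so which one "is" the risk becomes a political question
# as much as a technical one.
```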

EPA’s dioxin risk assessment of the mid-1990s is a case in point. Nearly three years in the making, this 2,000-page document incorporating the latest science was published ten years after the agency’s first risk assessment for this compound. Yet public reception of the document was already skeptical, even jaundiced. A cover story in the journal Environmental Science and Technology struck a common note of disbelief: sitting before a crowd of puzzled-looking members of the public, a man holds up a document reading, “Dioxin Risk: Are We Sure Yet?” In and around Washington, people began to talk about the dioxin risk assessment as an example of hypersophisticated but politically irrelevant analysis.

Evidence from focus groups and surveys shows that the public increasingly does not understand, and feels little confidence in, the methods used by experts to calculate risk. Sociologists of risk have argued that this gap between what experts do and what makes sense to people accounts for a massive public rejection of technical rationality in modern societies. More prosaically, the growing expenditure and uncertainty associated with risk assessment have led to disenchantment with its conclusions in national and local political settings. The dioxin assessment is a good indication that certainty, in scientific knowledge and political action, may have to be achieved through means other than extensive, methodologically perfected, technical evaluations of risk.

Conflict and closure

We have stumbled, it seems, on a hidden cost of unconstrained participation. There is considerable evidence from a wide range of environmental controversies that open and highly participatory decisionmaking systems do much better at producing information than at ending disagreements. When issues are contested, neither science nor scientists can be counted on to resolve the uncertainties that they have exposed. Uncertainties about how much we know almost invariably reflect gaps that cannot be filled with existing resources, and this lack of knowledge relates to our understanding of nature as well as society. At best, then, scientists can work with other social actors to repair uncertainty. Put differently, producing more technical knowledge in response to more public demands does not necessarily lead to good environmental management. We also need mechanisms for deciding which problems are most salient, whose knowledge is most believable, which institutions are most trustworthy, and who has the authority to end debates.

The metaphor of “dueling” between experts is often heard in the world of regulation. Pressure has grown on the scientific community to supply plausible, quantitative estimates of likely impacts under complex scenarios that can never be completely observed or validated. Modeling rather than direct perceptual experience underlies decisions in many fields of environmental management, from the control of carcinogenic pesticides to emissions-trading policies for greenhouse gases. Such modeling often produces a policy stalemate when scientists cannot agree on the correctness of alternative assumptions and the numerical estimates they produce. How are decisions reached under these circumstances?

In one instructive example, a dispute arose between the U.S. Atomic Energy Commission (AEC) and Consolidated Edison (Con Ed), a major New York utility company, concerning the environmental impacts of a planned Con Ed facility at Storm King Mountain. Scientists working for the AEC and Con Ed sought to model the possible effects of water withdrawal for the plant’s cooling system on striped bass populations in the Hudson. The result was a lengthy confrontation between “dueling models.” Scientists refined their assumptions but continued to disagree about which assumptions best corresponded to reality. Each side’s model was seen by the other as an interest-driven, essentially unverifiable surrogate for direct empirical knowledge, which under the circumstances seemed impossible to acquire.

In the end, the parties agreed to a greatly simplified model by shifting their policy concern away from hard-to-measure, long-term biological effects and focusing instead on mitigating short-term detriments. The new model stuck more closely to the observable phenomena than the earlier, more sophisticated but untestable alternatives. It became possible to compile a common data base on the annual abundance and distribution of fish populations. Relying on a technique called “direct impact assessment,” the experts now came to roughly similar conclusions when they modeled specific biological endpoints. In this case science did eventually contribute to closing the controversy, but only after the underlying policy debate was satisfactorily reframed.
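A toy calculation suggests why tying the analysis to observable quantities made agreement possible. The formula and numbers below are hypothetical stand-ins, not the actual “direct impact assessment” methodology used in the Hudson River case.

```python
# Toy version of an impact estimate tied to observable quantities.
# The formula and numbers are hypothetical stand-ins, not the actual
# "direct impact assessment" methodology used on the Hudson.

def seasonal_loss_fraction(plant_intake_m3_day: float,
                           river_flow_m3_day: float,
                           vulnerable_fraction: float) -> float:
    """Fraction of a year class lost to the cooling-water intake,
    assuming losses scale with the share of river flow withdrawn and
    with the fraction of young fish distributed near the intake."""
    return (plant_intake_m3_day / river_flow_m3_day) * vulnerable_fraction

# Flows are measured directly; the vulnerable fraction comes from the
# shared data base on annual fish abundance and distribution:
loss = seasonal_loss_fraction(plant_intake_m3_day=4.0e6,
                              river_flow_m3_day=2.0e8,
                              vulnerable_fraction=0.3)
print(f"estimated seasonal loss: {loss:.1%}")  # 0.6% of the year class
```

Because every input is observable, experts with divergent interests could at least agree on what evidence would change their estimates.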

The public controversy over the ill-fated Westway project in New York City provides another example. Contention centered in this case on a plan to construct a $4-billion highway and waterfront development project along the Hudson River, creating prime new residential and commercial real estate but also changing the river’s course, permanently altering the shape of lower Manhattan, and influencing in unknown ways the striped bass population. Two groups of expert agencies attempted to assess Westway’s biological impact. Favoring construction were several state and federal “project agencies”: the U.S. Army Corps of Engineers, the Federal Highway Administration (FHWA), and the New York State Department of Transportation; urging caution were three federal “resource agencies”: EPA, the Fish and Wildlife Service, and the National Marine Fisheries Service. Their differences crystallized around an environmental impact statement (EIS) commissioned by FHWA that declared the proposed project area to be “biologically impoverished” and hence unlikely to be harmed by the proposed landfill. Between 1977 and 1981, the resource agencies repeatedly criticized the EIS, but the project agencies pushed ahead and, in March 1981, acquired a landfill permit for Westway.

It was an unstable victory. A double-barreled review by a hostile federal court and the House Committee on Government Operations uncovered numerous methodological deficiencies in FHWA’s biological sampling methods. What began as an inquiry into the scientific integrity of the EIS turned into a probing critique of the moral and institutional integrity of the project agencies, particularly FHWA and the Army Corps. Congressional investigators concluded that both agencies had violated basic canons of independent review and analysis. Their science was flawed because their methods had not been sufficiently virtuous. The House report accused the project agencies of having defied established norms of scientific peer review and independence. For example, the Army Corps had turned to the New York Department of Transportation, “the very entity seeking the permit,” for critical comment on the controversial EIS.

Doubts about the size, cost, irreversibility, and social value of the proposed development helped leverage the scientific skepticism of the anti-Westway forces into a thoroughgoing attack on the proponents’ credibility. Under assault, and with rifts exposed between their ecology-minded and project-minded experts, the project agencies could defend neither their intellectual nor institutional superiority. Their scientific and moral authority crumbled simultaneously, and their opponents won the day without ever needing to prove their own scientific case definitively.

Responding to uncertainty

Science and technology have given us enormously powerful tools for taking apart the world and examining its components piecemeal in infinite, painstaking detail. Many of these, including the formal methodologies of risk assessment, enhance our ability to predict and control events that were once considered beyond human comprehension. But like the ancient builders of the Tower of Babel, today’s expert risk analysts face the possibility of having their projects halted by a confusion of tongues, with assessments answerable to too many conflicting demands and interpretations. The story of EPA’s dioxin risk assessment highlights the need for more integrative processes of decisionmaking that can accommodate indeterminacy, lack of knowledge, changing perceptions, and fundamental conflicts. If, after 15 years and 2,000 pages of analysis, one can still ask “are we sure yet,” then there is reason to wonder whether the basic prerequisites for decisionmaking under uncertainty have been correctly recognized.

I have suggested that to be sure of where we stand with respect to complex environmental problems we need not only high-quality technical analysis but also the institutions of community and trust that will help us frame the appropriate questions for science. To serve as a basis for collective action, scientific knowledge has to be produced in tandem with social legitimation. Insulating the experts in closed worlds of formal inquiry and then, under the label of participation, opening up their findings to unlimited critical scrutiny appears to be a recipe for unending debate and spiraling distrust. This is the core dilemma of environmental democracy.

The task ahead, then, is to design institutions that will promote trust as well as knowledge, community as well as participation–institutions, in short, that can repair uncertainty when it is impossible to resolve it. We know from experience that the scale of organization is not the critical factor in achieving these objectives. Authoritative knowledge can be created by communities as local, self-contained, and technically “pure” as a research laboratory or as widely dispersed and overtly political as an interest group, a social movement, or a nation-state. The important thing is the organization’s capacity to define meaningful goals for scientific research, establish discursive and analytic conventions, draw boundaries between what counts and does not count as reliable knowledge, incorporate change, and provide morally acceptable principles for bridging uncertainty. There are many possible instruments for achieving these goals, from ad hoc, broadly participatory hearings to routine but transparent processes of standardization and rule implementation. But these methods will succeed only if scientific knowledge and the shared frames of meaning within which that knowledge is interpreted are cultivated with equal care.

Fortunately for our species, environmental issues seem exceptionally effective at engaging our interpretive as well as inquisitive faculties. Communities of knowledge and trust often arise, for example, through efforts to protect bounded environmental resources, such as rivers, lakes, and seas, which draw experts and lay people together in a mutually supportive enterprise. The formal, universal knowledge of science combines powerfully in these settings with the informal, but no less significant, local knowledge and community practices of those who rely on the resource for recreation, esthetics, livelihood, or commerce. International environmental treaties provide another model of institutional success. Here, the obligation to develop knowledge in the service of a prenegotiated normative understanding keeps uncertainty from proliferating out of control. In the most effective treaty regimes, norms and knowledge evolve together, as participants learn more about how the world is (scientifically) as well as how they would like it to be (socially and politically). Last but not least, environmental policy institutions of many different kinds–from established bodies such as EPA’s Science Advisory Board to one-time initiatives such as California’s Comparative Risk Project–have shown that they can build trust along with knowledge by remaining attentive to multiple viewpoints without compromising analytic rigor.

Science and technology, let us not forget, have supplied us not merely with tools but with gripping symbols to draw upon in making sense of our common destiny. The picture of the planet Earth floating in space affected our awareness of ecological interconnectedness in ways that we have yet to fathom fully. The ozone hole cemented our consciousness of “global” environmental problems and forced the world’s richer countries to tend to the aspirations of poorer societies. These examples illuminate a deeper truth. Scientific inquiry inescapably alters our understanding of who we are and to whom we owe responsibility. It is this dual impact of science–on knowledge and on norms–that environmental institutions must seek to integrate into their decisionmaking. How to accomplish this locally, regionally, and globally is one of the foremost challenges of the next century.
