The U.S.-Mexico connection
Christopher Wilson’s “U.S. Competitiveness: The Mexican Connection,” (Issues, Summer 2012) seeks to open our minds to an alternative way to think about competitiveness. Too much of the debate on competitiveness has swung between those who stress deregulation and those who advocate a form of industrial policy. Wilson suggests that an alternative path should aim at deepening integration between the United States and Mexico. He is right, but his insights would be more powerful if they were stretched to include all of North America and if we looked beyond joint manufacturing to a fundamentally different continental relationship.
Instead of thinking about Mexico as a source of grisly drug-related violence and poor illegal aliens, Wilson reminds us that Mexico is our second-largest market, the second-largest source of energy imports, the destination of about $100 billion in U.S. direct investment, and a growing source of foreign direct investment. Canada, however, is the largest market for U.S. goods, and it is also the largest source of U.S. energy imports.
Our continent is the largest free-trade area in the world, but its leaders have failed to build on the early success of the North American Free Trade Agreement (NAFTA). Although the North American share of global production increased from 30 to 36% between 1994 and 2001, it has since shrunk back to where it was before 1994.
Wilson proposes that the United States and Mexico expedite plans to make the border more efficient, expand transportation links and infrastructure, dismantle rules-of-origin provisions one product at a time, and confront shared challenges in Asia and elsewhere. These are all good ideas, but they would be much better if Canada were an integral part of the strategy.
The problem is that Canada shows little interest in collaborating with Mexico, and the United States seems to prefer to approach one country and one problem at a time. Because each of the problems involves powerful domestic or bureaucratic interests that prefer the status quo, we have made almost no progress on Wilson’s agenda.
What is needed is a farsighted vision that recognizes that the best way to improve competitiveness is to create a seamless continental market. The best way to ensure American security is to turn the borders into a bridge of cooperation where officials of all three countries work from the same set of procedures. The best way to ensure that Mexico moves to the first world economically and that Canada designs new trilateral institutions is for the United States to adopt a new approach to leadership that respects its neighbors as partners in the global competition with Asia.
The only way to achieve these goals and move forward is to begin with the idea of a continent of promise. If all three countries begin to see their challenges as North American, they will find continental solutions where they previously saw only chronic national problems.
The conventional wisdom in the United States about NAFTA and Mexico has never been accurate. For years, the prevailing view in many circles has been that NAFTA cost the United States millions of jobs that relocated to Mexico seeking lower labor costs and an environment with lower health and safety standards. A more accurate description of what actually happened is that NAFTA extended the competitive life of thousands of U.S. companies and jobs by allowing them to relocate part of their production to Mexico, where labor costs were more competitive.
Christopher Wilson’s article does an excellent job of describing how the “Mexican Connection” has served U.S. economic interests. The evidence that he presents illustrates some of the many ways in which the trade and investment flows that NAFTA enabled caused the two economies to become much more closely integrated, in a relationship that is best characterized as a partnership and not a competitive rivalry.
The evidence is clear that NAFTA has been highly beneficial to both economies, improving the competitiveness of entire economic sectors and extending the life of thousands of companies that might otherwise have failed, on both sides of the border. The impact of NAFTA has been much greater than simple trade statistics suggest: It created incentives that caused entire sectors to restructure operations, leading thousands of companies on both sides of the border to specialize in activities in which their relative costs and productivity made them more competitive. The restructuring process caused Mexico’s economy to become more specialized in labor-intensive manufacturing activities (such as the assembly of autos) while providing incentives for U.S. companies to focus on high–value-added activities (such as the design and manufacture of high–value-added components). What happened as a result of NAFTA is exactly what trade theory predicted should happen.
The global economic crisis has made specialization even more valuable. To succeed in the current economic context, it is indispensable to ensure that products are highly competitive. Hence, producing in locations where labor and logistics costs are lowest has become imperative. For this reason, Wilson argues that greater attention must be paid to ensuring that the cost of transporting goods between the three NAFTA partners is optimized, but without putting the security of any of the partners at risk. He also argues that it is vitally important to ensure that the NAFTA partners develop a shared view of their global trade interests, acting in concert in trade negotiations.
Wilson is absolutely right, but why stop there? His recommendations focus primarily on ensuring that the partnership continues to work effectively in support of shared production processes. But focusing exclusively on tradable commodities can cause the governments of the three countries to overlook enormous competitive opportunities in their service sectors.
The opportunities in services are enormous. For instance, rising U.S. health costs could be contained by allowing health care providers in Mexico and Canada to compete in the United States. Currently, both countries are capable of providing high-quality health services at much lower costs than the United States.
The opportunities do not stop there: Mexico’s economy would gain competitiveness if it had access to the world-class engineering and construction services available in the United States and Canada; and the three partners would gain competitiveness by integrating their electricity sectors or agreeing to the same technological standards in petroleum services and telecommunications. U.S. agriculture would remain competitive if a formula could be found to make inexpensive farm labor systematically available when and where it is needed.
Wilson has written a very valuable essay. His arguments are correct, but the opportunities are actually much broader than those that he points to and should be captured for the good of the three partners in the region.
Drug policy research
In “Eight Questions for Drug Policy Research” (Issues, Summer 2012), Mark A. R. Kleiman, Jonathan P. Caulkins, Angela Hawken, and Beau Kilmer argue that more of the research on drug abuse should be guided by the goal of reducing social costs. Those costs are associated not only with the multiple impairments of those who abuse, but also with the violence associated with underground drug markets and the great burden that drug enforcement places on the criminal justice system.
Last February, the director of the National Institute on Drug Abuse (NIDA) responded to the president’s budget proposal for fiscal year 2013 with a statement that NIDA’s research priorities were to develop better medications to combat addiction, support basic research on genetics and brain functioning, and translate scientific evidence into improved clinical treatment. These goals reflect the medical model that the problem is “addiction,” which is a “disease” that can be combated through treatment or prevented through vaccines. That may be a reasonable focus, given the NIDA mission, but we are left with the question of which scientific agency is going to engage with the dominant approach of drug policy as it actually exists in the United States today. Hundreds of thousands of drug offenders, mostly dealers, are arrested, convicted, and imprisoned each year in the quest to reduce the availability of drugs. Could scientific research improve on this program, in the sense of developing a more cost-effective approach to reducing abuse and its costly consequences? The answer is surely yes, and the Kleiman et al. article provides some promising leads.
A case in point is HOPE, which the authors briefly discuss. Stunning results with “coerced abstinence” for felony convicts were reported from a randomized controlled trial in Honolulu. The U.S. Department of Justice Office of Justice Programs deserves much credit for funding a new experimental replication in four sites. The basic approach may seem to contradict the medical model, because the only mechanism by which the threat of swift and sure punishment could reduce drug use is the volition of the drug-abusing convicts. HOPE suggests that much of the drug abuse in this population could be eliminated by getting the incentives right, and at no net cost to society. If that result is replicated, it will surely inspire discomfort with the traditional medical model, while providing directly relevant guidance to policymakers.
The HOPE story has a larger moral to it. Translational research is usually understood in terms of scientific understanding, as developed in the laboratory, being put to work in shaping practice. But the apparent success of coerced abstinence is an example where the “translation” should go in the opposite direction. If brain science ignores the role of volition in drug abuse, then it is missing out on what appears (from this field experience) to be an important mechanism, and one that is particularly relevant to policy design. The translation effort should be in both directions. We need to make sure that federal funding agencies encourage this exchange.
In “Communicating Uncertainty: Fulfilling the Duty to Inform,” (Issues, Summer 2012), Baruch Fischhoff offers a cogent and balanced set of actions that experts and decisionmakers can take to improve the way in which uncertainty is handled when experts are asked to inform decisions. In particular, he highlights how the concepts and consequences of uncertainty, probability, and confidence are often misunderstood and sometimes conflated, and how these same problems provide insights into the many potential strategies for addressing them.
I suggest several points of clarification that both strengthen and complicate Fischhoff’s arguments. First, policy- and decisionmaking face increased scrutiny from the public, and this scrutiny can lead scientists and decisionmakers to approach uncertainty in ways not fully addressed by Fischhoff. Skepticism about climate change is an excellent example of how the public can become aware of an issue but profoundly misunderstand the nature of uncertainty within it. The political views that this misunderstanding motivates can then drive policy and decisionmaking, even when policymakers actually have a deeper understanding than the public of how uncertainty should alter decisions, because they ultimately respond to their constituents. Scientists may also begin to present uncertainty differently in the face of this public response, essentially trying to influence decisions to be more in line with where the experts think they should be, knowing the public’s misunderstanding. Effective decisionmaking clearly requires that scientists and decisionmakers also engage the public about the realities and consequences of uncertainty.
In conservation biology, my research field, issues of uncertainty play out in concrete and visceral ways; for example, when decisions must be made about how to manage threatened or endangered species. Mishandling of uncertainty in such decisions affects not only people but also the species faced with potential extinction. Helen Regan and colleagues have provided an excellent overview of the types of uncertainty that arise in these contexts and suggested concrete solutions to them. They highlighted two classes of uncertainty: epistemic and linguistic. The former has to do primarily with the challenges in science of accurately and precisely measuring things that cannot be perfectly observed or counted, such as measurement error, natural variation, and potential observer bias; the latter touches on many of the issues Fischhoff highlights regarding different uses and understandings of concepts and words. I suggest this taxonomy of uncertainty as an additional resource, not as something to replace or supersede that offered by Fischhoff, because it can help to further clarify where and why problems arise for experts and decisionmakers in addressing uncertainty.
In the end, even if all aspects of uncertainty are clearly articulated and understood, there will still be potentially large differences in how people choose to operationalize that uncertainty. People have very different levels of risk tolerance, and decisionmaking under uncertainty is ultimately a proposition of taking risks. It is difficult to change people’s risk tolerance, but decisionmaking could be significantly improved if the many suggestions Fischhoff offers were regularly implemented.
Baruch Fischhoff’s excellent article on communicating uncertainty resonates with my experience in the U.S. government and prompts me to underscore two of his observations. Decisionmakers often receive a surfeit of advice but a paucity of information. Advice is usually accompanied by arguments to justify or discredit particular options. Good arguments can win debates but cannot guarantee good decisions. Good decisions, or at least well-informed decisions, require more information and better understanding of the problem. Wise decisionmakers turn to experts not because they can give better advice, but because they can enhance understanding of the matter to be decided.
Experts are most helpful—and most valuable—when they communicate what is known, what is not known, the volume and quality of information, and degrees of certainty and uncertainty. It often takes an expert to explain why abundant information sometimes provides little insight into a situation, or when experience, theory, and a few clues permit confident judgments about what is happening and why. Decisionmakers value assessments provided by experts, but they value at least as much what experts can tell them about confidence levels, probabilities, and uncertainties. If information and analysis are the ice under decisions, experts owe it to decisionmakers to tell them how thick or thin that ice is.
Experts who fail to convey uncertainties and confidence levels effectively have disserved those they are supposed to assist as surely as do experts who omit key facts or fail to challenge decisionmaker assertions they judge to be wrong. Fischhoff is correct that analysts and other experts often are reluctant to reveal what they do not know for fear that doing so will undercut their supposed expertise or convince customers that they have been less than diligent when pursuing answers to difficult questions. Conversely, decisionmakers often fail to press experts for more information about their judgments for fear of appearing less knowledgeable than they should be or risking discovering that there is little support for the position they hold.
Fischhoff also touches upon the two-audiences problem. One audience is the decisionmaker with whom an expert works on a regular basis. Familiarity and trust facilitate clear communication, often without the need to rehearse points discussed previously. The more effectively decisionmakers communicate what they want to know, what they want to achieve, and what they think they already know, the greater the ability of experts to correct mistaken understanding, address critical issues, and ensure that the decisionmaker understands the reasons for high or low confidence in specific judgments.
The second audience consists of decisionmakers and experts who do not know one another and generally communicate indirectly. Unless precautions are taken, there is considerable risk that materials tailored for a specific and trusted customer will be misunderstood by other recipients. This is a particular problem with respect to communicating uncertainty, because what is conveyed in writing to a trusted interlocutor able to ask questions directly is likely to be too sketchy to convey uncertainties and confidence effectively to third parties. Conversely, materials prepared for general audiences risk being too diffuse to meet the needs of busy customers. This makes it even more important to provide clear confidence and probability assessments.
Baruch Fischhoff and his colleagues at Carnegie Mellon University have consistently been among the strongest advocates for the proposition that acknowledging uncertainty is a strength, not a weakness, of any analysis. His article is once again in the vanguard, as he emphasizes that much of the blame for a vicious circle of overconfident assessments and precarious decisions rests at the feet of the recipients of risk and economic analyses, who have often welcomed overconfidence and rewarded candor by throwing the messengers under the proverbial bus. I especially applaud his observation that some analytic disciplines are complicit by their eagerness to “oversell their wares.” I am currently leading a National Science Foundation project on the difficulty of making sound cost/benefit decisions when regulatory economists report costs with less rigor (and especially with less attention to uncertainty) than their counterparts report risk and benefit information.
But I hope Fischhoff would agree that overconfidence is not the only trap we must avoid, and that scientific and economic information are not the only areas where uncertainty matters.
First, what Fischhoff calls “exaggerated hesitancy” he might more bluntly have called “manufactured uncertainty” (a term coined by epidemiologist David Michaels) or perhaps “strategic diffidence.” We expect some scientists with financial stakes in thwarting regulation to emphasize potentially exculpatory information (or to concoct exculpatory theories) to extend the “left-hand tail” of the uncertainty range for risk. But I believe we should also expect scientists who are public servants to resist the temptation to go along to get along by accepting more and more “underconfidence” uncritically. For example, the Environmental Protection Agency has always declined to quantify the uncertainty around its estimates of carcinogenic potency, preferring to present a single “plausible upper bound” number but to always surround it with the caveat that the potency “could be as low as zero”—whether or not that deflating verbal hedge has any reasonable basis in theory or evidence. The agency is now poised to issue a sweeping repudiation of its own 30-year track record of requiring regulated industries to present compelling evidence in order to supplant time-tested default assumptions with new assertions of safety. Sometimes, wider confidence bounds can reflect acquiescence rather than humility, and can be just as fatal to understanding as narrow or nonexistent bounds.
Second, misinterpreting expressions of uncertainty, even when they are correctly assessed, can lead decisionmakers and the public astray. Many distributions of uncertainty (and inter-individual variability) are right-skewed, but how many decisionmakers (or experts) understand that in such cases, a high percentile such as the 90th may yet be an underestimate of the expectation of the entire distribution? And most nontrivial decisions require the comparison of two or more quantities or decision alternatives: Which of two risks is larger, or which side of the decision ledger (costs versus benefits of action) is greater? Suppose the uncertain costs of reducing a particular environmental hazard follow a “normal” (bell-shaped) distribution, with a mean of $5 billion and confidence bounds of $1 billion and $9 billion. Suppose further that the monetized benefits of doing so have exactly the same uncertainty distribution (same mean and same upper and lower bounds). Is it correct to overlay and subtract the two distributions and tell the public that there is zero net benefit or net cost, regardless of whether we reduce or ignore the hazard? The fact is that unless the uncertainties happen to be tightly correlated with each other, it is much more honest to say that the consequences of controlling this hazard are so uncertain that they could benefit society by as much as $8 billion or could instead cost us as much as $8 billion (high-minus-low bounds or low-minus-high). Analysts and decisionmakers have to get better, among other examples, at explaining the vast distinction between “inconsequential” and “either terrible or terrific, with equal probability.”
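The arithmetic behind this point can be checked with a quick simulation. The sketch below uses the hypothetical figures from the example above ($5 billion mean, bounds of $1 billion and $9 billion) and assumes, for illustration only, that those bounds correspond to roughly two standard deviations around the mean of a normal distribution:

```python
import random
import statistics

random.seed(0)
N = 100_000
MEAN, SD = 5.0, 2.0  # $billions; bounds of 1 and 9 read as mean +/- 2 SD

# Costs and benefits drawn independently: identical marginal distributions,
# yet their difference is far from a certain zero.
net = [random.gauss(MEAN, SD) - random.gauss(MEAN, SD) for _ in range(N)]
print(f"mean net benefit:   {statistics.mean(net):+.2f}")   # near 0
print(f"spread (SD) of net: {statistics.stdev(net):.2f}")   # about 2.83, not 0

# Perfectly correlated case: the same draw for costs and benefits,
# so the uncertainty cancels entirely and the net truly is zero.
same = [random.gauss(MEAN, SD) for _ in range(N)]
net_corr = [b - c for b, c in zip(same, same)]
print(f"correlated-case SD: {statistics.stdev(net_corr):.2f}")
```

When the two quantities are independent, the spread of their difference is the root-sum-square of the individual spreads (about 1.4 times larger than either one), so the net outcome ranges over several billion dollars in either direction; only when the uncertainties are tightly correlated does subtraction legitimately yield a confident zero.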
I also see two areas where we need to apply Fischhoff’s wisdom about uncertainty more broadly. When you are making a decision for someone else (or trying to provide them with the raw materials to decide), not knowing their actual preferences creates an uncertainty that is tempting to ignore or average away. There is a growing literature, for example, on the fallacy inherent in medical guidelines such as the rule of thumb that pregnant women over age 35 should strongly consider amniocentesis. This age may indeed be the point at which the miscarriage risk of the procedure itself and the age-related risk of carrying a fetus with Down syndrome are equal, but the single number is only the point where the expected harms are also equal for a woman who regards the two adverse outcomes as identically dire. There really is a distribution of tipping points that traces the distribution of personal ethical judgments across the population.
This leads to the expansion I believe Fischhoff also impatiently awaits: the marshaling of uncertainty to describe choices, not just evidence. I think we need new language that treats important decisions for what they are: attempts to minimize the probability and/or the severity of the inevitable occasions when we will choose wrongly. Only honest and comparative statements of uncertainty in outcomes can help us make the right choices for the right reasons and console us when the best we can do turns out to be not good enough. Analysts and decisionmakers need to collaborate more, not just to understand things better, but to choose more wisely. Fischhoff’s recommendations put us squarely on the path to that goal.
The trouble with STEM
It has become popular to combine science, technology, engineering, and mathematics into the acronym STEM when discussing education (see Robert D. Atkinson’s article “Why the Current Education Reform Strategy Won’t Work,” Issues, Spring 2012). But as an engineer, I have some difficulty with STEM. Science requires curiosity to produce new knowledge, technology requires skill, mathematics requires logic, but engineering requires creativity and judgment. Engineering, to be sure, requires science, technology, and mathematics (STM) as tools to ensure that a design will be successful, but the first step in the design process is invention: the critical beginning of a new idea. In this respect, engineering is different from STM.
As a child, I had many interests and dreamed that someday I would become either a scientist or an artist. That dream persisted until high school, when I learned about engineering. The path I chose combined both of my passions into one profession.
The humanitarian and artistic side of engineering is often downplayed, if not ignored, in engineering education. Engineering science courses, essentially applied physics, use mathematical tools almost exclusively to solve problems. These engineering science courses give nascent engineers the technological skills to analyze a trial design to assess the likelihood of success. But the design process involves synthesis as well as analysis, and the synthesis part of this process is not given the same attention as the analysis part. We engineering educators depend very much on the native creativity of our students to be able to synthesize.
Along with synthesis, engineers must learn judgment: how to go about the process of determining whether or not a new concept will work, often before the mathematical analysis stage. Judgment requires experience, and experience usually involves failure on somebody’s part. If that failure does not happen in school, where its consequences are limited, then that failure, along with the experience and engineering judgment that grow from it, has to come in the workplace. That is why employers of just-graduated engineers often give their new employees mentored dummy projects to work on during the first six months of their tenure. An engineer who has not developed engineering judgment by that time is either not retained or faces potentially more catastrophic failure later in her/his career.
The Accreditation Board for Engineering and Technology requires the inclusion of humanities courses in engineering curricula. The reason is clear: Engineers often need to include nonscientific, non-mathematical, even nontechnological features in their designs. There are artistic, historical, political, psychological, financial, spatial, and other elements that must be considered for a prototype engineering design to be successful. There have been many examples of new products that have failed because they didn’t look right, were named wrong, didn’t feel right, or were too complicated. All of these fit into the category of engineering judgment.
The reason that the “E” in STEM should be distinguished from the STM part is that engineering can appeal to those who are not so good at STM, but who have superior skills or interest in the softer side of engineering. They can become great patent lawyers, engineering managers, recruiters, teachers, insurance adjusters, human factors engineers, or sales/applications engineers. Some engineering disciplines are more people-oriented than others. I have had “C” students in my Transport Processes and Electronic Design classes who could talk a lawyer into the ground. With the technical backgrounds that they had barely mastered, they used the softer skills in which they excelled to find successful careers on the fringes of engineering. We need these kinds of people, too.
The American Association of University Women’s 2010 report Why So Few? Women in Science, Technology, Engineering, and Mathematics deplores the fact that it is difficult to attract young women into STEM fields. One reason the report gives is young women’s stronger interest in people and interpersonal relationships, which are not usually associated with STEM fields. But if it were better known that engineers do not have to be nerds and recluses, that as engineers they could have a rich involvement with other people, then the “E” part of STEM might attract more women and men to the profession. This is the problem with the acronym STEM; it does not do justice to the full range of opportunities available in engineering.