In “Leaving No-Woman’s Land” (Issues, Fall 2024), Jewel Kling, Sara Collina, and Lindy Elkins-Tanton provide a resoundingly accurate description of the current state of women’s sexual health. As a women’s health internist specializing in sexual health, I regularly meet women struggling with sexual health concerns who have been neglected, ignored, and gaslighted by the medical community. They often present with low libido, trouble with arousal or orgasm, or pain with sex. When women report such concerns, their clinicians too often tell them the conditions are normal or all in their heads.
As Kling, Collina, and Elkins-Tanton recount, many factors have led the medical community to leave women’s sexual health in the dark, preventing women from accessing comprehensive, compassionate care. One factor is the lack of medical education in women’s sexual health at the medical school, residency, fellowship, and attending (independent physician) levels. I personally did not learn about women’s sexual health until my women’s health fellowship, eight years into my medical training. In addition, there is insufficient insurance reimbursement for sexual health-related visits and too few medications approved by the US Food and Drug Administration for female sexual dysfunction. There are two FDA-approved medications for hypoactive sexual desire disorder—low desire/libido—but they are approved only for premenopausal women. There are no approved treatments for low libido in postmenopausal women, compared with the 26 or more medications approved for treating male sexual problems. Finally, and critically, there is a paucity of women’s sexual health research.
I spend most patient visits explaining that there is insufficient research on women’s sexual health to enable me to counsel women on the true benefits and risks of treatments. Therefore, I am often stuck practicing anecdotal medicine, treating the patient in front of me based on the stories and experiences of other patients and colleagues, as opposed to clear, well-formulated science. In most other fields of medicine, patients are provided with well-formulated treatment plans based on the gold standard of medical research: randomized controlled trials (RCTs). RCTs establish a clear, unbiased relationship between a treatment and its effectiveness, allowing clinicians to provide their patients with efficacy rates and true benefit-versus-risk data—information vital to informed decisionmaking by both clinicians and patients.
I spend most patient visits explaining that there is insufficient research on women’s sexual health to enable me to counsel women on the true benefits and risks of treatments.
In women’s sexual health, due to the lack of funding for RCTs, clinicians often must rely on “observational” research that does not allow us to know the true, unbiased effectiveness of a treatment or intervention. This kind of research leads us to make treatment decisions without being able to speak confidently about outcomes to our patients. It leaves women’s sexual health management in a vacuum, where different medical professionals recommend different treatments based on sparse science. This frustrates me as a sexual health doctor because it often confuses patients, leads to misinformation about treatments, and can erode the patient-clinician relationship and trust when patients receive contradictory recommendations from other medical professionals or from social media.
The US Department of Health and Human Services’ commitment to provide $100 million for women’s health research is a helpful start, and I am hopeful it will ultimately lead to more evidence-based recommendations and FDA-approved treatments for women’s sexual health disorders. I also encourage medical educators to include women’s sexual health at all levels of medical education. Without this, students and clinicians will be unable to provide sufficient care to their patients, and women will continue to suffer from sexual dysfunction.
Talia Sobel
Assistant Professor of Medicine
Women’s Health Internal Medicine
Mayo Clinic Arizona, Scottsdale
Imo Nse Imeh: Monuments to Our Skies
Imo Nse Imeh, Study, Residence of Evermore, 2024 (1 of 2 panels), oil paint, India ink, acrylic ink, charcoal on unstretched canvas, 61 x 40 inches.
During the summer of 2020—amid the pandemic, lockdowns, and widespread displays of racially motivated violence that appeared on screens across the United States—artist Imo Nse Imeh was deeply engaged with questions of faith, trust, belief, and redemption, particularly in the context of Black communities. Monuments to Our Skies is the result.
Each canvas in this series offers a unique combination of material and form. For Imeh, the choice of materials is crucial, as it influences both the process and the level of control he can exert. He often begins with India and acrylic inks, applying them to the unstretched canvas in dynamic sweeps of color, some more controlled than others, that can be both additive and destructive. He then uses charcoal and graphite, simple yet powerful materials that can tell complex stories and exhibit an immediacy unique to drawing and draftsmanship.
From these materials, Imeh creates both figures and what he calls “bio forms”: structures and patterns from which the figures emerge and into which they can disintegrate. He does not view the bio forms as merely abstract; they are a direct product of his drawing process, making them concrete and real. As he puts it, they are “elements of things that are alive.” Like the bio forms, the figures are more than representations or abstractions; they are ways of seeing the mind at work, a channeling of memory and thought through the actions of the artist’s hand and eye.
Imo Nse Imeh, Journey to the Sun (work in progress), 2024, oil paint, charcoal, India ink, and acrylic ink on canvas, 50 x 72 inches.
Imo Nse Imeh, Harvester Angel, 2022, charcoal, India ink, and acrylic ink on canvas, 48 x 48 inches.
Imo Nse Imeh, Untitled, 2024, charcoal and acrylic ink on canvas, 36 x 36 inches.
Reindeer!
In Alaska, reindeer are much more real than the fantasy animals that pull Santa’s sleigh. Introduced to Alaska from Siberia by the US government in the 1890s, reindeer were part of a strategy to solve food shortages among the Native peoples after the gold rush. Today, reindeer provide food security and economic opportunities for the Alaskan Native community. Even more so than farming, reindeer herding requires a deep understanding of the needs of Indigenous communities and academic science—as well as how to navigate and influence local, state, and federal policies.
On this episode, host Lisa Margonelli is joined by Jacqueline Hrabok and Bonnie Scheele of the University of Alaska Fairbanks’s High Latitude Range Management program to learn more about the interplay of science, policy, and community in reindeer herding.
This is our final episode of 2024. We’ll be back in late January for an interview with opera singer and actress Renée Fleming and neurology professor Susan Magsamen on the intersection of music, art, and health. And we would love to explore more local science policy issues in our upcoming episodes! Write to us at podcast@issues.org about any policy developments happening near you.
In Alaska, reindeer are much more real than the fantasy animals that pull Santa’s sleigh. They were introduced to Alaska by the US government in 1891 with the aim of providing food security and economic opportunities to the Alaska Native community. Today, herding reindeer requires a deep understanding of Indigenous community needs, academic science, and how to navigate local, state, and federal policies.
I’m Lisa Margonelli, editor-in-chief of Issues. I’m joined by Jackie Hrabok and Bonnie Scheele who run the University of Alaska Fairbanks High Latitude Range Management program. Jackie, Bonnie, I’m so excited to talk to you today. To get us started, could you both tell me a little bit more about yourselves and how you got involved in reindeer?
The work that we do together is about food production, food sovereignty, using the entire animal as a livelihood, which draws in that relationship between science and art, chemistry and art.
Hrabok: My dream was always to go from Canada to Alaska, and that’s where the first experience began, being an intern at the University of Alaska Fairbanks at their large animal research station in Fairbanks, where I had the opportunity to work with reindeer and caribou and muskox. And then it progressed to a focus on the health and disease of reindeer with the Swedish University of Agricultural Sciences and their veterinary program in large animal science and veterinary medicine, looking specifically at how reindeer are so important in the livelihoods of Indigenous Sámi reindeer herders. And that took me to Finland, which continued a really in-depth partnership providing opportunities, through bachelor’s degrees, associate degrees, and certificates, for students across the Circumpolar North to learn from each other what reindeer husbandry means to them.
Margonelli: That’s very interesting because you’re sort of touching on how reindeer science is an international thing.
Hrabok: A lot of the work that we do together is about food production, food sovereignty, using the entire animal as a livelihood, which draws in that relationship between science and art, chemistry and art. So reindeer are still very much important in numerous Indigenous communities around the world, in areas where the range and the natural pastures are large enough to support a commercial meat production industry.
Margonelli: Thank you. I want to ask Bonnie. I think you maybe had a different path into reindeer.
Scheele: Yes, absolutely. So my name is Bonnie Scheele. And an introduction in our culture would lead with my name and then I would introduce as far back as I could for my family relations. So I am the fourth generation in my dad’s side of my family for reindeer herding. But I actually found out last year that my great-grandfather was a company herdsman when reindeer herding was set up differently, and his name was Elmer Davis. And so very much likely my grandfather grew up reindeer herding and my dad grew up reindeer herding. My parents both passed away in 2021. And so I inherited the herd from there. And I am involving my kiddos the way that my parents involved me, the way that their parents involved them with reindeer herding. And so it’s a legacy herd that we have and that we’re continuing on.
Bonnie Scheele (left) and Jacqueline Hrabok (right) from University of Alaska Fairbanks’ High Latitude Range Management program.
Margonelli: So Jackie’s talked a little bit about the role of reindeer in food supplies and culture and chemistry and economics. Tell me a little bit about what it feels like to be with the reindeer. What was it when you were a kid that made you feel good about being with the reindeer?
Scheele: So I really appreciate that aspect from Jackie. Our family, we’ve always had a research element involved with our reindeer herding. There were a lot of scientists in my life, and they would always be collecting samples from the reindeer, counting the reindeer, figuring things out. But then for my side of it, being a reindeer herder is totally culturally tied. It’s a little bit different from what we’ve done from time immemorial for gathering food; it’s a little more directed in an agricultural sense, but it ties in super closely with how we are as Indigenous people connected to the land. And in being connected to the land, we have the stewardship responsibility for sustainability, because as I mentioned with our legacy herd, it’s our responsibility to pass on these resources to the next generation. So it’s really important to continue and maintain our culture, our identity, with science, because we can move forward together with one supporting the other.
Margonelli: Well, that’s really interesting because a lot of times scientists think about doing policy as the scientists travel out from the university. But it seems very much like the scientists and the Indigenous communities and reindeer herders are very much interwoven, and this has taken place over generation after generation. Before we get into the aspects of how this collaboration works, can you tell me just a little bit more about what’s going on with the reindeer? I think there are maybe 20,000 reindeer on the Seward Peninsula and they are all managed by Indigenous family herders.
Scheele: It’s a little bit of a different structure. I’ve been referring back to how, prior to the current structure, it was a company herdsman setup. Then in the 1960s, all of that was restructured and the entirety of the Bering Strait region was split up into grazing ranges. At that point, you had to apply for a grazing range permit. And I’m really thankful and grateful to my grandfather for going and applying for a permit. When he did so, he was able to select the Nome area for our family. And that’s been in our family since he applied for that permit. Other areas of the Bering Strait region, all the way down to St. Michael and Stebbins, have different make-ups of how they are composed. Some areas work with their tribal entities and with individual herders. And quite a few of them are composed of individual herders and families, ideally passing it on to the next family member.
Margonelli: And then some of the policy is set by the Bureau of Land Management?
Rhonda the Reindeer on Midnite Sun Reindeer Ranch. She is about the size of a large sheep.
Scheele: Yeah. Depending on which area your grazing range permit sits in, that’s how you would deal with those land managers and the interests that they hold. There’s the Bureau of Land Management, there’s the Bureau of Indian Affairs, there’s the State of Alaska, and there’s the National Park Service for some of them. And then a huge amount of the land is also controlled by the tribal entities for each area, so you actually have to draw up a separate contract with those tribal entities as well. So there are a lot of moving parts to this. And there is an entity known as the Alaska Reindeer Council that is composed of all of these state and federal entities that come together, and they have a memorandum of understanding to be able to present a unified picture to the herders. Whoever is the largest land manager in your grazing range area leads your permit application, with the others following suit with whatever their criteria are for that area.
Margonelli: So just to drill down a little bit further, how big is a reindeer and how much do they need to eat?
Hrabok: The reindeer that we see up in this Bering Strait region, the females are roughly about 250 pounds. The males, whether castrated or a bull, well, they can be close to about double that.
Margonelli: So at 250 pounds, they’re like the size of a large American sheep or a young cow.
Scheele: I don’t think either of us would know. (laughs)
Margonelli: Okay. All right. Fine. (laughs) So let’s continue on. So during the summer, they eat grass and stuff. And then during the winter, they eat lichen?
Hrabok: Reindeer will selectively feed month by month. According to when there is a real surge in nutrients from the tundra or from the environment where they’re living, they selectively choose what they need to be in prime fitness. If you were to come here and watch reindeer graze, this is the way to do it: you just come sit down, get your binoculars, and observe the natural behavior of reindeer. In the summertime, some will select leafy greens, where the most protein is coming from, even from a leafy willow. So really rich green items. And then they will go on through the summer into fall, selectively feeding on mushrooms. Throughout the wintertime, what’s available after all the nutrients have gone back down into the roots? Well, there’s lichen. So reindeer will feed and move in the winter months according to the tundra areas that have a richer abundance of lichen species.
In traditional Indigenous reindeer husbandry culture, the reindeer graze naturally and sustain themselves in their natural environment.
So reindeer are telling you, the herder, what their preferred diet is for every month of the year. And as the owner, it’s up to you to guide your reindeer. Usually, in traditional Indigenous reindeer husbandry culture, the reindeer graze naturally and sustain themselves in their natural environment. Whether that’s on the tundra, in a boreal forest, or in an alpine taiga environment, the goal is to keep your animals in the best condition from the naturally provided vegetation in that habitat.
Scheele: In saying that, reindeer are the only sustainable livestock agricultural resource for Alaska, because there are no extra conditions that we have to create for them. And that’s why our grazing range permits are set the way they are for optimum performance: with all of the land managers, and based on the science that has come together, we base the numbers for each grazing range on the winter feed, which is lichen. From that availability, the number of deer that can be on that grazing range is determined, to prevent overgrazing of the lichen.
Margonelli: The reindeer are very different from other range livestock, which might be eating just a few types of foliage. Here they’re choosing. I read somewhere that they can also smell the lichen under the snow. How do they find the lichen under the snow?
Cladonia uncialis, one of the species of lichen eaten by reindeer.
Hrabok: Reindeer have highly adapted vision. They detect ultraviolet radiation, meaning that if you imagine a landscape just covered in snow, and there might be a little bit of lichen sticking up somewhere, that lichen is absorbing ultraviolet radiation. And reindeer are able to find it based on very good vision from the sides of their head all the way, of course, to the front. They’re able to move quite great distances. They’re able to access the prime quality of food at any given time throughout the month.
The only case where that might not be possible is when you actually have fences that separate reindeer herding cooperative areas. In that kind of system of raising your reindeer, the size of your grazing permit controls the availability of natural forage in those areas, and you won’t have such a wide variety of species richness and diversity naturally occurring for your reindeer. But here in Alaska, in the Bering Strait region, where reindeer have so much more access to naturally growing vegetation on the tundra, their diet isn’t restricted by the size of the pasture. So they’re always on the go, in search of a lichen corridor, in search of the best-growing food at any point in time.
Margonelli: Cool. So I wanted to talk a little bit about how scientists and the communities work together. You both work at the High Latitude Range Management Center at the University of Alaska Fairbanks. And maybe you can tell me about what the High Latitude Range Management Center does and how it sort of fits into the sort of reindeer ecosystem, human and reindeer, in Alaska.
Hrabok: The High Latitude Range Management program is an opportunity for students to get an introduction to the skills that guide reindeer herding and reindeer health: range management, herd health management, and learning about the current policies of the government agencies that are all brought together through the Alaska Reindeer Council. The program offers coursework for the occupational endorsement and the certificate program to get hands-on training. You have all these options, depending on how reindeer are most important to you. And that also varies among all the different Indigenous peoples around the Circumpolar North and the different students and other entities that we work with through the High Latitude Range Management program.
Students in the High Latitude Range Management Program’s Health Issues in Domesticated Ungulates Class.
Margonelli: So, Bonnie, tell me what you do with the program.
Scheele: I think my most important accomplishment is to really have the herders realize the value of this High Latitude Range Management program as a tool for their success, for their herds, for their communities. The prior program to High Latitude Range Management was the Reindeer Research Program, based out of UAF in Fairbanks. And that was all about every aspect of what a reindeer experiences as a captive deer. That was invaluable information, but it doesn’t really apply to 99% of what we as Indigenous herders do with our deer on our range. Having that information available through Jackie’s expertise, with what she’s studied over her career, is completely invaluable when we need to move deer and handle them in those captive settings. And with the occupational endorsement that Jackie mentioned, these courses are set up as intensives for the benefit of the herder.
Margonelli: So this is people who are really working in the field. They come in and then they have this intensive experience where they’re learning about veterinary science for the reindeer, they’re learning about legal structures, they’re learning about slaughterhouse rules, they’re learning about all kinds of branding, they’re learning about all of these different things at the same time.
It is really developed for the herders so that this information can continue to be passed on in this formalized setting.
Scheele: Well, all of these are based on everything that was taking place beforehand, and then they were implemented into a more formalized structure to be passed on to the next generation of herders, because there was a gap for our family in our everyday interaction with herding. My parents were really trying to remember all the aspects of reindeer herding. At the time, Dr. Greg Finstad had heard that my dad was taking over the herd, and he approached them and said, “Hey, I have this one-time course for you to participate in. If you take part in it, then you’ll be good to go for herding.” And it is really developed for the herders so that this information can continue to be passed on in this formalized setting, where we don’t have to figure it out all over again or search really hard over and over for all of this information.
So when we go to our older herders—some of our oldest herders are in their 80s, all the way down to our youngest herders, who are in their single-digit years—you don’t have to go through so much of the information over and over and over with them, wearing them out. The core herd management and husbandry practices are encapsulated. So now you’re just going over their history with them: What is their history of herding? What worked for them in their specific region? That is super important to know, because each region is totally different from the others.
One of the things that we don’t cover in HLRM, but that as a herder you really have to cover, is the influence of predators on your herd. We have four-legged and two-legged predators: bears, wolves, and human beings who will poach your deer. Caribou are predatory to our animals in their own way, because they will come in and take off with your herd, and there’s no recourse for the herders to recover from that. That’s one of the policy things we’re hoping to change: giving herders the ability to recover. We had a huge gap in the 90s when the caribou herd got driven down because of the availability of winter feed. They came through the Seward Peninsula and scooped up huge numbers of the reindeer. Some herders are still recovering from that now in 2024. And it shouldn’t be like that. We should be able to go back, get a seed herd, and start our herd again.
Margonelli: First of all, it was really interesting to hear how much the community has influenced the class and has influenced the way that science is integrated. And something that I think was mentioned is that there’s a class in writing across contexts that the students take and that is about preparing to engage with scientists.
Now, we have the language. Now, we as herders can know and understand what they’re saying to us in their ever-changing policy lingo.
Scheele: Because of all of the report writing that we as herders have to do to report back to the agencies, it’s super important to know the lingo. I think this goes back to where you were asking me what I see as my role in this. Well, when my parents were trying to figure out how to handle all of this, what the rules are, and how to engage… and that’s just one aspect of it. That’s not even going out and herding the reindeer, which is a whole ’nother aspect of it. So we come into this course, and what it boils down to, what Jackie can convey to the students, is that you do have to deal with the state and federal entities on a quarterly, if not more than quarterly, basis, where you’re letting them know what your needs are and they’re conveying to you what the requirements are for policy.
Now, we have the language. Now, we as herders can know and understand what they’re saying to us in their ever-changing policy lingo, and we can try to keep on top of it and turn around and say, “Actually, because of all the science that has occurred since the 1970s, we have the foundation here to request that you change this policy. It’s backed up.” Without both of those integrations, we wouldn’t be able to go where we’re going. And most of that science is totally backed up by what we as Indigenous herders have observed our reindeer doing out on the land; we bring it to the attention of the researchers and say, “Hey, we need you to quantify this so that we can put it down as data to be used.”
Margonelli: Let’s talk a little bit about how that policy gets made because you’ve got federal policy entities, you’ve got local. And how do you change the policy about chasing into the caribou herd and separating out your reindeer?
Scheele: With state and federal issues, one of the biggest things that we as herders want to know, that we ask constantly for, is information on the movement of the caribou herds. Since the 90s, when the big scoop of the herds came in, there have been collared animals in the caribou herds, so they’re able to keep track of where those herds are. And I think an earlier question or comment you had made was, “How many are there?” Both caribou and reindeer have dips and cycles in their numbers, and right now we’re seeing a really big dip in both. A lot of that has to do with climate change. Unfortunately, we’re being really highly impacted by all of that.
And so the availability of the lichen for the caribou has been driving them into other areas that they wouldn’t normally be in. And so we’ve been told as herders that we would have this information available to us or they would inform us of where the caribou herds are moving. And then we as herders have participated in collar programs also with our reindeer. We know where our reindeer are at. But in relation to where the caribou movement is happening, it’s good to have collars on our deer also so we can see where that happens.
And then it just becomes an issue of making sure that the information is strictly used for herd management and not for any other purpose. So we would sign agreements to that effect. And if we can’t come to an understanding with a state or federal entity, then we go back to the research, take a look at what’s occurred in the past and the movements of those animals, and point out and say, “Well, in the next policy change, we would like to include the reasons why we would like to have access to this information.”
If there is new policy under discussion, then we all discuss it so that everyone uses their expertise in the group, and we bring that back into the classroom.
Hrabok: Each year, there is the annual Alaska Reindeer Council meeting. It’s a gathering, usually in early spring or late winter, of the representatives of the various government agencies, plus staff and faculty and students from the High Latitude Range Management program, Bonnie and myself. And together, we present an annual report of what each government entity has done, what sort of research projects they might have done. That is shared among everybody. If there is new policy under discussion, then we all discuss it so that everyone uses their expertise in the group, and we bring that back into the classroom.
Now, for more than half my life, I have been involved directly, as my livelihood: as the researcher, as the professional student, as the lead faculty member developing curricula with government agencies and working with the herders. What are some of the challenges? How can technology be incorporated into the classroom? How can grant writing skills be incorporated into our coursework to train the next generation in the communities to seek out and apply for their own funding for their specific needs?
Some of the skills that I have developed over roughly 20 years come from the Indigenous Sámi reindeer husbandry range; that’s my connection, bringing them from Norway, Sweden, and Finland. There is the Sámi Education Institute in Inari, Finland, and they also have northern campuses in Toivoniemi. And from that partnership with the University of Alaska Fairbanks, we’re able to have coursework that is created around the needs of students, from the policymakers, from the researchers, from the senior herders themselves, the parents of the students, who are 20, 30, 40, 50, 60 years old. The whole point of this is feeding yourself, clothing yourself, doing those same things for others, and bringing in an income so that you can buy those other things that you need to be content. But maybe more important is also thriving and having that passion, knowing what you love to do, and having some funding come in that allows you to do more of it.
And one area that we see as very important all around the Circumpolar North is handicraft production. There are so many people in each of the communities who have these absolute master craft skill levels, whether it’s hand stitching, carving, tanning leathers and furs, or making jewelry, that is bringing in additional income, using your passion to share it among your community members, and also helping to get other grants that combine science with art. I thought it was important just to bring up that connection: all these things we’re talking about, reindeer, reindeer, reindeer, reindeer, but it’s people, and the interaction between the people and everything and everyone that you live with, to thrive.
Margonelli: That’s really beautiful. That oftentimes gets left out of science policy, even though it is, after all, the point.
Hrabok: Because my family weren’t reindeer herders, I started off as being the… “Bachelor’s degree! I need to challenge more. I’m going to be a mad scientist.” That was one of my dreams as a 16-, 18-year-old. And then it switched: “I’m going to get my PhD.” But why? Where does that come from? I wanted to be the first person in the family to get that higher education. And then the huge change point for me was where all the research was done, and the direct experience of these research projects that occur on Indigenous lands: there has been a lack of communication between those who are creating the research on the Indigenous reindeer research range (the lands of the Sámi or the Yupik, Iñupiat, St. Lawrence Island Yupik) and the communities living in those areas.
You can’t know what it’s really like if you’re not there, living daily with the people and animals in those communities that you’re trying to create policy for.
And that’s where, instead of being just the student studying, research, research, I married into it. And now the shift is no longer research, research, “What else can we find or do?” Now, it’s, “Wait a minute, what do the people who are living with reindeer as their livelihood need? Is there something there that could be partnered with us, the researchers?” And that’s where the main change of my life occurred, back… I can’t remember how old I was, early 30s.
And by living in these small rural communities that you can only fly in to, there’s such a difference between being the researcher who gets to fly in four times per year for a week and living there and experiencing, “We haven’t had a plane with food on it for four days.” That’s your guarantee: you as a community member are going to practice subsistence hunting and berry picking and mushroom picking, and you are learning to live from the land with everyone else in the community. And until a researcher really gets that experience, I think, you don’t know. You can’t know what it’s really like if you’re not there, living daily with the people and animals in those communities that you’re trying to create policy for.
So there’s such goodness among people in this open communication, these conferences, and these annual meetings. I have seen through life how important it is to have representation from all of the people whose livelihood this is. And when you incorporate all the youth, those teenagers that might not really want to be right there then because something else is going on, you bring them. You always bring the elders into the classroom, because without the elder, the researcher doesn’t know what is already known in that area. And that is the goodness, that is the passion, of that connection between the research, the community members, and the land mammals that we have all been speaking about, between policy and higher education and workforce development.
Scheele: We as reindeer herders do not get the subsidies that other livestock producers receive. Everything that we’ve done is completely out of pocket. There are some grants that cover some of our activities, but they can be really difficult. And I do understand there are difficulties in other aspects of livestock agriculture within the US. But of the available funds that go around, we haven’t qualified to be able to utilize them. So we have been working with the Alaska delegation—senators and House representatives—and with tribal relations out of the USDA to include reindeer with exempt status in the Tribal Food Bill for the Farm Bill, so that we would have a commercially graded product to sell. That would ensure our food security and our food sovereignty for our regions. And we were this close to seeing it go through.
With all of those aspects changing, our main food sources are being decimated, and so we want to be able to offer reindeer as another answer: here’s food security, here’s food sovereignty.
And so now, with all of the change that’s occurring, we’re going to start again. But with Indigenous people, you’ll find that we’ve been doing this since time immemorial: re-educating and starting over and making sure that it happens. So we’ll just go at it from another angle, with another tactic, to be recognized as having sovereignty over our food. Part of the reason this is so important is that food security in Alaska is not the same as in the lower 48. We are a subsistence-based culture here. Our salmon are affected, our sea mammals are affected. One of the few things that we can potentially have more control over is reindeer. Reindeer can be a food source. Birds are affected by all the things that have changed over the last 100 years with the climate. And so with all of those aspects changing, our main food sources are being decimated, and so we want to be able to offer reindeer as another answer: here’s food security, here’s food sovereignty.
We’re looking at anywhere between $7 and $25 a gallon for fuel out here. Having access to fuel affects everything in your daily activities. And when you’re paying that much for fuel, then guess what? You’re also paying that much for shipping and trying to get things up here. And in Alaska, with how much food we bring into the state, we’re totally reliant on the planes, as Jackie was saying. If the plane doesn’t show up, because we’re not on a road system for most of our communities, you won’t have milk, eggs, chicken, pork, or beef for weeks on end. And even when they do come in, you go and look at a package of chicken that has two chicken breasts and you’re paying $32, $45 for that package of food. And I’ve got how many kids or how many family members in my home to feed? But when you know that there is somebody in your region who has a herd of reindeer available and they’re working hard to make sure that food is available to you and your family, great.
And then on top of that, like Jackie was saying, you’re going out and you’re picking the berries. You’re out there doing the subsistence work of getting the birds, the geese, the ducks. You’re looking for caribou. You are going out and finding ptarmigan. You’re making sure that you’ve got your moose tag for that year. And if you’re lucky enough, you’re also making sure you got a muskox tag for that year to be able to put this stuff away for the winter.
But then the other element to that is that all these communities run on generators. So when you have power infrastructure that is 50 years old and you’re experiencing power outages on a weekly basis, all of a sudden your freezer full of food is out. In the past, we as Indigenous people really relied on making food caches in the tundra to keep our food frozen. Well, we can’t even do that anymore, because now the ground is warming up and it won’t stay at freezing temps for a food cache. So it’s just really important that we overcome these challenges in a way that benefits our communities and works for them as we keep moving forward. We’ll keep persevering and we’ll make a way to make it happen.
Currently, for commercial product, we have to have 32 degrees or colder outside—snow on the ground—and the deer have to be frozen and quartered. And as a herder, when you’re limited to that, your price per pound is so low, while your fuel costs are so high, that you can’t even pay the people who are helping you make this food available. One of the things that we would like to see is to be able to sell USDA prime cuts. With Jackie’s help, and with the researchers and everything that’s occurred, we know we can get a really good price for USDA prime cuts. We know we can sell that and make money off of it to afford the fuel, to pay the people, and to create a reindeer economy. So it’s not super bleak. I mean, we’ve been doing this since the 1890s. We’ll keep doing it and we’ll keep continuing on, but this is our livelihood. We didn’t get into it to get rich quick. We’re doing these legacy herds to continue on because it’s so close to what we live every day in our Indigenous lives. This is who we are. This is where we come from. It matches so closely, and we’re going to persevere.
Margonelli: That’s wonderful. This has been an amazing conversation. I just want to quickly make sure that we get one thing from Bonnie about how the reindeer is one of the few areas of reparations. So do you want to just talk about that really briefly?
That whole action that started with Captain Michael Healy was an aspect of reparations from the US government even before Alaska was a state. And it still continues today, and we’re still utilizing it.
Scheele: Yeah, sure. So earlier, we were talking about how the reindeer were brought over. Captain Michael Healy had noted that during the gold rush and the whaling periods taking place in Alaska, with a huge influx of people into these regions, the subsistence lifestyle wasn’t able to sustain the Indigenous populations anymore. So he went and approached Reverend Sheldon Jackson in the 1890s and said, “I think introducing reindeer to the Indigenous populations of these regions of Alaska would be really beneficial.” When that occurred, when Sheldon Jackson went to Congress and secured appropriated funds, they were able to bring the reindeer over from the Chukotka Peninsula in Russia, and then they set up these reindeer stations, umbrellaed under the Bureau of Indian Affairs. Then there was a mentorship program between the Indigenous herders and the BIA, so that the Indigenous herders would be able to take over these herds and use them for their own purposes, for their communities.
That whole action that started with Captain Michael Healy was an aspect of reparations from the US government even before Alaska was a state. And it still continues today, and we’re still utilizing it. We’re very protective of the fact that we as Indigenous herders have this resource, and of managing it sustainably for our future generations.
Margonelli: Thank you both. It has really been a pleasure talking to you. It’s also been really inspiring. I mean, we spend all of our time talking to people about how to create better policy between scientists and communities and really to create a better life. Issues has a mission statement about how can science be used to create a better life for more people. And what you’re talking about, what you’re doing, what you’re living is a really amazing expression of that. And thank you very much.
Scheele: Yeah. Thank you for having us. We’re really thankful to be able to share what we do with reindeer husbandry.
Hrabok: It’s important that people do share and visit when possible, face-to-face, to come see the animals and the communities and the students on the land, because then you can see. You know the people in your community, and you know there’s someone there who is maybe really good with gaming and someone who’s really good with the local language. So maybe you can teach through whatever is currently the popular way, to meet the needs of the communities. We love what we do. It needs to be fun, with lots of learning, lots of sharing, and you get to work with everybody. It is important to have that unity and to have these discussions so that policy is made to keep in line with where everyone wants to go. You can’t please everybody, but at least everybody is considered in the next step of the direction of where things might be going.
Margonelli: To learn more about reindeer herding and the interplay of science, policy, and community at the local level, visit our show notes.
Please subscribe to the Ongoing Transformation wherever you get your podcasts. And write to us at podcast@issues.org about other topics you’d like to explore. What exciting policy developments are happening in your hometown? And they don’t even have to be exciting. We’ll still be interested.
Thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues. This is our last episode for the year. But we will be back in late January with an interview with Renée Fleming and Susan Magsamen on the intersection of music, art, and health.
Innovations for Medical Progress
In “Medicine Means More Than Molecules” (Issues, Summer 2024), Sindy Escobar Alvarez and Sam Gill argue for the need to broaden investments in science beyond biomedical research. This is not just a matter of debate. It’s an urgent call to action that demands our immediate attention.
Over 35 years ago, Congress recognized that while increasing our understanding of the processes that lead to disease is essential for medical progress, it is wholly insufficient. Medical progress requires scientific advances along three intersecting paths: biomedical research to understand the physiological underpinnings of the disease process, technology assessment that allows new technologies and materials to be introduced into existing treatment protocols, and health services research in real-world settings, which recognizes that care innovations require external validity and not just a study’s internal validity.
These last two paths were why Congress elevated the National Center for Health Services Research and Technology Assessment to agency status, creating what is today known as the Agency for Healthcare Research and Quality. AHRQ’s unique scientific mission is to improve the health care quality available to everyone by producing scientific evidence, synthesizing the body of evidence for improving care quality, and disseminating resources, tools, and guidance for implementing innovations in our diverse health care delivery systems.
At the forum that Escobar Alvarez and Gill mention, I asked the attendees to raise their hands if they had developed an innovation in their careers to improve patient care. Almost everyone raised a hand. I then asked how many of them had had that innovation adopted by the health care delivery system where they worked. There seemed to be general surprise when no one raised a hand. However, this stark response clearly reflects the underinvestment in health services research and technology assessment to improve care, and the preference noted by the authors for funding “molecular research.”
With the right investments, we can reverse this trend and see the medical progress the nation deserves. We have the potential to benefit greatly from the interplay between the three scientific paths that lead to medical progress, offering a promising future for health care.
We have the potential to benefit greatly from the interplay between the three scientific paths that lead to medical progress, offering a promising future for health care.
We need to reverse this tide of misplaced funding by replacing the incentives that reward investigators for the number of publications they produce and the grants they bring to their institutions (in many settings, individuals are expected to fund their salaries this way) with incentives that reward scholars for “creative works,” which could include implementing care improvements and the outcomes derived from the science they produce.
To be sure, research and development is needed to transform our health care systems, deliver care differently, and meet the demands of an aging and more diverse society. No one would argue otherwise. To that end, AHRQ invests in new training approaches that embed investigators in care delivery systems to improve care, not just publish journal articles.
Improving health care quality in the United States is not just a goal; it’s a collective responsibility that we all share. It means investing in making health care safer and more effective and ensuring that it’s delivered more efficiently and equitably. It must also be patient-focused, ensuring that the care experience meets patients’ emotional and physical needs and raises their well-being.
Robert Otto Valdez
Director, Agency for Healthcare Research and Quality
US Department of Health and Human Services
Lessons From Baltimore for Participatory Research
In June, a research article in Nature Neuroscience started to tease out why African Americans are more likely to get Alzheimer’s disease and strokes and less likely to get Parkinson’s disease. Using brains donated from more than 150 deceased Black Americans, researchers quantified each individual’s proportion of European and African ancestry and correlated that with levels of gene expression. The study found ancestry was associated with different activity levels for genes related to immune response and blood vessels, but not for those related to neurons. The study also set the stage for exploring how different environmental factors—such as stress or pollution—can influence neurological health by driving changes to DNA that affect gene expression, part of the science of epigenetics. It is the first publication of the African Ancestry Neuroscience Research Initiative (AANRI), which I cofounded.
This is a landmark study in part because most biobanks, which compile biological samples for use in biomedical research, and analyses have focused overwhelmingly on people of European descent. As I wrote in an editorial accompanying the scientific paper, many large genome-wide association studies probing schizophrenia, Alzheimer’s, autism, Parkinson’s, and depression that included hundreds of thousands—or even millions—of subjects have failed to include a single Black person. (Indeed, many research analyses actively exclude data from minority populations for statistical convenience.) There’s also a disparity in who does the research. Although 14% of the US population is Black, our representation is much lower among scientists. From 2010 to 2021, about 3% of research project grants awarded by the US National Institutes of Health (NIH) went to Black scientists.
Many large genome-wide association studies probing schizophrenia, Alzheimer’s, autism, Parkinson’s, and depression that included hundreds of thousands—or even millions—of subjects have failed to include a single Black person.
In light of this, I want to point out that the establishment of AANRI signifies another, harder-to-recognize achievement: inclusive, participatory research. It is rare that the gowns of research universities truly engage with the towns they inhabit, but this initiative was born of an exceptionally tight relationship. It encompasses efforts from Baltimore community leaders, Morgan State University (the largest historically Black research university in Maryland), and the nonprofit Lieber Institute for Brain Development. And while there is still much more work to be done, I am proud to say that everything under the hood—from fundraising for the research to further building up Lieber’s biobank and sharing results—has been achieved with respect, representation, and a comprehensive definition of community.
Here, I’d like to share my thoughts on what enabled this deep level of engagement, and what could foster it more generally. I will draw my account from my own experience but want to be clear that many people lent their expertise and passion to the research initiative, each with their own unique and essential contributions. The incomplete list includes Lieber director and CEO Danny Weinberger, Lieber scientific project manager Gwenaëlle Thomas, and advisor Kafui Dzirasa, who studies neuropsychiatric risk genes at Duke University.
A long beginning
The African Ancestry Neuroscience Research Initiative officially launched in 2019, but the seed sprouted when I happened to meet Danny Weinberger, head of the Lieber Institute, a couple years prior. I was serving on the board of a local development project with the Harbor Bank of Maryland that made loans to projects deemed to bring value to the community. One such project we funded was a complex at 855 North Wolfe Street in East Baltimore. I decided to visit and see it for myself. That’s when I realized that the anchor tenant was not a grocery store or even a business, but a nonprofit research institute. I wanted to learn more.
This brought together many of my own interests: I was already a long-term participant in an NIH study on aging, in part because I appreciated the intense medical workups, and in part because I knew African Americans were underrepresented in research studies. I’ve been serving in Black churches since my earliest youth. Back in the late 1970s, I was part of a consortium of faith-based communities, Baltimoreans United in Leadership Development, or BUILD. For me, participating reinforced the importance of faith-based organizations and community organizing. In the 1980s, I became the executive director of the Southeast Vicariate Cluster of Churches in Anacostia in Washington, DC, where I explored how faith-based communities could play a role in health care delivery. I’d worked with seniors in health care facilities who didn’t have anyone to look out for them, and helped create an ombudsman program between churches and hospitals. At the time, the Southeast Cluster had funds from the Robert Wood Johnson Foundation to train people to look for warnings of ill health in frail and isolated members of their community. It was my first experience of a research grant. But even before that, I had been sensitive to issues of quality of care, as my family sought programs to help my older brother and soulmate, Charles, who would now be recognized as extremely autistic. He had a network of services available—but only if one knew how to seek them out.
Weinberger told me they had biological material to study neurological diseases in African Americans specifically, but said there was no funding to carry it out. That fact stuck with me.
Baltimore itself has a particular relationship to the medical research establishment that goes back generations and includes what might be described as both the best and the worst. In the realm of the best, consider neurosurgeon Ben Carson, famous for separating twin babies joined at the head, and cardiology pioneer Levi Watkins, who was the first to implant an automatic defibrillator. Both worked at Johns Hopkins University and helped form the many ties between that institution and the wider community. But Johns Hopkins is also where Henrietta Lacks sought treatment, and where her cells became part of globalized research without consent or compensation. And the memory of the community goes back further: one of my congregants had an uncle who was part of the infamous Tuskegee syphilis study.
This history meant that when I met Weinberger, I was ready to speak the researcher’s language—but I was not going in with rose-colored glasses. I soon learned about the care and sensitivity Lieber staff took when obtaining human remains to get consent from next of kin, working across four medical examiners’ offices. I learned that they had thousands of frozen brains donated by individuals and family members who hoped their bodies could aid science and lead to improved care. This had been on my mind when Charles passed in 2012. Weinberger told me they had biological material to study neurological diseases in African Americans specifically, but said there was no funding to carry it out. That fact stuck with me.
I had friends in state and national government and was able to lend my influence to make sure African Americans were included as genomic mapping projects were deliberately extended beyond those of European descent. I was also able to make a case to the Abell Foundation, dedicated to improving health, economic, and educational outcomes in Baltimore City, that they should help to finally unlock insights from the African American brains housed in Lieber’s biobank. More donations followed their lead.
Community and credibility
But it takes more than funds to create a true community project. So often community is overly circumscribed: perhaps a researcher considers only his or her lab or the larger university. Having been in ministry nearly three decades, I am naturally inclined to think of my community as my congregation and their families. (Congregants say I either hatch them, match them, or dispatch them.) But the AANRI went beyond Lieber and my congregation: the inclusion of Morgan State University brought insight, power, and credibility. Dedicated funding now supports collaborations between Lieber personnel and Morgan faculty, plus student internships and faculty training at the Lieber Institute.
Lieber leadership also brought in Gwenaëlle Thomas (who’d earned her PhD with Kafui Dzirasa at Duke) as scientific project manager to cement Lieber’s commitment to community and cross-institutional collaboration. This helped secure hands-on research programs for Sollers Point Technical High School outside of Baltimore and enabled the development of an applied neuroscience graduate program at Morgan State, where Thomas and other Lieber Institute colleagues also teach neuroscience courses.
If I am looking for participatory research, I am first going to identify the anchor institutions.
When I see a community, I see it through the lens of anchor institutions. These are often faith-based institutions or local universities. They are populated by people, of course, but they have intergenerational stories. My mother’s youngest sister graduated from Morgan State University in 1944, and she told me about the protests students held for equality in the parking lot—long before the more famous civil rights demonstrations in the 1950s and ’60s at the same school. These institutions have long-standing narratives—in the case of Morgan State, three generations in just my own family—meaning they have both credibility with their communities and a responsibility to them. If I am looking for participatory research, I am first going to identify the anchor institutions.
Even with my long experience in community organizing, perhaps the most surprising thing I’ve realized is how important it is to push yourself for an expansive definition of community, because essential members of your community may not be the people that you see every day. If you neglect to do this, it’s like playing a game of chess without putting all the pieces on the board. Done well, it’s a vibrant ecosystem where well-allocated talents yield more and more, constantly seeding mutual interests.
Lessons expand
Once the community is acknowledged, four principles come into play. I call them the four Rs: recognition, respect, relationships, and results. I first articulated these in a book chapter on community planning and resilience in 2004, and my experiences since have only confirmed their importance.
By recognition, I mean realizing that there are subject-matter experts within your community, and that those subjects can vary greatly—as can the formality of their training. One of my congregants alerted me to the NIH aging study and how important it was that data on African Americans were included. Consequently, my wife and I enrolled and participated for over a decade. The congregant had been participating for many, many years before I ever knew about it. It sounds trite, but just getting to know people better can put the expertise you need in your path. This has happened to me time and again. But recognition means more than seeing the expertise within people; it also means having a sense of who people are, what’s important to them, what they struggle with, where they find joy. So often in research, academics consider just the neighborhood where an individual lives; they don’t probe all the connections—to sororities, say, or churches, or other groups. Community organizers know to look at all the connective veins that reveal people’s interests.
Recognition is closely related to respect for expertise and how that expertise can lead to the success of an enterprise. Efforts prove more effective when the assumption is that people are competent, their ideas relevant, and both their needs and values worth meeting. The requirement for respect may be the most obvious in failures: a research study gets no volunteers because enrollment sites are inconvenient and the premise irrelevant to a community. Respect also means understanding different stakeholders’ pressures and realities. Patients might get impatient with the slow pace and ambiguity of research. Initially, Weinberger thought his job was to assure community members that he was doing good science, but he realized he needed to persuade them of something more visceral: that he wasn’t out to make anyone look bad. Community members also need assurances that the time and insight they share will not add too much burden to their already overfull lives, and that their contributions will improve a community that they care about.
Once the community is acknowledged, four principles come into play. I call them the four Rs: recognition, respect, relationships, and results.
Underpinning both respect and recognition are relationships. Too often, potential collaborators go straight to the task and overlook the person. But skipping over the relationship weakens collaborators’ ability to focus on the task. That doesn’t mean collaborators should be best friends and confidants. It does mean that the relationship needs to be more than transactional while also being mutually beneficial. If we don’t take the time to understand all the connections of people coming into research, then there is no scope for mutual self-interest, for selves intersecting. And then the relationship becomes limited, shallow, and fragile, reduced to transactions. When a deeper relationship exists, it is easier to move beyond transactions, because each interaction is just one of many events in something that is understood to be ongoing.
My relationship with Bob Embry, head of the Abell Foundation, is an example. When he decided to put in seed funding for AANRI’s project, it was a risky move for his philanthropy, which traditionally focuses on community development. But we already had several points of connection. We graduated from the same high school, many years apart, and knew of each other’s long histories of community work through various organizations. We had reached the point where we would bounce ideas off the other informally, so when I approached him about AANRI, he was ready to both share his skepticism and trust my instincts. He knew that our relationship benefited each of us, was based on shared goals of improving the quality of life for citizens of Baltimore, and would endure long past any single funding event—in part because we each knew how to say “no” to the other. Our relationship is not a one-off series of transactions; it is an ongoing exchange, the majority of which (including AANRI) has paid off in real results toward our shared goals.
The fourth R is results, because something should come of relationships, of people working together. I have compared achieving results to rounds of three-dimensional chess. The philanthropic and academic sectors often exist as silos. As a community organizer, I think about merging silos and making sure that everyone is playing the same game. The question is how to get people to agree to a particular course of action, and determine what is needed to catalyze it. Once people are working together, I busy myself communicating with many different stakeholders.
It’s crucial to understand that none of these principles can be achieved overnight. The process can be recursive, with results amplifying as projects continue over the years and expand to other communities. And that’s what inclusive research is asking for. With enough communication and commitment, competing interests are brought to heel, mutual interests flourish, and broad benefits abound.
When Oil and Gas Companies Go to School
Funding for university research programs by the oil and gas (O&G) industry is increasingly controversial in the United States. A 2023 Guardian article, for instance, described student “unease” upon learning that an Exxon employee maintained an office and teaching responsibilities at Princeton University. A 2024 bicameral congressional report, Denial, Disinformation, and Doublespeak: Big Oil’s Evolving Efforts to Avoid Accountability for Climate Change, found that more than 80 US universities receive funding from O&G corporations, which sometimes includes the payment of hundreds of millions of dollars and can go on for decades.
In light of such investigations, many students and administrators have concluded that all engagement with O&G companies should be off limits to academic researchers. Some academic institutions have not only stopped accepting research funding; they have also divested from any financial engagement with O&G companies, including withdrawing their endowment investments. While I am firmly in agreement that we need to take decisive, rapid action to address climate change, I argue that there is no good one-size-fits-all answer to the question of whether accepting O&G funding is ethical or compatible with a university’s objectives. Instead, I propose a framework that schools can use to evaluate potential research funding relationships with O&G companies in light of their own values.
I argue that there is no good one-size-fits-all answer to the question of whether accepting O&G funding is ethical or compatible with a university’s objectives. Instead, I propose a framework that schools can use to evaluate potential research funding relationships with O&G companies in light of their own values.
Researchers, administrators, and the broader academic community are best served by a practical approach to the question of engaging with corporations in the O&G business—one that appreciates the ubiquitous role of fossil fuels in the modern world in providing low-cost energy and transportation, the urgency in addressing climate change, and the variety of ways that labs and O&G companies can engage, as well as the many types of research that are possible. It’s also important to recognize the many important societal values that intersect around the O&G industry—including equity, climate change mitigation, energy availability and cost, societal resilience, and energy security.
Accepting research funding from O&G companies involves making complex trade-offs among those societal values. For example, research dedicated to helping O&G companies increase the supply of fossil fuel resources is the most controversial kind of engagement, but many people would argue that fossil fuels are an inexpensive and ubiquitous source of energy—particularly in regions where they are extracted—and a necessary part of an overall energy solution. Moreover, the expertise associated with fossil fuel extraction is increasingly being deployed for decarbonization efforts, such as carbon storage or geothermal energy.
Similarly, O&G companies are also investing in university research and development in areas like green hydrogen or sustainable aviation fuel production, again leveraging the deep expertise of the O&G industry in fuels processing, but toward a greener energy transition. For some observers, these societal costs and benefits warrant carefully designed partnerships, but others argue that any engagement with the O&G industry is inherently unethical because of the role these companies have played in climate change.
Naturally, different researchers engage with these different points of energy availability, resilience, and security in different ways. Moreover, while science can answer questions about the climate effects of putting more carbon into the atmosphere, it cannot quantify how to weight the important societal values of climate change mitigation, job stability, or energy security. Distinct academic communities, emphasizing different values, will understandably reach different conclusions about what kinds of relationships between the university and O&G industry are worthwhile or justified. The important thing is that these communities carefully examine the ramifications of those conclusions and proceed intentionally and thoughtfully.
Inherent in these considerations is that the various forms of engagement with the O&G industry can be radically different, and it is not helpful to paint all O&G engagements with the same brush. To help researchers, students, and administrators explore these funding relationships systematically, I have composed a table depicting a range of engagement points based on the funding purpose.
Although the trade-offs among items 1–4 are relatively straightforward, items 5 and 6 introduce additional conflict-of-interest considerations, particularly in the area of making policy recommendations, which could include government regulations or use of government funds. Managing such possible conflicts of interest is very difficult and may be impossible in some cases. And then there are more general issues of companies’ building goodwill with a community, which are covered in item 7. Here, a company may support a student, faculty member, or leadership position in a university, in much the same way as companies provide grants to a broad set of societal organizations (such as Little League Baseball) that have little relationship with their core business. A university may not want to approve such corporate relationships with its community.
By clarifying the different kinds of relationships and purposes of the research, academic communities can look more carefully and specifically at individual projects and their potential contributions. They can also discuss the dangers such projects raise for conflicts of interest. For example, funding for research aimed at making the extraction of oil cheaper or faster will be controversial in many university departments, while funding for research devoted to the development of sustainable carbon-free alternative energy technologies will be less so.
Determining the purpose of an engagement—from both the university’s point of view and industry’s—will not necessarily settle the question of whether to embrace a project or not, but it can serve as the foundation from which to launch a constructive debate.
Determining the purpose of an engagement—from both the university’s point of view and industry’s—will not necessarily settle the question of whether to embrace a project or not, but it can serve as the foundation from which to launch a constructive debate. There may even be cases where industry collaboration is essential for research progress because O&G companies hold the only data and expertise needed to conduct the research. There may be others where the conflict of interest—or perceived conflict—is so great it would disqualify almost any research.
The stakes could hardly be higher. It is imperative that society moves quickly to reduce its climate impacts, but also to support marginalized communities and avoid economic or social disruption. The university research community plays a central role in that process, but it must adhere to high standards. Fostering nuanced thinking and transparent disclosure processes can help balance the pursuit of knowledge and innovation with the imperative to promote sustainability, equity, and economic opportunities for all.
Making Impact Count
In “What We Talk About When We Talk About Impact” (Issues, Summer 2024), David H. Guston discusses the challenges in defining, measuring, capturing, and demonstrating the impacts associated with research and scholarly activities at institutions of higher education. After highlighting numerous efforts aimed at broadening socially impactful research, he concludes that much more needs to be done to expand the institutions’ reward and incentive systems to encompass these varied forms of impact.
At the US National Science Foundation, we are pleased to contribute to this transformational change through a range of new initiatives, the most significant being the establishment of a new directorate—the agency’s first in more than three decades. The Technology, Innovation and Partnerships (TIP) directorate, which was authorized by Congress in the CHIPS and Science Act of 2022, aims to accelerate technology development and translation, grow a new American innovation base for the mid-twenty-first century, and nurture a workforce of researchers, practitioners, technicians, entrepreneurs, and educators across all fields of science, technology, and engineering. TIP was specifically established to help reestablish the nation’s standing in key technology areas for decades to come.
These initiatives all require educational institutions and others to go beyond historic measures of impact.
To achieve this mission, TIP is both enhancing and scaling existing programs—such as the NSF Small Business Innovation Research (SBIR)/Small Business Technology Transfer (STTR), Innovation Corps (I-Corps), and Convergence Accelerator programs—and initiating new activities designed to support capacity-building, use-inspired and translational research, economic growth and job creation, and practical experiences to prepare all Americans for these jobs. For example, the Accelerating Research Translation (ART) program specifically targets building and strengthening the underlying infrastructure for technology transfer at institutions of higher learning, seeking to catalyze a culture devoted to creating and enhancing economic and societal impacts. It requires a diverse set of stakeholders, including senior leadership, technology transfer offices, faculty, industry, nonprofits, and investors, to work together.
The NSF Regional Innovation Engines program similarly encourages cross-sector partnerships to harness a region’s unique strengths and ultimately position it as a national and world leader in one or more key technology areas. And the NSF Experiential Learning in Emerging and Novel Technologies (ExLENT) program invests in regional cohorts of internships and apprenticeships.
These initiatives all require educational institutions and others to go beyond historic measures of impact, notably papers and conference proceedings, and take into account a range of practical quantitative and qualitative data such as invention disclosures, patents, licenses, revenues, start-ups established and acquired, talent trained in degree and certificate programs, and so on.
At a moment of intense global competition, the United States faces consequential decisions that will shape the evolution of its innovation enterprise—the envy of the world. We must continue to lead in curiosity-driven, foundational science, but we must also accelerate use-inspired and translational research. To do this well, we must promote a culture at higher-education institutions and other research organizations that not only acknowledges and rewards historic measures of success, but goes much further in welcoming tangible solutions for pressing real-world challenges in communities across the nation.
Erwin Gianchandani
Assistant Director for Technology, Innovation and Partnerships
US National Science Foundation
David H. Guston offers valuable contributions to addressing two essential questions: How should we evaluate the societal impact of research? And how should we build impact into the mission and work of research institutions to maximize their contributions to the public?
These two questions are top of mind for public and philanthropic funders. At The Pew Charitable Trusts, we convene the Transforming Evidence Funders Network (TEFN), a diverse collective dedicated to closing the gap between research and societal outcomes. Like universities, funders are increasingly motivated to demonstrate the public impact of their research investments. Many are gravitating toward approaches that explicitly foster dialogue and partnerships among researchers, community groups, service providers, or policymakers, drawing on evidence that these stronger connections can increase the chances of more expansive and equitable public impact. Some funders invest in engaged research to make creating knowledge more participatory. Others are bridging divides between researchers and decisionmakers through training and policy appointments or resourcing organizations and individuals who work where research institutions, communities, and governments intersect. Each of these investments aims to produce societal benefits while also developing useful knowledge for scholars.
Like universities, funders are increasingly motivated to demonstrate the public impact of their research investments.
While much progress has been made in understanding and measuring the societal benefits of research, refining and applying these measures requires dedicated and ongoing attention. As Guston notes, the path from ideas to impact, and from impact to outcomes, is complex, involving many actors with competing priorities. Measurement must reflect this complexity, holistically recognizing the many ways that knowledge influences the world. To that end, the William T. Grant Foundation, which is part of the TEFN collective, has proposed a framework for understanding the conditions that lead to the use of research, including infrastructure to sustain cross-organizational relationships, mechanisms for knowledge exchange, and trust. One possible evaluative tool would be to assess the degree to which research fosters these conditions, recognizing that using research in nonacademic settings is essential to achieving public impact.
Beyond measurement, expanded incentives and institutional infrastructure are needed for universities to holistically support public-facing scholarship. As our recent white paper showed, Guston’s university is not alone in thinking deeply about evaluating and rewarding impact. Penn State, for example, recently launched a presidential strategic initiative that includes financial awards for outstanding community-engaged research. Duke revised its tenure standards in 2018 to include a provision on public scholarship, offering guidance for faculty review committees to assess research contributions in the policy or public sphere.
These initiatives illustrate the groundswell of efforts to better recognize an expanded spectrum of research contributions, beyond contributions to academia. To build on this progress, the leaders of Penn State and Duke, along with more than a dozen of their peers, have joined a new effort facilitated by Pew: the Presidents and Chancellors Council on Public Impact Research. This Council will, over the next two years, highlight and strengthen efforts to support university infrastructure for public-facing research and demonstrate the many pathways by which research can shape positive outcomes for people’s lives and communities.
Guston’s article, like our work with funders and university leaders, shows the need for additional innovation and experimentation in encouraging research impact. It is vital that we understand what works, so that efforts at institutional change are grounded in rigorous evaluation. By testing promising approaches, including those that Guston outlines, we can design new paradigms in assessment, incentives, and institutional infrastructure that help our research ecosystem rise to its public purpose and address the most pressing issues of our time.
Angela Bednarek
Project Director, Evidence Project
Benjamin Olneck-Brown
Officer, Evidence Project
The Pew Charitable Trusts
While assessing the societal impact of research is a complex and nuanced endeavor, creating an environment to enable this work is equally challenging. Strengthening the capacity for research translation within institutions of higher education (IHEs) requires a holistic approach that addresses several interrelated components. In particular, IHEs will need to develop researcher capacity, bolster their infrastructure, and foster a vibrant research translation culture.
Researcher capacity. A fundamental step in enhancing research translation is to invest in developing the capacity of researchers to engage in translation activities. Workshops, mentorship opportunities, interdisciplinary training initiatives, and other professional development programs can help foster these essential skills. Decades ago, institutions began implementing programs designed to provide basic teaching instruction to PhD students. Today, academic leaders across disciplines should similarly augment doctoral program experiences to include formal instruction on science communication, policy engagement, or related research translation knowledge and skills. Beyond these general supports, IHEs should cultivate innovation-specific capacity for researchers. Institutions that provide access to strategic resources and tools that facilitate the cocreation of solutions involving researchers, practitioners, and community stakeholders will strengthen researcher capacity for research translation.
Strengthening the capacity for research translation within institutions of higher education requires a holistic approach that addresses several interrelated components.
Infrastructure support. An effective infrastructure is crucial for enabling and sustaining research translation efforts. IHEs should consider establishing dedicated units that focus specifically on supporting research translation activities. These units will build on and expand traditional research enhancement offerings to include, for example, liaison services with community organizations and tools for disseminating research findings effectively. By providing a robust infrastructure, institutions can streamline the process of translating research into practice and make research more accessible to broader audiences. Building institutional capacity also includes a review and alignment of the operational processes, policies, and incentive structures that facilitate (and at times inhibit) research translation activity. IHE leaders have an obligation to provide clear expectations and to implement practices that support those expectations.
Research translation culture. Establishing a culture that supports research translation is crucial. This involves creating an environment where the institution’s values align with the principles of translation. Notably, the values of diversity, equity, inclusivity, accessibility, and belonging can be tangibly advanced through research translation activities. Open access publication, for example, expands and hastens access to new discoveries. If IHEs value research translation and societal impact, then our systems should both support and recognize this essential work. Celebrating successful partnerships and showcasing impactful outcomes can further motivate researchers to engage in this essential work. Effective leadership is vital for guiding research translation initiatives toward a common strategic direction. IHE leaders should clearly communicate the importance of research translation and the role it plays in the university’s mission. By promoting an environment of transparency and open communication, leaders can ensure that all stakeholders understand the institution’s vision for translation efforts.
The needs are pressing, the time is right, and the rewards are substantial. As institutions of higher learning face growing calls to clearly define their value proposition, strengthening research translation can foster understanding and enhance societal impact.
Diana L. Burley
Vice Provost for Research and Innovation
American University
The Trap of Securitizing Science
In recent years, science and technology have emerged as critical domains reshaping the landscape of international relations and national security. In particular, the United States and the European Union have sought to enhance the security of critical research and development and academic research in response to perceived theft and exploitation by the Chinese state and associated companies. More broadly, with the West increasingly concerned by China’s rapid advance as a science and technology powerhouse, policymakers have argued that heightened protection of national research resources is a necessity. We ask whether security constraints may serve to throttle the very asset that builds economic competitiveness in the first place.
This tension is particularly evident in critical and emerging technologies, including artificial intelligence, quantum information technologies, and semiconductors. All three technologies are essential for meeting US national security objectives, defined as protecting the security of the American people, growing the economy, and defending democratic values. With China seen as the main competitor to the West—with the intent and ability to change the international order—Western policy responses have increasingly sought to limit China’s access to critical research resources.
Notably, China-US cooperative research is cited more highly than work published by researchers from either country working alone.
One example is the effort by the United States, Japan, and the Netherlands to restrict Chinese access to advanced chip-manufacturing technology. In order to prevent deliberate theft of these assets—and to keep so-called unwitting collectors from inadvertently transferring knowledge—US officials have stepped up oversight of scientific cooperation with actors in China. Although the US Department of Justice’s China Initiative, started during the Trump administration, has ended, some of its provisions, such as those for vetting foreign researchers, have been further developed and formalized by agencies including the Department of Defense. And in August 2024, the United States allowed the US-China Agreement on Cooperation in Science and Technology to lapse, at least for the time being. The pact had supported some 45 years of exchanges between researchers in the United States and China.
These developments mark a significant departure from recent history. During the first two and a half decades of the post–Cold War era, China and the West engaged avidly in scientific and technological cooperation, transcending geopolitical rivalries. The exchange has been productive for both sides—and for the development of scientific knowledge generally. Chinese researchers, working in the West and within China, greatly enhanced the stock of knowledge. Notably, China-US cooperative research is cited more highly than work published by researchers from either country working alone, according to analysis by one of us (Wagner).
It’s important to protect sensitive technologies. But perhaps too little attention is paid to the tradeoffs between scientific collaboration and its need for openness, its contribution to economic competitiveness, and the demands of security. Openness remains a critical component of a healthy research system. Friction around collaboration and trust may harm US innovation and discovery more than security will help it. What’s more, making research pay off—in the form of economic competitiveness and jobs—requires a different set of policy tools than those pointed at security.
In the 1980s, when the US economy was challenged by a rising Japan, national policymakers responded with changes to shore up the knowledge economy. They dramatically restructured R&D tax credits; overhauled antitrust policy to allow precompetitive research cooperation; and boosted government-industry cooperation with new contracting mechanisms. Industry and government partnered in support of semiconductor research through SEMATECH; patent policy gave universities bold new rights; and several other interventions bolstered economic growth. This time, we argue, the primary policy response to China’s advances has been to reduce engagement around research.
Policymakers should recognize that security alone cannot strengthen Western economic or scientific leadership; international collaboration has become an essential ingredient to scientific and technological advancements.
We believe that legitimate security concerns can be addressed without sacrificing research openness and the myriad benefits it brings. What is needed is a clear line between securitization and international cooperation: as secure as needed, as open as possible. Policymakers should recognize that security alone cannot strengthen Western economic or scientific leadership; international collaboration has become an essential ingredient to scientific and technological advancements. Moreover, security—whatever its real or perceived benefits—can be pursued to a point of excess, at which point it erodes democratic institutions and values.
The rise of openness, and of Chinese science and technology
China’s ascent as an important engine of development in science and technology can be traced to the late 1970s, in the wake of the Mao era. Upon becoming China’s paramount leader in 1978, Deng Xiaoping adopted the Four Modernizations program, which prioritized development in agriculture, industry, national defense, and science and technology. At the same time, Western powers interested in improving relations with China and in nudging Beijing toward a more liberal political system viewed science and technology as low-risk cooperation. Some of these powers, including the United States, eagerly promoted openness generally—and exchange with Chinese scientists in particular.
Although official actions helped create ties between Western nations and China, they also owed much to the informal efforts of scientists taking advantage of the opportunities enabled by the drawdown of the Cold War. The dissolution of the Soviet Union, the reunification of Germany, and the increasing integration of Europe freed up a generation of scientists to work together. The resulting self-organized networks of researchers were driven by powerful norms of openness and reciprocity.
As Chinese researchers’ capacity grew, they became integral parts of the global scientific community.
The global network that grew out of these emergent relationships proved to be constructive and attractive: there is a high correlation between a researcher’s prestige and the likelihood that the researcher works at the international level. By the 2020s, international collaborations grew faster than national ones for all the large Western powers. A query of the Web of Science database shows that, in 2022, internationally coauthored papers accounted for as much as 45% of scientific articles, depending on the field.
As Chinese researchers’ capacity grew, they became integral parts of the global scientific community. The effects have been profound, contributing significantly not only to China’s current status as a leader in fields including artificial intelligence and quantum computing, but also to science more generally. As one measure of productivity, the share of scientific papers published by researchers in China rose from less than 2% of the global total in 1990 to 25% in 2023, according to Wagner’s analysis. Authors affiliated with Chinese institutions now surpass US counterparts in both quantity and impact of research, particularly in critical technology areas.
Underpinning this output is a vast expansion of China’s educational and research infrastructure. From 1991 to 2018, according to the Organisation for Economic Co-operation and Development (OECD), China’s R&D spending surged from $13.1 billion to $462.6 billion, accounting for nearly a quarter of global R&D investment. In addition, the number of Chinese universities has tripled since 1990. The OECD reports that in 2020, China produced 3.6 million STEM graduates, compared to 820,000 in the United States. A nationwide network of government-funded laboratories complements Chinese industrial research efforts. It’s no wonder that, by 2022, nearly half of global patent filings came from China.
Researchers in China joined a global network of collaborators working together, prompted by the demands of their subject matter and the extra attention gained from global connections. This network grew organically at exponential rates in the years after the Cold War, as knowledge creation became unfettered from political constraints. Greater efficiencies emerged. The rules of engagement for what Wagner has described as a “new invisible college” were established by the researchers themselves—no global ministry of science set the conditions. Elite researchers sought one another out as collaborators (see, for example, the Nobel Prize–winning work of French microbiologist Emmanuelle Charpentier and American biochemist Jennifer Doudna on CRISPR). The network was stunningly vibrant and productive.
Increasing competition
The rapid advancement of China as it joined, benefitted from, contributed to, and exploited the global scientific and technological network now carries profound implications for global power dynamics. As history has shown, technological leadership shapes military capabilities, drives economic strength, and ultimately determines a nation’s position in the international system. China’s ascendancy threatens what scholars Jessica Weiss and Jeremy Wallace call the “liberal international order,” which has been underpinned by Western—particularly, American—technological supremacy and the liberal values enabling it. China, in contrast, has developed energetic science and technology sectors under an autocratic government that forges strong interconnections among the state, business, and science, in ways considered anticompetitive in the West.
The rapid advancement of China as it joined, benefitted from, contributed to, and exploited the global scientific and technological network now carries profound implications for global power dynamics.
US policies on research security started to intensify around the start of the China Initiative during the Trump administration. The initiative had little success in prosecuting alleged spies infiltrating US universities, corporations, and laboratories. Although the Biden administration formally abandoned the initiative, it has continued to push for increasing federal oversight of funding institutions and researchers. In January 2022, the White House issued guidelines requiring that researchers enhance their security practices in order to be eligible for government funding. Federal regulators are focused on several areas of control, including standardizing disclosure requirements for US researchers collaborating internationally, vetting collaborators, developing consequences for violation of the requirements, and imposing information-sharing rules.
European policymakers’ attitudes toward China have also hardened in recent years. In 2019, the European Union labeled China a partner, competitor, and systemic rival, indicating an intention to continue collaboration in trade, science, and technology—but also to counter a strengthening China and to reduce European dependencies on the goods and services it provides. The ambition of European policy, as proposed by European Commission president Ursula von der Leyen in 2023, is de-risking. Although it is still unclear what this will look like in practice, its purpose is clearly to enhance Europe’s capacity to supply its own needs and to prevent what von der Leyen called “forced technology or knowledge transfers” to China. In January 2024, the European Commission published a white paper proposing more robust export controls, heightened research security, more research on the dual-use potential of technologies subject to Chinese state appropriation, and enhanced assessments of risks due to outbound investments. In May, European Union member states adopted the European Council’s recommendations for enhanced research security.
Are the measures sound?
Increasingly, government actors in Western countries have reinterpreted scientific activities as security issues. As much as this is bad news for science, it is also not clear that proposed protective measures will meet other national goals, such as economic competitiveness and growth. Furthermore, it is unclear whether these measures can meaningfully address stated security threats or have any useful effect on reducing China’s domestic capacity. The evidence gathered so far on these policies’ effects suggests that they will fall short of their goals.
As much as this is bad news for science, it is also not clear that proposed protective measures will meet other national goals, such as economic competitiveness and growth.
Consider, for example, the issue of economic competitiveness. This relates to R&D in at least two ways: the first is the ability to come up with new ideas, and the second is the use of these ideas to create jobs and economic activity. One of the goals of recent China-focused policies has been to decouple from reliance on China in order to develop internal capacity and jobs. However, the costs of decoupling from China are high. Australia offers a case in point: it has lately adopted a hawkish approach toward China, yet 2023 saw a record level of trade between the two countries. Interdependencies forged over three-plus decades of globalization policies cannot just be severed overnight, particularly when consumers depend on the low-cost goods China supplies.
If the goal of securitization is to preserve the ability to come up with new ideas to build economic competitiveness and excellence domestically, then rigid measures designed to protect research institutions will be counterproductive. Excellence requires access to a global pool of talent. As North American and European education systems stagnate, the demand for foreign-born STEM talent will only increase. Far from being assured by securitization, European and North American competitiveness will more likely erode without international collaboration and researcher mobility.
The same is true of scientific productivity, which will be hard to sustain amid reduced collaboration or enhanced scrutiny. Since 2018, the number of US-Chinese copublications has been falling across a variety of research areas. Surveillance of researchers and institutions has led to broader declines in international collaboration, not just collaboration with China. US-European copublication has also dropped, illustrating the difficulty in surgically eliminating “undesired” exchanges.
If the goal of securitization is to maintain a thriving democracy, then current measures are similarly counterproductive.
Furthermore, if the goal of securitization is to maintain a thriving democracy, then current measures are similarly counterproductive. Policies for greater security may undermine democratic institutions. There is already evidence that scientists of Asian and Chinese descent felt that the China Initiative exposed them to xenophobia, racism, and ethnic profiling in scientific institutions. Not only does this undermine liberal values and unfairly target researchers, it also limits access to talent and stifles free exchange and openness, all of which are likely to sabotage competitiveness rather than bolster it.
Finally, another way that securitization risks undermining democratic institutions is through reflexive control—defined as “a means of conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action.” The China Initiative exemplifies this problematic strategy: established with the explicit goal of preventing economic espionage detrimental to national security, the program didn’t designate specific actions or sectors as problematic but ended up overwhelmingly targeting people of Chinese backgrounds. It should be unsurprising, then, that the effort produced few national security–related charges or convictions, instead producing allegations of misconduct such as animal smuggling, hacking of noncritical systems, and improper research methods. Those indictments that were brought numbered about 160, according to a study published in MIT Technology Review. The recent legislative effort to “develop an enforcement strategy concerning nontraditional collectors, including researchers in labs, universities, and the defense industrial base, that are being used to transfer technology contrary to United States interests,” if successful, is likely to augment the harmful effects of the original policy.
A better way forward
In the rush to securitize national science and technology systems, are policymakers working at cross-purposes? As political scientist Graham Allison has argued, today’s China-US tensions resemble what he calls the Thucydides trap: when a rising power threatens the supremacy of an incumbent, the resulting fear and overreaction can significantly raise the risk of confrontation. By extension, if securitization potentially undermines peace, it certainly undermines the capacity of national science and technology systems to advance the state of the art and the state of knowledge.
Perhaps because openness is mostly a product of everyday activities that require no state action, it is easy to forget that it has underpinned so much scientific development. Then too, the calculations of security agencies, by their nature and political mandate, tend to emphasize threats of foreign engagement and overlook the benefits. These agencies may also claim access to classified information, making it hard for university administrators, politicians, and the press to scrutinize their analyses. But it is even harder for security agencies to assess the impact of security measures on research productivity and the strength of Western science and technology systems.
This is not to say that researchers should ignore security concerns. Real security risks must be carefully defined if they are to be effectively managed. However, evidence from Swedish-Chinese research collaborations suggests that the challenges attending cross-border collaborations typically lie in the gray areas of discretionary responsibilities, not legal compliance.
Security measures must be aligned with ways in which science and technology thrive: an emergent system created by links among researchers. Knowledge is openly shared and often cocreated.
We hope that two new initiatives at the National Science Foundation (NSF) can help develop a better sense of which activities and conditions are truly problematic. Safeguarding the integrity and security of US research while keeping fundamental research open and collaborative is a goal of NSF’s Safeguarding the Entire Community of the US Research Ecosystem, or SECURE, program. NSF’s Research on Research Security Program aims to explore the challenges and identify critical areas of concern. The European Commission is aiming to create a similar initiative to develop a center of expertise on research security. These activities can help close the gap by offering clear boundaries and defining genuine risks and responses appropriate to research actors and environments.
To accomplish their goal of balancing security concerns against the benefits of openness, these centers must have political independence so that their analysts are able to carefully consider the tradeoffs between security and knowledge creation.
At the most basic level, security measures must be aligned with ways in which science and technology thrive: an emergent system created by links among researchers. Knowledge is openly shared and often cocreated, flowing readily to those who can absorb it. Absorbing new knowledge is usually fair game: it is neither illegal nor inappropriate, and all parties can use similar strategies to increase their capacity to absorb scientific information and scientific thought. Securing specific parts of the research system is only one part of the transformation that is needed to become smarter, faster, and more efficient at putting knowledge to use.
In other words, to retain thriving research and innovation systems, democratic states must learn to live with the advantages and disadvantages of openness. With this in mind, governments must clearly communicate what law enforcement, the intelligence community, and the research sector should expect. And, as they have in the past, governments should adopt a suite of deliberate policy strategies to achieve economic security, jobs, and competitiveness. Strategic ambiguity may have its uses in international relations, but science, dependent as it is on emergent networks of trust, cannot thrive when scientists and the conduct of science itself are objects of suspicion.
Will It Scale?
During the bloodiest battles of the First World War, a young French sergeant named André Maginot was injured in the fighting, earning him a medal for valor. He later rose to the position of French minister of war and became famous for building a series of defensive fortifications along the Franco-German border. The design of the Maginot Line, as it came to be known, was not the result of abstract musings. Instead, it was deeply informed by Maginot’s own experience—particularly his assessments about optimal military tactics in the fierce, close-quarters combat of the Great War.
But then the Second World War came, and the Maginot Line failed when Nazi Germany’s military machine circumvented the fortifications by invading France through Belgium. The skirmishes Maginot witnessed along the Western Front from 1914 to 1918 constituted a small-scale proof-of-concept, convincing him that extending fortifications along the entire Franco-German border would deter invasion. Instead, the strategy ended up being a deadly manifestation of what social scientists refer to as the scaling problem: the efficacy estimated from small or pilot programs shrinks or evaporates when programs are expanded. The static defenses of the Maginot Line were effective for narrowly delimited battlefronts, where the enemy had the option of either going forward or retreating, but they failed at a larger scale, where the invading army could just choose another path.
A twenty-first-century team of civil servants and social scientists should lead with experiments that anticipate likely causes of failure at scale, even if doing so requires more time, effort, and resources initially.
To this day, orthodox scientific methods remain aligned with the ideas about scaling that animated Maginot’s failed fortifications. This is exemplified by the biomedical trials used to evaluate new pharmaceuticals. Ideas are first tested in a restricted environment, such as a petri dish or, in Maginot’s case, various battles in the mud-soaked fields of the Western Front. If an idea works in the petri dish, that is taken as a signal to scale it up systematically. We refer to this approach—testing no intervention versus testing an intervention under a limited (usually the best possible) scenario—as A/B testing.
The A/B testing approach invites promising early results that are unlikely to be realized in a larger setting. We argue that within the social sciences, a fundamentally different approach is needed; we call it option C thinking. Put simply, a twenty-first-century team of civil servants and social scientists should lead with experiments that anticipate likely causes of failure at scale, even if doing so requires more time, effort, and resources initially.
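To make the contrast concrete, here is a minimal simulation sketch, written in Python with entirely invented numbers, of how an idealized A/B comparison can overstate what an option C arm, sampling messier real-world implementation quality, would reveal:

```python
# A toy illustration (all numbers invented) of A/B testing versus an
# "option C" arm that samples realistic implementation quality.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # participants per arm

control = rng.normal(loc=0.0, scale=1.0, size=n)  # A: no intervention
ideal = rng.normal(loc=0.5, scale=1.0, size=n)    # B: intervention, ideal delivery

# C: at scale, delivery quality varies, and most implementers dilute the effect
effect_at_scale = rng.choice([0.5, 0.2, 0.0], size=n, p=[0.2, 0.5, 0.3])
realistic = rng.normal(loc=effect_at_scale, scale=1.0)

print(f"A/B estimate of the effect:   {ideal.mean() - control.mean():.2f}")
print(f"Option C estimate at scale:   {realistic.mean() - control.mean():.2f}")
```

In this toy setup the A/B comparison recovers the full effect, while the option C arm, by drawing implementation quality from a realistic mix, reports the smaller effect a full rollout should actually expect.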
Scale-up letdowns
Statistical flukes, well-intentioned errors, cognitive biases, other oversights, and even willful deceit readily boost estimates of effectiveness at the proof-of-concept stage. This produces seductive, unreliable evidence that can lead decisionmakers astray, particularly if they are seeking out research for new solutions to old problems.
The actual conditions under which an intervention was implemented differed from the idealized form in which it was tested.
Scaling ineffective programs wastes money and time, and can harm people by blocking more promising alternatives. Consider the Drug Abuse Resistance Education (D.A.R.E.) program, which built on social inoculation theory and aimed to inoculate kids against the temptation of drugs. An early study in Honolulu found D.A.R.E. to be effective, estimating only a 2% chance that the data had produced a false positive, and the government scaled up the program. Subsequent studies found that the program did not work: either the researchers made a statistical error or the result fell within that 2%. Another example is a small-scale experiment in Seattle, in which one of us (List) and colleagues found that Uber users who got a $5-off coupon took more rides than those who did not receive the coupons, and the increased earnings from those rides offset the cost of the discount. But when the initiative was scaled up to a larger group of Seattle riders, the shortage of drivers resulted in higher fares and wait times, which led to an overall decrease in demand.
In both cases, the actual conditions under which an intervention was implemented differed from the idealized form in which it was tested. Considering that difference is the crux of option C thinking, and this requires a thought process we’ve called “backward inducting from reality.” A relevant example comes from a preschool List and colleagues started in Chicago Heights, Illinois, in order to identify programs that could decrease the achievement gap—the Chicago Heights Early Childhood Center. A typical A/B test would recruit stellar teachers and compare learning in our program versus a traditional one. But we realized that, at a scale of thousands of schools, not every teacher could be exceptional. So we designed our study to examine whether our curriculum would work with teachers of varying abilities. We recruited teachers who would typically come and work in a school district like Chicago Heights. This choice provided the A/B efficacy test because we had several stellar teachers, but by populating option C as well, we ensured that the situation was representative—at least on the dimension of teacher quality.
For another example of option C thinking, consider how the American company Opower, alongside Honeywell, implemented a new smart thermostat with great energy-saving promise. It would adjust the temperature when occupants weren’t home and reduce costs by turning off during peak hours. When taken to scale, however, the benefits failed to materialize. A team (including List) came in to determine what went wrong: it turned out most customers had undone the power-saving settings. With option C thinking, the engineers developing the technology would have tested it not only with the optimal settings but also with the settings end users actually chose. Had they done so, the engineers could have considered ways to encourage people to use the energy-conserving settings before taking their product to scale.
Causes of “voltage drop”
Scaling failures are often dismissed with a handwave or a shrug, as if what happened was unforeseeable. However, we have found that such failures stem from five general causes. In a 2022 book, The Voltage Effect, List linked the idea of scaling failures to the metaphor of voltage drops. Just as voltage along an electrical cable decreases with distance from the source, the effectiveness of a tested intervention falls as its real-world implementation departs from the idealized trial.
The first reason for failure to scale is the false positive, or a statistical fluke in the original research. Especially for small-scale experiments, the probabilistic checks used to conclude that a program works are not foolproof; the possibility that a positive result was a lucky but ultimately misleading inference can never be ruled out.
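As a hypothetical illustration of this first cause, the short Python simulation below (the sample sizes and the 2% threshold are assumptions, the latter echoing the figure from the D.A.R.E. study) runs many small pilots of a program with no true effect and counts how many nonetheless clear the statistical bar:

```python
# Simulate many small pilots of a program whose true effect is zero,
# and count how many still look "effective" at a 2% threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pilots, n_per_arm, alpha = 1000, 30, 0.02

false_positives = 0
for _ in range(n_pilots):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(0.0, 1.0, n_per_arm)  # the program does nothing
    result = stats.ttest_ind(treated, control)
    if result.pvalue < alpha and treated.mean() > control.mean():
        false_positives += 1

print(f"{false_positives} of {n_pilots} ineffective programs passed the test")
```

Roughly 1% of these do-nothing programs will look effective, and if only the “successes” are sent on for scaling, every one of them is a future voltage drop.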
Scaling failures are often dismissed with a handwave or a shrug, as if what happened was unforeseeable.
The second and third causes relate to epistemological representativeness. Sometimes, the population used in a positive trial looks very different from the general population that the program will be rolled out to. For example, when testing energy efficiency programs, early adopters are often those most excited about conserving energy and therefore most responsive to the intervention. However, the population at large is far more likely to contain disinterested and obstinate energy consumers. Health and education programs have failed for similar reasons, including not accounting for varying needs by age, wealth, and other demographics. The other epistemological mismatch occurs when the situation or context during the trial is unrepresentative, as in the case of the Maginot Line failing to account for the possibility of circumventing the fortifications. Another example is COVID-19 vaccination messaging that failed to incentivize people who were reluctant to receive the new vaccines.
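A minimal sketch of the representativeness problem, again with invented numbers: an energy program is piloted on volunteers who skew enthusiastic and then rolled out to a population dominated by indifferent and obstinate consumers.

```python
# Invented numbers: measured savings depend on who ends up in the sample.
import numpy as np

rng = np.random.default_rng(2)
savings = {"enthusiast": 3.0, "indifferent": 0.5, "obstinate": 0.0}  # kWh/day

def average_savings(mix, n=10_000):
    """Mean measured savings for a population with the given type mix."""
    types = rng.choice(list(savings), size=n, p=mix)
    true_effect = np.array([savings[t] for t in types])
    return (true_effect + rng.normal(0.0, 1.0, size=n)).mean()

pilot = average_savings(mix=[0.80, 0.15, 0.05])  # volunteers skew enthusiastic
scale = average_savings(mix=[0.10, 0.50, 0.40])  # the population at large
print(f"Pilot estimate: {pilot:.2f} kWh/day; at scale: {scale:.2f} kWh/day")
```

Nothing about the program changes between the two runs; only the mix of people does.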
The fourth cause of failure is negative spillovers: a program has a positive effect on those enrolled but a negative one on the unenrolled, an effect imperceptible in small samples that shows up when the program is expanded. For example, in one test, increasing Uber drivers’ salaries by raising fares led to more hours worked. However, at scale, the benefits were muted because some of the extra hours included greater efforts to steal passengers from fellow Uber drivers, rather than more time transporting paying passengers.
A fifth cause of the scaling problem can be attributed to supply-side factors, reflecting the fact that the unit costs of small-scale experiments can be misleadingly low. For example, Saudi Arabia has considered plans to introduce Chinese language lessons to all children at school. Hypothetically, a small trial would be relatively inexpensive because it would employ a few local teachers already qualified in the language. However, at scale, costs may rise sharply because the government would need to greatly expand the supply of instructors, which would likely mean paying professionals to relocate to Saudi Arabia.
Underplaying scaling problems
Unfortunately, current research infrastructure and incentives contribute to the scaling problem. Overly eager—or intentionally corrupt—scholars and program officers may artificially inflate the results of small-scale trials and so increase the likelihood a program expands—and then fails.
When drawing on research to design public programs, decisionmakers should take their cue from the scientific community and find approaches to ensure findings are reliable.
One way scientists may inflate results is by rerunning experiments (or analyses) until they get lucky, and presenting that lucky trial as their sole attempt. Alternatively, they can handpick the participants in their trial to maximize the effect of the program, exploiting their knowledge of who is most likely to benefit. They might overwork their PhD students and artificially decrease program delivery costs. The possibilities are endless, but the consequences can be devastating for the budgets and credibility of government entities that roll out ineffective programs. Civil servants need to be wary of scientific doping.
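The rerun-until-lucky tactic is easy to demonstrate. In the hypothetical Python sketch below, the same null experiment is repeated twenty times and only the best-looking attempt is reported:

```python
# Rerunning a null experiment and reporting only the luckiest attempt.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def one_pilot(n=30):
    """One pilot of a program with zero true effect; returns its p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

p_values = [one_pilot() for _ in range(20)]
print(f"First attempt:  p = {p_values[0]:.3f}")
print(f"Best of twenty: p = {min(p_values):.3f}")
# At a 5% threshold, twenty tries produce at least one "significant"
# fluke with probability 1 - 0.95**20, roughly 64%.
```

Preregistration and mandatory reporting of all attempts are the standard antidotes; the point here is only how cheap the doping is.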
Fortunately, countermeasures exist to combat these practices, whether they stem from overly eager, incompetent, or deceitful scientists. For example, the scholarly community has started to demand replication of certain scientific findings by independent teams of scientists before results are afforded credibility. Increasingly, researchers are required to post all results and data, which can reveal cherry-picking or otherwise biased analyses. When drawing on research to design public programs, decisionmakers should take their cue from the scientific community and find approaches to ensure findings are reliable.
Countering scale failures by testing bigger
In our opinion, option C thinking enhances reliability in the social sciences. It effectively means asking—before research begins—why an idea would fail at scale. In our experience, such reasons are not difficult to anticipate if one has the discipline to face the question.
Leveraging option C thinking may begin with a new treatment arm, such as testing a smart thermostat with typical end users rather than enthusiasts. Ultimately, option C thinking is a mindset that augments the traditional A/B approach by proactively producing the type of policy-based evidence that the science of scaling demands. In a nutshell, it means starting with a recognition of the big picture and anticipating what information is needed to scale up. The goal is to gather the evidence that provides greater scaling confidence from the initial design.
As economists like to say, there is no such thing as a free lunch: adopting an option C approach brings significant risks and downsides that decisionmakers must bear in mind. A key virtue of A/B testing is that an individual trial is cheap, even though the cost of mistakenly scaling an intervention could be catastrophic. Implementing option C thinking builds that consideration into the equation before the (overly encouraging) results come in. It also accommodates instances where executing a trial will have large fixed costs, or where benefits might only be apparent at a large scale.
The higher cost of option C testing brings another risk that is easy to overlook. A decisionmaker entertaining proposals for research or interventions will naturally be biased toward submissions from famous professors working at elite scientific institutions. The greater the proposed cost, the more acute this bias will be. Smaller entities, with their more limited means, struggle to submit credible bids, and policymakers look to hide behind the safety of a big name lest the experiment turn out to be a total failure. Thus, there is the possibility that option C thinking reinforces existing inequities in science and pushes the process of knowledge production closer to a winner-takes-all format.
Option C thinking should be part of the toolkit belonging to any scholar hoping to influence policy, and any decisionmaker involved in program implementation.
There are no straightforward solutions to these valid concerns. However, just sticking to A/B testing is far from ideal. Option C thinking should be part of the toolkit belonging to any scholar hoping to influence policy, and any decisionmaker involved in program implementation. When government officials are considering new programs, they must resist the natural urge to fixate on small-scale testing as a first stop for ideas, and instead be open to the benefits of starting with a large-scale experiment that captures more of the big picture. We’d also like to see the social sciences adapt, so that researchers reflexively design studies to consider the most likely causes of failure in advance, priming public programs for success rather than failure.
While option C testing requires going big, there are some small steps that government officials can take to support it. The simplest is to explicitly require evaluators to consider the five causes of voltage drop, which will bolster research that anticipates stumbling blocks. This might include working within the scientific infrastructure on how grants are evaluated, contracts awarded, and publications assessed. Another step is to select a handful of priority areas that can particularly benefit from option C thinking and provide sufficient resources, ideally while also introducing mechanisms to ensure a diverse group of researchers is recruited.
We will never know what might have happened on May 10, 1940, when Hitler commanded his forces to outflank the Maginot Line, had Maginot previously been exposed to the modern epistemological analysis of the scaling problem. However, by adopting option C thinking, decisionmakers give themselves the best chance of conceiving of and rolling out effective programs at a time when resources are scarce and the public’s trust in government is worryingly low.
A Cutting-Edge Bureaucracy
The word “bureaucracy” conjures up images of red tape and long lines at the DMV, not cutting-edge innovation. But some of the most significant scientific and health innovations of the past century have actually come from scientist-bureaucrats at government research institutes.
On this episode, host Jason Lloyd is joined by Natalie Aviles, an assistant professor at the University of Virginia and author of An Ungovernable Foe: Science and Policy Innovation in the US National Cancer Institute. Aviles explains what the National Cancer Institute does and how the mission and culture of the agency have enabled its scientist-bureaucrats to conduct pioneering cancer research, such as the invention of the vaccine for human papillomavirus, or HPV.
What do you think when you hear the word bureaucracy? For many, it’s the image of annoying red tape and stodgy paper pushers. Because I have a four-year-old son, I picture the DMV run by sloths in the Disney movie Zootopia. But some of the most significant scientific and health innovations of the past century have actually come from scientist-bureaucrats at government research institutes.
I’m Jason Lloyd, managing editor of Issues. I’m joined by Natalie Aviles, an assistant professor at the University of Virginia and author of the recent book, An Ungovernable Foe: Science and Policy Innovation in the US National Cancer Institute. She adapted a chapter of the book for Issues in an article about the NCI and its role in developing the vaccine for human papillomavirus, or HPV. Natalie talks to us about what the National Cancer Institute does and how the mission and culture of the NCI have enabled its scientist-bureaucrats to perform cutting-edge cancer research. Natalie, welcome. I’m delighted to talk to you about your research and the little-known history of the National Cancer Institute.
Natalie Aviles: Yes, thank you for having me.
Lloyd: I thought a good place to start would be if you could talk about what the National Cancer Institute is and how you became interested in studying it.
I was interested in telling a much more straightforward history of viral cancers, but this one agency kept popping up over and over and over again.
Aviles: So the National Cancer Institute is the largest agency within the National Institutes of Health, and it is an agency whose mission is specifically to both do research related to cancer and to try to work to improve the health of the American population related to cancer. The way that I became interested in studying the National Cancer Institute is that I initially was interested in studying viruses. I’ve always been really interested in viruses and I chose as an undergraduate to become a sociologist, but I retained this interest in biomedicine and biology and how we can think very humanistically about our relationship to things like viruses. So, I became interested in cancers that were caused by viruses.
I was interested in telling a much more straightforward history of viral cancers, but this one agency kept popping up over and over and over again as I was doing this research on the oncologists and on the basic researchers who were doing the most work on viral cancers. That was the National Cancer Institute. So, being a sociologist, I am also very interested in how we organize science. So, what I ended up doing was taking a little bit of a deeper dive into, “Well, what is the National Cancer Institute? Why are people so focused on viral cancers and what’s the history of this organization that, in fact, does a lot of things in biomedicine in general?”
It helps distribute grants to the tune of billions of dollars in taxpayer money. It also has this intramural or in-house research program where people who are employees of the federal government conduct their research in government labs. Yet there’s very little that’s actually been directly written about this organization. So, this ended up becoming the question, and I maintain this focus on viruses and viral cancers and vaccine development because I am personally still very interested in that. I think it’s a fascinating topic, but I really am writing about this organization and what this organization is doing as an organization that has this very important position within the entire research ecosystem of the United States.
Lloyd: You mentioned in your book that the NCI’s mission has shaped some of the most significant innovations in both cancer research policy and public health-oriented breakthroughs of the past 70 years. So, I don’t want you to summarize your entire book, but could you just talk about some of those breakthroughs and innovations and how they came about at NCI?
Aviles: One of the most interesting histories within the National Cancer Institute that’s been told is this really interesting history of what seems to be failure turns out to plant the seeds for future success. So, in the 1960s, there was this very big push within the institute to demonstrate that cancers are caused by viruses. This was a hypothesis that a lot of researchers who worked at the National Cancer Institute were very invested in and early on they identified retroviruses as potential candidates for pretty much universal viral causes of human cancers. A lot of money was put into this. It was seen as being a NASA-like moonshot-style project for trying to develop these universal vaccines to prevent cancer.
It was seen as being a NASA-like moonshot-style project for trying to develop these universal vaccines to prevent cancer.
It turned out that they were not correct about their basic hypothesis that retroviruses cause cancer, but they were actually close. They were just looking at the wrong thing. It turned out to be the laboratory of Harold Varmus and J. Michael Bishop that ended up demonstrating that it was different genes inside of human cells that are responsible for causing cancer. But most of their research was actually funded by the National Cancer Institute, and it was actually in an effort to try to prove this hypothesis about retroviruses causing cancer that they proved the opposite, right? They noticed these aberrations and then they tested it and they realized that these National Cancer Institute scientists had it backwards, but they were on the right track.
So, that was seen as a big failure at the time in the 1970s. In the immediate aftermath of the discovery of these cellular oncogenes, as they were called, the genes inside of cells that are actually responsible for causing human cancer, a lot of people saw this as a very embarrassing hit against the National Cancer Institute. Fast forward just a few years, though, and another group of scientists working in the National Cancer Institute, led by Robert Gallo, discovered the first retrovirus that actually does cause human cancer. It was Robert Gallo’s laboratory, which had this very focused expertise on retroviruses, that ended up being incredibly central to proving that HIV is the viral cause of AIDS.
So, it was Gallo’s work along with the work of a few other people like Luc Montagnier in France that contributed to our understanding of HIV as the viral cause of AIDS. So, this is a very long-running investment that the National Cancer Institute has made. They ended up doing a lot of the work to develop some of the first antivirals, antiretrovirals, that were effective against HIV. Those efforts were led by a lab PI-ed by Samuel Broder, who was in the clinical oncology program in the National Cancer Institute. He subsequently became the director of the National Cancer Institute.
At the time that he was director, a group of National Cancer Institute scientists led by Douglas Lowy and John Schiller discovered the enabling technology for a vaccine against another cancer-causing virus, this time a DNA virus called human papillomavirus. This is a virus that’s responsible for the overwhelming majority of cervical cancers as well as, it turns out, a lot of other anogenital cancers and increasingly many head and neck cancers. So, this is a fairly ubiquitous virus in the human population globally, and it’s one where we have developed a lot of very effective vaccines for preventing these viral cancer-related deaths, in no small part due to the efforts of these National Cancer Institute scientists.
Lloyd: So these are primarily the intramural scientists that you’re talking about.
It really helps us understand how knowledgeable experts who are employed as civil servants in the US federal government play a really crucial role in helping to ensure that the policies that we develop are not just scientifically sound, but also accountable to the public.
Aviles: Yes. So, the book is really an exploration of the intramural laboratories in the National Cancer Institute. This is a small part of the NCI’s budget. At different points in history, it’s been between 10 and 18% of the Institute’s entire budget. So, it’s not what most people think about when they think about the National Cancer Institute. When most people think about the National Cancer Institute or the NIH that it’s part of, they tend to think about extramural grants. So, these are grants that get awarded to academic scientists and scientists working in research institutes. So, the intramural program has largely been neglected when we talk about policy.
One of the things that I’m able to demonstrate in the book is that actually it plays a really special role in shaping policy and one that we ought to look at because it really helps us understand how knowledgeable experts who are employed as civil servants in the US federal government play a really crucial role in helping to ensure that the policies that we develop are not just scientifically sound, but also accountable to the public.
Lloyd: Could you describe that environment of the intramural scientists a little bit? Are they all on a campus together? Are they all in Bethesda? I actually don’t even know where they’re located.
Aviles: Yes. So, the main campus of the entire NIH is located in Bethesda, and there’s something that’s a little bit special about the intramural program in the National Institutes of Health generally, which is that this is a place where a lot of expert scientists with really great reputations are concentrated together in a way that we actually don’t often even see in universities. One of the things that intramural scientists talk about that they think makes the National Institutes of Health and the National Cancer Institute special is that they really are at these agencies in order to do science and sometimes to be involved in this policymaking process and like the basic bureaucratic administrative tasks of making sure that science policy works. But they’re not teaching.
Unlike university scientists, they don’t have to teach. Unlike scientists working for commercial firms or in private industry, they aren’t accountable to the same market-focused managerial impulses. So, they really feel like they are able to do science, and they’re able to do really high-risk, high-impact science, because the mission of these agencies is also very big, right? The mission is really to make a big impact on the health of the populace. So, one of the things that many people who are part of the intramural program throughout the National Institutes of Health will say is that this is a really special environment for them to work in. It allows them to take these risks that end up paying off in very big ways sometimes.
Lloyd: Let’s talk about one of those risks. One of the most interesting aspects of the story that you tell in your Issues piece is the development of the HPV vaccine. I was hoping you could tell us what lessons the folks at NCI, specifically Samuel Broder and then you talk about John Schiller and Douglas Lowy, what they learned from their experience with the development of HIV/AIDS drugs that they then applied to the development of the HPV vaccine.
Aviles: This is a really fascinating story and one that I think very powerfully demonstrates how the scientific missions and the bureaucratic missions of the National Cancer Institute can work together to create a very distinct science. The story is that Samuel Broder was part of this—what he considered—very crack team of interdisciplinary scientists at the National Cancer Institute. It was assembled among people who worked in the intramural program in the National Cancer Institute because very soon after cases of AIDS were first being reported roughly in 1982, people started to realize that this had the potential to be a very serious epidemic.
They also realized that the National Cancer Institute had spent so much time and invested so much money and developed so much expertise in viruses and in immunology that they were uniquely positioned to really address this public health crisis. So, it was working with this laboratory scientist, Robert Gallo, that Samuel Broder developed this idea that if you can get people in the lab together with people in the clinic, you can orient yourself towards this mission to make sure that these “basic science” developments are actually going to deliver something very useful in a short time span when it comes to either clinical outcomes or public health crises of the nature that people were dealing with when it came to the HIV/AIDS crisis.
It was that experience of going bench to bedside that Samuel Broder really was motivated by when he became director of the National Cancer Institute.
So, it was these lessons of working with laboratory scientists who were trying to identify the agent that’s actually causing AIDS, and then immediately trying to take that knowledge of “here is the virus and here are the pathways” and look for antiretroviral agents that would have some immediate clinical effect, so that they could just start getting this crisis under control. It was that experience of going bench to bedside that Samuel Broder really was motivated by when he became director of the National Cancer Institute. There’s this very intuitive metaphor of translational research: basically taking what’s going on at the laboratory bench and translating it, making it make sense for clinical practice.
This really became a policy paradigm starting in the 1990s, and the National Cancer Institute had a huge role to play in that. Samuel Broder committed very strongly to this idea of translational research, and it was the work that he did working with laboratory scientists trying to find agents that would help to combat this disease very rapidly that inspired him and his vision of what translational research can look like.
So, when he came into the directorship of the National Cancer Institute, one of the things that he decided from his position as NCI director is that the institute had focused a lot of its investments in extramural research on cultivating basic research, which is very valuable, but that there was this whole other dimension of making sure that that basic research could actually be incorporated into medical practice. So, what he did is say, “We need to develop some funding instruments to make sure that we are actually putting some money into strategically trying to get some of these new and exciting developments at the laboratory bench to have some impact as quickly as possible.”
So he developed all of these novel funding instruments, and sometimes it was taking an existing funding instrument like the P01, which is a project grant and saying, “We’re going to make sure that these are really interdisciplinary, that they have a clinical focus, and that they’re translational, that they’re doing what we say needs to be done here to make sure that our investments in basic research are actually having that impact that the agency is supposed to be accountable for when it comes to human health.”
Then he spearheaded the development of this other more centers-based program, which is called SPORE, the Specialized Programs of Research Excellence, which tended to focus on specific kinds of cancer or specific organ sites and actually get these interdisciplinary teams together. So, that they could take these cutting-edge findings and then actually try to figure out what use they’re going to have clinically. So, this was the first way that he really took his experiences, doing that interdisciplinary AIDS research, and applied it to our funding policies and our funding practices at the national level to try to ensure that the research that we’re supporting with taxpayer funds is actually making an investment in the most promising basic research, but then also having some application to human health and population health.
Lloyd: He set up these novel funding mechanisms and had that translational goal in mind because he saw some shortcomings in the pharmaceutical market space that he was looking to make up for, or what was the motivation that he saw to doing this?
Aviles: Yes, Broder had a very distinct experience as someone who was trying to develop a clinical armament to combat the AIDS epidemic. Broder, because of his very close ties with laboratory scientists, had already identified this class of antiretrovirals called nucleoside analogues as the most likely class of drugs that was going to be effective in combating HIV. So, he went to a series of private companies and asked them if they would be willing to partner by sending, basically, their pharmaceutical catalogs, so that his team could rapidly test all of the nucleoside analogues that had been developed so far.
They felt they needed to build in greater safeguards to ensure that any kinds of drugs that were developed by the federal government using taxpayer money would actually be more accessible to taxpayers.
So, one of these companies was Burroughs Wellcome, and they sent a series of compounds for Broder and some of his collaborators at Duke to test. Broder very quickly identified AZT as a promising compound to combat HIV/AIDS and rapidly was able to move into clinical trials. His vision though at the National Cancer Institute was that private companies were going to be true partners. They were going to help run the clinical trials and help with the testing. Burroughs Wellcome actually didn’t do that. It backed out of the clinical testing very early.
So, what ended up happening is that the National Cancer Institute was responsible for most of the early screening testing in both animals and humans that demonstrated the efficacy of AZT as an anti-AIDS drug. But unbeknownst to Broder and his team at the National Cancer Institute, Burroughs Wellcome had actually applied for a patent for AZT in the United Kingdom. So, they were able to essentially claim full credit for the development of AZT as an anti-AIDS drug, even though it was this team of National Cancer Institute scientists who had done most of the work. This really upset Broder, but it upset him most of all because when AZT hit the market as a branded medication, it was astronomically expensive. It was $10,000 a year. Given the concern for trying to stem the tide of this very significant public health crisis related to HIV/AIDS, Broder was very upset by the price tag. A lot of activists and other scientists were very upset.
They felt they needed to build in greater safeguards to ensure that any kinds of drugs that were developed by the federal government using taxpayer money would actually be more accessible to taxpayers who are suffering from these diseases. So, when Broder became director, he was very instrumental in making some changes at the level of the entire National Institutes of Health to try to competitively license any new drugs or technologies that were developed by intramural scientists using taxpayer funding.
So, the approach to competitive licensing that he used is to ensure that any drug or any new technology developed by employees of the National Institutes of Health has to be licensed to at least two different private companies, so that they would compete with one another when a branded medicine based on this patent hit the market. The idea was that this would drive the cost of medications down, and so we would realize taxpayer savings this way. So, he actually was able to change the patenting and licensing practices in the National Institutes of Health to reflect this.
Lloyd: So could you tell me then how these new practices were applied to the development of the HPV vaccine? Am I right in thinking that partnering with pharmaceutical companies is still necessary because the NCI can’t manufacture drugs at scale, or I guess, how did the NCI maintain some control over drug development while still partnering with industry?
Whenever any of these inventions actually happen within the federal government, there still is this relationship of dependency on private industry.
Aviles: Yes. So, this is a really important thing just to keep in mind for understanding how innovation works in the US. The way the innovation ecosystem has been developed is that private industry has always really had the native productive capacity for turning any new developments in basic science into commercially scaled drugs or technologies. So, even people who are employed within the federal government, if they want to see their innovations or their inventions become products that are useful to people, they really depend on private industry to develop these new interventions to scale and distribute them through the market, which is how we tend to distribute healthcare in the United States.
So, whenever any of these inventions actually happen within the federal government, there still is this relationship of dependency on private industry. They need someone to actually develop these new technologies to scale. In the case of National Cancer Institute scientists, they are specifically motivated by the idea that they can eliminate cancer by preventing it very quickly and very easily through these safe and effective vaccines. So, their interest is to get as many vaccines as possible into as many bodies as possible around the world, so that they can eliminate cancer. They recognize that this objective is in tension with the objectives of private industry, which are to make profitable drugs.
So, a lot of these policy decisions are playing out in this very interesting space where they have to balance the concerns of government scientists, which are oriented towards doing the maximum to realize the public good that it’s possible for them to do, and then the concerns of companies that want to be able to profit. So, one of the interesting things about the National Cancer Institute, just to bring this back to the high-risk research: when I talked to Douglas Lowy and John Schiller, whom I interviewed for this book, they pointed out that neither of them really had that much experience in things like vaccinology or vaccine development. This was pretty new for them.
So, this is one of these amazing stories where they actually weren’t necessarily heading out to create a vaccine against the human papillomavirus, but they saw a scientific opportunity. They were driven by curiosity. Then once they realized, “Oh, we have a vaccine,” they then were able to realize like, “Oh, this could be a huge intervention. It could be one of the most significant things we can do to actually improve women’s health around the globe,” because vaccines are very effective and a vaccine against cancer is a very big deal. They developed this vaccine in the climate of a National Cancer Institute that had just changed its patenting and licensing practices to reflect the competitive co-licensing that Samuel Broder had developed.
So, when they licensed this technology, they co-licensed it to two companies. Merck was one. Another company was called MedImmune, which shortly after transferred its interest to GlaxoSmithKline. So, these are two very large global pharmaceutical companies that have pretty good vaccine development portfolios, and they were able to compete with one another to try to get this human papillomavirus vaccine on the market. What Schiller and Lowy from their position at the National Cancer Institute realized is that, well, this could be a very effective global health technology.
They were familiar with some of their colleagues in epidemiology who were conducting a study on the natural history of human papillomavirus in Costa Rica, which is a Central American country where there’s a large rural population and not as robust a healthcare infrastructure. So, it looks a lot more like the developing nations where most of the cervical cancer morbidity and mortality actually happens. So, they joined this study and created a new phase two/three human trial arm. It was in the process of doing this that they decided that they were going to follow up on some of the women in this trial who had dropped out for one reason or another.
They were certainly driven by this desire to have the greatest possible impact on public health, whether it was profitable or not.
The original protocol for this vaccine was to administer the vaccine in three different doses over the course of 9 to 12 months. This is an expensive vaccine because it has to be refrigerated. So, it needs total cold chain storage from beginning to end, and that makes it a really difficult technology to administer in low resource environments where there’s not a lot of infrastructure. So, following up with women who had only received one or two doses of this vaccine, they realized that that was enough to create a sufficient immune response to protect women against human papillomavirus.
They wanted to follow up with these women themselves, assuming that private pharmaceutical companies would not do this because it’s not really in their interest to spend more money following people who have essentially dropped out of the trials. They followed these women who dropped out of the trials to see if fewer doses would actually be just as effective. When you think about the priorities of private companies, saying “let’s do fewer doses” is not really in alignment with an interest in maximizing profit. So, they were certainly driven by this desire to have the greatest possible impact on public health, whether it was profitable or not.
The really amazing outcome of this is that they were able to demonstrate that two shots and even one shot of the human papillomavirus vaccine was sufficient to prevent over 90% of cervical cancers and other HPV-related cancers. The World Health Organization now recommends one shot as an effective public health intervention. So, the key was that these National Cancer Institute scientists were really oriented towards trying to make the biggest possible impact on public health, without regard to national borders and without regard to whether this was going to be a profitable technology or not.
They were able to realize that they could have a much more effective global health campaign if they used fewer doses of this vaccine that they helped develop. So, that’s a really interesting story where you can see the difference that it makes for scientists to be in an agency where their mission is really to, on the one hand, do the best science that they can, but then on the other hand, make the biggest impact on public health possible. So, being driven by that dual mission of the National Cancer Institute where they work was really instrumental in shaping this technology.
Lloyd: There seem to be a lot of policy implications for the innovations that you’re describing happening at the National Cancer Institute, and I want to talk about them in two ways. So, one, maybe it’s the mission, maybe it’s the structure or the organization itself, but what about the NCI is maybe applicable to other federal agencies trying to do innovative work in the public interest? I’m thinking of maybe the National Institutes of Health more broadly, but also other smaller agencies that may be attempting to do similar things. What can they learn from NCI’s experience that you describe in your research?
Aviles: Yeah. So, the National Cancer Institute is a really great example of how to maximize the effectiveness of doing high impact mission-driven research. The reason the National Cancer Institute is so effective at meeting both its scientific and its policy goals has to do both with its structure and with the organization’s culture. The structure that makes policy at the National Cancer Institute so effective is the fact that this is an organization where leadership has to defend its scientific priorities in front of other scientists on advisory boards who have a great perspective on what the scientific community wants. So, they have to be accountable to other scientists and their representatives, but then they also have to be accountable to Congress.
They’re working scientists who have specific expertise and can converse with their scientific advisors, but they also have experience as bureaucrats. They’re having to make sure that policy can reasonably be interpreted as a good use of taxpayer funding.
So, it’s this accountability structure of having to be accountable both to the scientific community and to representatives of the broader populace that I think interacts very well with the mission of the agency. This is what manifests, I think, in the organization’s culture in ways that are really, really effective. The National Cancer Institute has historically promoted from within. So, people who do very well in the intramural program as researchers have opportunities to really be instrumental in the administration of the agency. That means they get to be in the room when these policies are created, and they have this very unique expertise because of that.
They’re career scientists. They’re working scientists who have specific expertise and can converse with their scientific advisors, but they also have experience as bureaucrats. They’re having to make sure that policy can reasonably be interpreted as a good use of taxpayer funding. So, in the book and in this article, I talk about how there’s this unique brand of scientist-bureaucrats in leadership. It’s a hybrid role between working scientist and bureaucrat, filled by people who are deeply involved and invested in the policy outcomes of the agency.
I think the lesson is that you can create these hybrid roles in agencies and back them up with this oversight structure where people are continually in dialogue, not just with other experts outside of the agency, but also with representatives of the broader populace. It’s the combination of these things that really closely couples the agency’s mission to the policy outcomes at that agency.
Lloyd: That is really interesting. The second way I wanted to talk about the policy implications of NCI’s work and the context in which it occurs is something that you touch on a little bit in the article. Earlier this year, the Supreme Court ruled in a case called Loper Bright v. Raimondo that federal judges will now be the arbiters of what an agency is allowed to do, whether or not it is exceeding its statutory authority. Previously, the judiciary would grant deference to people working at the agencies themselves.
That was something called Chevron deference that had been in place named after a court case, I guess, 40 years ago, was it? Maybe it was 1984. So, I’m curious what this means for an organization or for an agency like NCI. A lot of people are concerned about what it means for regulatory agencies such as Environmental Protection Agency, but what does it mean for an agency, for an organization like the NCI?
Aviles: The National Institutes of Health hasn’t really been a big part of the conversation because the National Institutes of Health doesn’t do a lot of rulemaking. So, most of our discussions of Loper Bright and its implications for scientific expertise in the federal government have really focused, rightfully I think, on regulatory agencies that do a lot of rulemaking. But I think we need to pay very careful attention to what a ruling like Loper Bright can mean for the capacity of mission-oriented agencies to actually work as effectively as they can.
We need to pay very careful attention to what a ruling like Loper Bright can mean for the capacity of mission-oriented agencies to actually work as effectively as they can.
One of the things that is very distinctive about a lot of the relationships between Congress, which authorizes these agencies and their budgets every year, and an agency like the NIH, is that if you look at any given statute that comes from Congress to the NIH, there’s usually not a lot of specification in those statutes about what the exact scientific priorities are that the NIH should be investing in, or what instruments they should use when they do this. So, this has meant a lot of autonomy for experts who work for the federal government to actually determine, based on their own scientific expertise, where we should be making our investments. It’s allowed them to be very nimble and very responsive to the state of the science.
So, up until now, this has been a very convenient arrangement that I think has been very conducive to making sure that government science policy is very cutting edge when it issues from these mission-oriented agencies. But we’re in a position now where there is potential for a lot of ambiguity about when something that comes from Congress should be interpreted as law or as a matter of more routine agency-level policymaking. What the dissenting opinion in the Loper Bright case pointed out is that this is a Pandora’s box when it comes to thinking about executive agencies, and this includes the National Institutes of Health and the National Cancer Institute within it.
We actually don’t know what’s coming next, but what we can see is that Loper Bright is part of a much larger initiative to basically try to scale back a lot of agency autonomy. One of the major takeaways from my work is that agency autonomy is actually really conducive to good science when you have other oversight structures built in that make sure that science can be accountable to the broader scientific community. So, I think in this environment of uncertainty, we risk losing some very effective science policymaking instruments, because we have created conditions where agencies might be a little bit more timid in actually taking what are often very sparse mandates from Congress and trying to do the best that they can by looking at the state of the science.
So, I think that we also need to pay very careful attention to other federal agencies like the National Institutes of Health when we’re having these conversations about things like Loper Bright, because a lot of the expertise that makes an organization like the National Cancer Institute so effective at both science and policy is the fact that it relies on the expertise of these federal bureaucrats, right? It’s people who work for the National Cancer Institute who help set the agenda for policy around the country.
Lloyd: To learn more about the National Cancer Institute’s role in creating cancer research breakthroughs, check out Natalie Aviles’s book, An Ungovernable Foe: Science and Policy Innovation in the US National Cancer Institute. Find links to it and much more in our show notes.
Please subscribe to The Ongoing Transformation wherever you get your podcasts, and write to us with questions and comments at podcast@issues.org. Thanks to our audio engineer, Shannon Lynch, and producer, Kimberly Quach. I’m Jason Lloyd, managing editor of Issues. Thanks for listening.
How Federal Science Agencies Innovate in the Public Interest
Like many scholars of federal science policy, I awaited the Supreme Court’s ruling in Loper Bright Enterprises v. Raimondo with some trepidation. When it came down on June 28, 2024, the decision overturned the Chevron doctrine, which for four decades held that courts should defer to the interpretations of expert government agencies when congressional statutes are ambiguous. Defenders of the nation’s many expert regulators expressed concern that the Loper Bright decision will hobble the federal government’s ability to protect citizens from potential harms by sidelining the judgment of experts in favor of the courts.
While philosophies of governance vary, there is sound political theory behind the practical strategy of delegating policy decisions to expert civil servants in federal agencies. Legislators and their staffs often lack the technical expertise to fully specify the ins and outs of legislation that deals with complex matters of science and technology governance (so, it is worth noting, do the courts). While much of the discussion around Loper Bright has focused on regulatory agencies such as the US Environmental Protection Agency and the Food and Drug Administration, the potential impact on federal agency authority goes much further.
Many important expert agencies in the federal government are not regulatory at all, but instead are oriented toward missions that involve them in the production and development of science and technology with far-reaching political and economic consequences. Among these mission-oriented federal science agencies, the National Cancer Institute (NCI) offers a compelling illustration of why allowing expert scientists the discretion to broadly interpret their agency’s statutory mission enables not only sound policy, but a brand of scientific innovation marked by a distinct commitment to serving the public good.
The National Cancer Institute offers a compelling illustration of why allowing expert scientists the discretion to broadly interpret their agency’s statutory mission enables not only sound policy, but a brand of scientific innovation marked by a distinct commitment to serving the public good.
The National Cancer Institute is the largest single patron of cancer research in the United States. As part of the National Institutes of Health (NIH), the NCI is chartered with a dual mission: first, to sponsor research into the causes of cancers; and second, to apply the results of such research to the treatment and amelioration of these diseases. Today the NCI is best known for its role in awarding grants to external academic researchers studying fundamental biological mechanisms and testing clinical treatments for cancer. But one of the most fascinating dimensions of the agency’s history can be found in its intramural, or in-house, scientists’ expertise in cancer virus research and vaccine innovation. As a consequence of their scientific successes, many of these intramural researchers rose to positions of bureaucratic leadership, where they participated in administrative governance and crafted policy alongside their day-to-day scientific research.
Wearing both scientific and bureaucratic hats, the NCI’s hybrid “scientist-bureaucrats” came to interpret their research projects through the administrative work they did to serve the NCI’s dual mission, and vice versa. The close interconnection between scientific and bureaucratic practices allowed NCI actors to develop distinct policy expertise that profoundly shaped the trajectory of biomedical research innovation and governance in the United States toward public health-relevant science. In crucial instances, NCI scientist-bureaucrats leveraged the agency’s mission to create new policies and programs to help develop life-saving innovations and distribute them to populations in need.
It is precisely this kind of bureaucratic discretion that the Loper Bright decision (and other movements to limit the authority of federal civil servants) now threatens to undermine. As dissenting justices argued, Loper Bright returns administrative agencies to a pre-Chevron world where routine policymaking could become a “font of uncertainty and litigation”—a world where the policies that led to some of the NCI’s most celebrated bureaucratic and scientific innovations would be impossible.
The policy paradigm of “translational research” in biomedicine—that is, the notion that fundamental “benchside” research should be developed into useful tools for treating patients at the “bedside”—found an early champion in Samuel Broder of the NCI’s intramural Clinical Oncology Program. Early in the HIV/AIDS epidemic, Broder helped lead a collaborative effort between intramural laboratory scientists and clinical researchers to rapidly identify antiretroviral candidates and move them into clinical trials.
Broder’s work was instrumental to the development and testing of nucleoside analogs, the first class of drugs (including AZT and ddI) that effectively combatted HIV/AIDS. Broder considered the presence of so many intramural scientists and clinicians in close proximity to one another on the NIH campus, all working in service to a public health mission, critical to the ability to move knowledge and materials between bench and bedside early in the HIV/AIDS epidemic.
Broder firmly believed that the flow of knowledge between lab and clinic should be bidirectional, and drew upon his experiences in HIV/AIDS drug development to lay the groundwork for a nationwide translational research infrastructure.
When he was appointed director of the NCI in 1989, Broder brought his distinctive philosophy of translational research to bear on the agency’s dual mission to support science and improve health. Broder firmly believed that the flow of knowledge between lab and clinic should be bidirectional, and drew upon his experiences in HIV/AIDS drug development to lay the groundwork for a nationwide translational research infrastructure. Looking at extramural grant mechanisms, to which the majority of the NCI’s budget is allocated, Broder flexed his bureaucratic muscle to forge new funding mechanisms specifically designed to nurture translational collaborations in the academic community. He rehabilitated multidisciplinary P01 project grants, which support groups of investigators, as collaborative translational alternatives to the oft-siloed individual investigator R01 grant. He also oversaw development of the Specialized Programs of Research Excellence (SPORE) grant, which established multidisciplinary centers dedicated to translational research on some of the most common cancers in hospitals throughout the nation. Translational P01 and SPORE grants challenged the status quo favoring basic research and emphasized the NCI’s mandate to ameliorate the national burden of cancer.
Broder also had a vision for translational research in the intramural program. He had been disillusioned over his experience with private industry drug development during the HIV/AIDS epidemic; though Broder and his colleagues had been largely responsible for the development and testing of AZT, the pharmaceutical company Burroughs Wellcome had gone on to claim full patent rights for the drug. Leveraging their control over the patent, Burroughs Wellcome set an exorbitant price for AZT that galled public health advocates—sometimes into overt protest.
Learning from these bitter experiences with AZT drug development, Broder envisioned the NCI as an alternative “‘pharmaceutical company’ working for the public” and not private profit. He pushed for patenting and licensing reforms throughout the NIH that would ensure innovations developed by intramural scientists would be competitively licensed so their prices could be affordable enough to benefit the global populace.
Biomedical innovation as a public good
A test for this vision of intramural translational research came in the form of a new vaccine against the human papillomavirus (HPV). The HPV pathogen is responsible for the majority of cervical cancers and many other anogenital and head and neck cancers. John Schiller and Douglas Lowy of the NCI’s intramural Laboratory of Cellular Oncology made an important breakthrough in 1991, when they demonstrated that one of the virus’s harmless outer proteins could be made into a safe and efficacious subunit vaccine, delivering only small portions of a microbe into the body in order to elicit an immune response without introducing any of the microbe’s disease-causing genes. As Schiller and Lowy noted early in human trials, “an effective HPV vaccine may have a greater potential for reducing worldwide cancer burden than any other currently conceived anticancer program.”
However, producing the vaccine at scale required overcoming several barriers. A major one was that only the private sector has the capacity to produce vaccines at market scale. This meant that Schiller and Lowy’s HPV vaccine—like Broder’s nucleoside analogs before it—would have to be licensed to a private company for commercial development if it were ever to see the light of day.
However, Broder’s earlier experiences had influenced the way the HPV patent was developed. Broder and the NIH legal team were determined to improve the government’s approach to intellectual property to better ensure the public, and not merely private firms, would benefit from federal innovation. They developed a patent-licensing policy that stipulated patents developed by federal scientists working at NIH must be licensed to multiple private companies. The principle of nonexclusive licensing for NIH-owned intellectual property was intended to drive down drug prices by putting private companies into competition with one another. At a time when direct government intervention in drug prices was considered a political nonstarter, competitive co-licensing agreements were seen as one of the most effective strategies federal agencies could leverage to lower drug prices while ensuring private companies would translate new discoveries into scalable medical interventions.
Broder and the NIH legal team were determined to improve the government’s approach to intellectual property to better ensure the public, and not merely private firms, would benefit from federal innovation.
Schiller and Lowy’s HPV vaccine technology was co-licensed to Merck and MedImmune (which transferred its license to GlaxoSmithKline soon after human trials began). Schiller and Lowy decided to continue independent research on the vaccine, teaming up with colleagues in the NCI Division of Cancer Epidemiology and Genetics to conduct intramural trials to run in parallel with those conducted by the pharmaceutical companies. NCI scientists were concerned that either of the companies could make business-related decisions to discontinue clinical trials for the vaccine for any number of reasons the NCI inventors had no control over.
Furthermore, Schiller and Lowy suspected that information gleaned from the clinical application of their laboratory’s findings could inform further research that might improve cancer outcomes in the future. Their decision to conduct NCI-sponsored clinical trials thus reflected their investment in the NCI’s dual mission: on the one hand, they believed that knowledge obtained from these trials could improve research on the virus and its disease manifestations; and on the other hand, they wanted to ensure the vaccine would be available as a global public health tool whether industry found it profitable or not.
Whereas the pharmaceutical companies conducted phase II and III trials of their branded HPV vaccines on adolescents in high-income countries, the intramural NCI clinical trials took place in Costa Rica. This location allowed NCI scientists to orient their vaccine research toward the women in low- and middle-income countries who bore the overwhelming burden of HPV-related cervical cancer morbidity and mortality—in part because of the difficulty of providing regular Pap screenings in low-resource settings.
Yet the very things that made routine Pap screening a suboptimal public health strategy in areas that lacked health care infrastructure also frustrated the original HPV vaccine delivery schedule. A recombinant subunit technology that required cold chain storage for three doses to be administered over 9–12 months proved difficult to implement in low-resource settings.
Whereas the pharmaceutical companies conducted phase II and III trials of their branded HPV vaccines on adolescents in high-income countries, the intramural NCI clinical trials took place in Costa Rica.
Recognizing these hurdles, the NCI team found value in tracking women who did not complete the three-shot protocol rather than dropping them from the study, as drug companies are wont to do. Following these women revealed whether the vaccine was sufficiently immunogenic to require fewer shots, thus reducing the vaccine’s burden on patients and providers in low-resource settings. The NCI team’s follow-ups soon demonstrated that even women who received only the first shot of the three-shot protocol showed an immune response sufficient to suggest protection against the targeted HPV strains several years after administration.
Their findings led Schiller and Lowy to advocate for further research into a one-shot protocol for the first-generation vaccine as the most logistically practical and cost-effective alternative for vaccinating populations in developing countries. Based on the NCI’s findings and mounting evidence from other trials conducted in low- and middle-income countries, the World Health Organization recommended adopting a single-shot HPV vaccine regimen in 2022.
Making good on a dual mission
For NCI leadership, the story of HPV vaccine innovation was a translational triumph. It illustrated, according to NCI director John Niederhuber, how “basic discoveries arising from population studies, molecular biology, and immunology can be rapidly translated through public and private research efforts to solve significant public health problems, and in this case, perhaps the elimination of cervical cancer as a threat to women’s health.” The role NCI scientists, clinicians, and epidemiologists played in inventing the enabling technology and testing it in under-resourced populations illustrates how NCI scientists combine a commitment to producing knowledge about cancer with a motivation to improve public health—especially in instances where they believed private interests would prioritize profit over the global population’s needs.
NCI scientists combine a commitment to producing knowledge about cancer with a motivation to improve public health.
At a time when funding for extramural NCI grants is ever-shrinking, some scientists question the value of maintaining an intramural research program that consumes 16–18% of the agency’s budget every year. Yet the value of the NCI’s intramural program is greater than the research it conducts: it also encompasses the training of hybrid scientist-bureaucrats who are able to develop agency policies and scientific projects that would otherwise be impossible. Straddling the standards of the scientific community and the demands of federal science policy, the NCI’s scientist-bureaucrats are vital to the agency’s ability to cultivate science and policy that serve the public interest. By elevating researchers whose work has enhanced public health to leadership positions, the NCI has ensured that congressional investments in science are directed not only toward promising fundamental research, but also toward the ultimate end of improving human health.
Knowledgeable and publicly accountable policymaking is an art that scientist-bureaucrats learn by doing, making the NCI’s intramural program a vital incubator for both the experts and the policies that help make the nation’s cancer research effort thrive. This arrangement is one we stand to lose at our own peril. While science policies are never perfect, they are better when they are informed by the experiences of such mission-driven experts who have committed themselves to the betterment of science and the public good.
Sound Therapy
If given the choice, I will always choose the quiet of nature over the jangle of artificial sounds. Imagine my shock, then, when I developed tinnitus. The onset of this constant sound—a relentless mechanical buzzing in my own head—was close to unbearable.
My angst at experiencing tinnitus was exacerbated because I am a neuroscientist and know that successful interventions are scant. My consultations with family practice and specialist physicians reinforced this dismal reality. In talking with fellow tinnitus sufferers, I quickly concluded that the medical community shrugs its collective shoulders when faced with this condition—despite it negatively impacting the quality of life of more than 740 million people worldwide. This isn't completely unexpected, however; treating complex, poorly understood neurological conditions characterized by powerful subjective experiences is one of our modern medical system's shortfalls.
Luckily for me, the Missouri spring brought the hatching of the Brood XIX cicadas. Yes, the cicadas. Their singing perfectly masked my tinnitus and provided relief, allowing me to spend parts of each day tinnitus-free. With time, these periods of relief lessened my overall anxiety about the condition and allowed me to develop adaptations. Searching for information via online communities, I discovered that there is research supporting my anecdotal experience. The frequency of summer insect sounds (e.g., cicadas, crickets, locusts) can provide tinnitus relief. In the music of these insects, I found solace.
If the singing cicadas could give me back my peace of mind, what healing powers can be found in the transcendent sounds that make up the music created by gifted artists? This question was the impetus for a collaboration between the National Institutes of Health (NIH) and the John F. Kennedy Center for the Performing Arts, which involved noted musicians, writers, scientists, and clinicians. One product of the collaboration is a volume of essays, Music and Mind: Harnessing the Arts for Health and Wellness, edited by the internationally renowned vocalist Renée Fleming.
Music and Mind is composed of 41 essays arranged into seven thematic sections that explore, among other topics, the evolutionary roots of humanity’s musical abilities, the use of music in therapeutic and educational settings, and the many ways music enriches our lives. As is often the case with multi-authored volumes, there is an unevenness in prose quality and readability. Such unevenness is particularly evident in Music and Mind because contributions from professional writers such as Ann Patchett and Richard Powers are juxtaposed with essays by researchers and clinicians. While the humanists’ essays are brief, pithy, personal, and powerful, the scientists’ essays tend to be overlong, detailed, and wide-ranging. I kept imagining how the material might have been more compelling if a single talented writer—say, Oliver Sacks, were he still alive—had been commissioned to pull the disparate pieces together.
If the singing cicadas could give me back my peace of mind, what healing powers can be found in the transcendent sounds that make up the music created by gifted artists?
The foreword by Francis Collins, the now-retired NIH director who is both an accomplished scientist and musician, describes the research on the relationship humans have with music as a "still young field." Perhaps. But this claim is hard to square with a history documenting such connections dating back to antiquity. In a paper published in 2009, the psychiatrist and neurologist Assad Meymandi wrote, "Since ancient times, music has been recognized for its therapeutic value. Greek physicians used flutes, lyres, and zithers to heal their patients. They used vibration to aid in digestion, treat mental disturbance, and induce sleep. Aristotle (384–322 BCE), in his famous book De Anima, wrote that flute music could arouse strong emotions and purify the soul. Ancient Egyptians describe musical incantations for healing the sick."
In my own experience as a neuroscientist, the connections between music, cognition, and health have long been an active area of research in the sciences. This has been driven by the many scientists who are also musicians or are discerning and knowledgeable consumers of music, a phenomenon Collins also mentions in his foreword. I suspect Collins's reference to the "young field" emerges from an enthusiasm in the neurosciences for the use of brain-scanning technologies developed over the past few decades that allow scientists to gain information on how human minds interact with music. The "neuroarts" can now take its place among the other neuro-hybrids—neuroaesthetics, neurolaw, neuroeconomics—spawned by the use of functional imaging technologies to map the processing of complex and subjective human experiences.
With imaging technologies, researchers can point to areas of the brain that "light up" as we listen to, create, and move our bodies to musical sounds. But it's questionable what this will add to our understanding of the importance of music to human life that is not already evident from the behavioral and social sciences. The essays in this volume did not convince me of a need for the neuroarts. The value of the neuroscientific turn in the humanities has been debated for some time, and I remain a staunch neuroskeptic. My preference is to see effort and resources spent on studies that richly characterize how we interact with music in all its complexity as it unfolds in real time in real-world contexts.
The connections between music, cognition, and health have long been an active area of research in the sciences.
Although the length, technical detail, and unevenness of the prose in the scientific essays are unlikely to make them attractive to nonscientists, there are exceptions. Oddly, it is not until page 434 that the reader is introduced to the fascinating biology of the auditory system and its ability to convert pressure variations in a medium such as air into the wide array of sound perceptions we experience. It might have been helpful if this essay, by the eminent cognitive neuroscientist Robert Zatorre, who has long been interested in how brains process sounds, including music, had come a bit earlier in the volume.
The clinician-authored essays raise interesting questions that seem answerable with careful observational studies—no brain scans required! For example, when incorporating music—and "music" in this context includes singing and dancing—as part of treatment programs, there is good evidence of its effectiveness in facilitating movement in Parkinson's patients, improving memory and mood in people suffering from Alzheimer's disease, alleviating anxiety during depressive episodes, and building capacity in patients with respiratory conditions. How can clinicians shape these interventions to increase effectiveness or better match patients with interventions? Is it better to use carefully curated playlists that can select for particular attributes (say, rhythm) or to allow patients to choose their own music for salience or motivation? Is it more effective to sing alone or in groups? To experience prerecorded or live music? I suspect there are hundreds of observational trials being carried out in nursing homes and senior centers every day—so how might we make better use of such data?
What comes across clearly in the writings by the therapists and clinicians who work with patients are the limits of scientific understanding of music in therapeutic settings, despite its widespread use. The traditional tools of biomedical research, such as biomarker measurements and randomized controlled trials, are not well suited to investigating multifactor interventions that have widespread physiological and psychological effects.
I suspect there are hundreds of observational trials being carried out in nursing homes and senior centers every day—so how might we make better use of such data?
Among the thoroughly enjoyable humanists' essays, I was thankful for the contribution by Richard Powers, the award-winning novelist, who writes a brief, lyrical meditation on the classic Irish song of loss, "The Parting Glass." Powers's essay gently reminds readers that our brains and bodies are inseparable and exist each moment in a particular context of time and place. It is the wholeness of our experiences that matters.
One concern about the general applicability of the scholarship represented in these essays is that the music at the heart of the scientific research, particularly brain imaging studies, tends to be Western classical music. In therapeutic studies, I suspect pop music predominates, although it, too, would likely be drawn from WEIRD cultures (that is, Western, educated, industrialized, rich, and democratic). Notable exceptions are the inclusive musical traditions examined in the essay by the Grammy Award–winning percussionist Zakir Hussain, and in the essay by Marisol Norris and Esperanza Spalding, which offers a rich discussion of Indigenous traditions. A luminous cross-cultural exploration of musical traditions is found in the inspirational prose piece by acclaimed cellist Yo-Yo Ma, reflecting his own efforts to bring diverse musical traditions to global audiences.
If you do happen to come into possession of the book, I strongly recommend prioritizing a reading of Renée Fleming’s introductory contribution, “Overture: Music and Mind.” It’s beautifully written, providing a lyrical and comprehensive summary of the main ideas in the book, together with a plausible rationale for furthering scientific study of music and mind. Ironically, I found it also provided a cogent argument for my view that attempts to reduce an understanding of the powerful effects of music on our bodies, minds, and souls to merely neuroscientific explanations are certain to disappoint.
Cross-Border Collaborations
In “A Binational Journey Toward Sustainability” (Issues, Summer 2024), Christopher A. Scott, Constantino Macías Garcia, Natalia Martínez-Tagüeña, Thomas F. Thornton, and Heather Kreidler highlight the critical role of partnerships in mobilizing the knowledge, action, and resources to advance sustainability pathways for the US-Mexico border region. The authors illustrate how partnerships enable trust building, resilience, and adaptability in a region often characterized as fraught with conflict and social-ecological challenges. Yet partnerships also need support and recognition to thrive. In the binational region, the technical expertise, communication and physical infrastructure, and administrative processes can vary widely across private, civil society, and governmental organizations, and asymmetries can foster competition rather than collaboration.
To leverage the existing organizational capacities, there is a need for investment in cross-border and within-country organizational networks. Such investment could enable organizations to autonomously develop the technical capacities and deepen their relations of trust and collaboration in the contested and highly dynamic binational space. Networks of boundary-spanning institutions and organizations—literally spanning the international border, but also bridging science-policy, citizen-government, Indigenous-settler, private-public divides—are increasingly critical.
Networks of boundary-spanning institutions and organizations—literally spanning the international border, but also bridging science-policy, citizen-government, Indigenous-settler, private-public divides—are increasingly critical.
In a different region of the world, through the Accelerating Adaptation via Meso-Level Integration (ACAMI) initiative in sub-Saharan Africa, we are exploring what makes partnerships of organizations effective in enabling climate change adaptation for small-scale agricultural producers. We focus on “meso-level organizations” that individually and collectively channel material resources, knowledge, finance, and experience between macro-level actors (national governments, international organizations) and local-level beneficiaries. As in the US-Mexico border region, these meso-level organizations demonstrate innovative practices, novel interventions, and valuable experiences for sustainability transformations. They also face constraints: they often lack the time, flexibility, and capacity to retrain and pivot to embrace emergent and evolving challenges in the way they know is most appropriate. Networks of organizations that enable cross-organization learning rarely receive funding from the international community, and are rarely prioritized in national policy efforts.
Findings from the ACAMI initiative suggest that there is a need for investments in the organizational landscape to enable existing and new partnerships to thrive as sustainability challenges evolve. Scaling success requires recognizing the role of organizational networks in producing, synthesizing, and sharing knowledge; efficiently leveraging finance; building capacity across organizations; and influencing policy. Networks can enable the specialization and expertise of some organizations to serve others, reducing the barriers to engage with the frontiers of sustainability science through state-of-the-art data analytics, leading-edge system modeling and knowledge integration, and innovative and ethical collaborative frameworks. More successful networks can strategically manage uncertainty by leveraging complementary member resources and enabling less powerful organizations to contribute. Attention to power asymmetries and alignment of multiple forms of knowledge inherent in sustainability transformations is fundamental. As organizations, their partnerships, and networks assume larger roles in the sustainability space, they also must be accountable to the communities that support them through transparent processes of monitoring and evaluation. Pursuing sustainability goals is thus just as much about complexity in the organizational landscape as it is about the social-ecological challenges of critical regions.
Hallie Eakin
Professor, School of Sustainability, College of Global Futures, Arizona State University
Luis Bojórquez-Tapia
Professor, Laboratorio Nacional de Ciencias de la Sostenibilidad, Instituto de Ecología, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City
Eric Welch
Director, Center for Science, Technology and Environmental Policy Studies, School of Public Affairs, Arizona State University
Christopher A. Scott, Constantino Macías Garcia, Natalia Martínez-Tagüeña, Thomas F. Thornton, and Heather Kreidler offer a detailed yet concise look at a unique decade-long collaboration between the two countries' academies of science, centered on the US-Mexico border region. This work is both engaging and timely, with significant implications for advancing partnerships and sustainability policy in both countries.
The authors emphasize the border region as a shared binational, multidimensional landscape of similarities and differences. They point out that addressing the challenges arising in this vast and complex region—spanning politics, economics, the environment, and more—requires “collaborative approaches that extend beyond immediate geographical boundaries and across scientific disciplines.”
As a key point along the emerging path to partnership, the authors cite the 2020 report Advancing United States-Mexico Binational Sustainability Partnerships. The report reflected an inclusive reach, involving experts from the US National Academies of Sciences, Engineering, and Medicine and the Mexican Academy of Sciences, the Mexican Academy of Engineering, and the National Academy of Medicine of Mexico. It also featured wide vision, encompassing the complexity of the border landscape within the comprehensive context of binational sustainability policy challenges.
Leadership and personal relationships are particularly valuable, perhaps even more so than formal mechanisms of collaboration.
One noteworthy point the authors raise is the role of the border as an effective venue for science diplomacy. As seen in other arenas of the US-Mexico border region, cooperation and coordination are possible where there is commitment. Leadership and personal relationships are particularly valuable, perhaps even more so than formal mechanisms of collaboration. In terms of policy science, this level of border diplomacy is exemplified by the efforts of the International Boundary and Water Commission/Comisión Internacional de Límites y Aguas, a joint organization that is responsible for applying the boundary and water treaties between the United States and Mexico and settling differences that may arise in their application.
I would further like to emphasize how valuable the efforts the authors describe have proved as hubs of knowledge and human relational capital. I have been fortunate to participate in some of the initiatives, such as the workshop in San Luis Potosí, Mexico, in May 2018. This gathering featured in-depth scientific and policy discussions focused on drylands in the border region. Equally important was the opportunity to reconnect with colleagues and friends from both countries, with whom I had developed both professional and personal relationships over several years. I am convinced that this relational aspect has been one of the significant accomplishments of the journey initiated by the academies and should be nurtured in the years to come.
Ismael Aguilar-Barajas
Professor of Economic Development
Research Associate, Water Center for Latin America and the Caribbean
Tecnológico de Monterrey, Mexico
The Lives of Lewis Thomas
I. Memory
Back in 1982, as I was preparing to enter Cornell University's medical school, I was thrilled to learn that the physician-humanist and writer Lewis Thomas (1913–1993) would give us a lecture during our first year. I had read The Lives of a Cell during high school and remembered staring at his photo on the book's dust jacket. Clad in a lab coat, Thomas leaned toward the photographer, peering out from behind his tortoiseshell glasses at the future. I imagined he was moving forward. Inspired, I brought my copy of his new memoir, The Youngest Science, to the lecture with the intention of getting his autograph.
The early ’80s were a confident time, when many people still believed that medicine could cure all ills. With the advent of ever more potent antibiotics and the rise of molecular medicine after World War II, there seemed to be no limit to medicine’s mid-century promise. In those days, before the extent of the AIDS epidemic was fully understood—and long before COVID-19—people spoke seriously about the end of infectious disease as a specialty.
Standing at the podium in his white coat, facing the crowd of first-year medical students, Thomas said that if he had his druthers, he would spray the room with the influenza virus. Most of us had not been sick, and to be a doctor, he said, you needed empathy; for that you had to have experienced illness. He reminded us that we had all chosen to come to the school that day. Then, pointing up from the lecture hall toward the hospital, he said that none of the patients chose to be there—our future workplace was ultimately a place of sickness.
That lesson helped shape my career as a physician and bioethicist, as well as my sense of the fragility of life and the obligations of care. I thought often of Thomas’s warning when the COVID pandemic was at its worst. When it was my turn to lecture Cornell’s first-year medical students on Zoom during the pandemic, there was no need to threaten to spray the room with influenza.
To be a doctor, he said, you needed empathy; for that you had to have experienced illness. He reminded us that we had all chosen to come to the school that day. Then, pointing up from the lecture hall toward the hospital, he said that none of the patients chose to be there—our future workplace was ultimately a place of sickness.
But as I started to research a biography of Thomas, I wondered whether my recall of his lecture might have been a false memory. Was it a conflation of recollections? Then the peculiarities of his language and his use of that word, "druthers," crept in. It wasn't something you heard often. And when I interviewed his daughter, the writer Abigail Thomas, in 2023, she used the same word. Druthers.
Further confirmation came in his papers: 160 boxes nestled away in Princeton University's Firestone Library. I was in the archives this past spring, three levels down, at a green leather desk under a skylight that turns the reading room into a scholarly greenhouse. There, I found a typed manuscript called "Getting a Grip on the Grippe," which eventually appeared in the January 1982 issue of Discover magazine.
An ode to the genetic cleverness of viruses, the essay is written by a secret admirer who mournfully anticipates pathogens’ demise at the hand of molecular medicine. Then comes the unexpected and paradoxical pivot—the literary device that makes Thomas’s essays so thrilling to read—where he asks, “Do we really want to get rid of the grippe?” Just a half century earlier, before the dawn of antibiotics, he explains, “children came to understand something about the hazards of living, and about mortality, at first hand, part of growing up. It was an aspect of everyday experience.” Their friends and neighbors might have died of septicemia, meningitis, or lobar pneumonia, and they lived under the specter of infectious diseases akin to how “cancer is feared today.” Insulated from daily encounters with illness and death by antibiotics and modern medicine, he argues, we lost the sense of frailty that built empathy among us as a society and served as glue between doctors and patients.
Thomas worried that a new generation of young doctors had "no idea what it is to be ill." And then he quipped, "it might be a good idea, several times in the academic year, to release an aerosol of grippe virus into the lecture hall during, say, the course in molecular biochemistry." He suggested that medical students could "volunteer to keep working through the days and nights of the illness, not taking to their beds at all, in order to glimpse what it is like not to be cared for, a very handy kind of knowledge for any doctor." It is the kind of moral message medical students rarely hear nowadays.
Now that my own memories have been confirmed, I’ve had to reckon with the fact that the memory of Thomas’s contributions to medicine and the broader humanities has largely been lost. When I tell doctors my age that I am working on his biography, it is not uncommon for them to tell me that they decided to go to medical school after reading The Lives of a Cell. But when I talk to a younger generation, I’m met with blank stares. During my time researching in Princeton’s archives, I also taught a bioethics class with many molecular biology majors. No one in my seminar recognized his name. When I asked where their department was housed, a student answered “Thomas Lab,” and there was a murmur—Oh, you’re writing a book about that guy.
Lewis Thomas. Photograph by Bernard Gotfryd, courtesy the Library of Congress Prints and Photographs Division.
As his biographer and as a doctor, I don’t want Thomas to be forgotten. He was not only a writer; he was a leading scientist in the mid-century shift to molecular medicine, and he combined the two with a moral prescience that is worth revisiting. A bridge to the pre-antibiotic era, he was able to embrace the progress he witnessed with both enthusiasm and skepticism. That subtle mix feels jarring when juxtaposed against the heroic myths about medicine’s rise during the twentieth century that are told today. But it is also revealing.
As an experimental biologist, Thomas the scientist was impossible to pigeonhole. Trained as a neurologist, he chaired departments of pediatrics, pathology, and medicine, and he advanced the idea of “immune surveillance” as a defense against cancer in 1959. He was dean at New York University and Yale University medical schools and later served as president and chancellor of the Memorial Sloan Kettering Cancer Center. For a decade he wrote a regular column in the New England Journal of Medicine called “Notes of a Biology Watcher,” which was eventually collected into The Lives of a Cell and The Medusa and the Snail, both National Book Award winners. When he won the Albert Lasker Public Service Award in 1989, he was celebrated as the “poet laureate of twentieth-century medicine.”
Thomas might appear to be a one-man bridge straddling scientist and novelist C. P. Snow’s two-culture divide, because he was equally comfortable in the sciences and the humanities. But he actually disagreed with Snow’s distinction between the culture of the scientific establishment and that of the humanities. For Thomas, there was just one unified culture. I think it was that stance that made him a singular narrator of the rise of US science in the postwar period. Today that story is often told in hindsight, as if inevitable: a triumphant dotted line from penicillin to the atomic bomb to the polio vaccine to the rise of genomics and mRNA vaccines. But spending time in Thomas’s archives has given me access to his poetic exploration of the murkier parts of that journey: the dread; the moral uncertainty; and the biologist’s need to understand coupled with the doctor’s obligation to heal. Thomas understood intimately that this was not just a tale about how science was advancing—it was also a story about who we were becoming.
II. Poetics
Being a scientist and a poet were vitally intertwined for Thomas, not only for expression, as you might expect, but also for inspiration. "We must rely on our scientists to help us find our way through the near distance, but for the longer stretch of the future we are dependent on the poet," Thomas wrote in an unpublished essay. Although we think of observation as critical to science, there is much to learn from the poet, who can teach us "to question more closely, and listen more carefully."
As his biographer and as a doctor, I don’t want Thomas to be forgotten. He was not only a writer; he was a leading scientist in the mid-century shift to molecular medicine, and he combined the two with a moral prescience that is worth revisiting.
For Thomas, poetry was an ethereal laboratory residing in the imagination. There, he opined, “the skill consists in [the poet’s] capacity to decide quickly which things to retain, which to eject. He becomes an equivalent of a scientist, in the act of examining and sorting the things popping in, finding the marks of remote similarity, points of distant relationship, tiny irregularities that indicate that this one is really the same as that one over there only more important.” Thomas noted that “a poet is, after all, a sort of scientist in which nothing is measurable. He lives with data that cannot be numbered, and his experiment can only be done once. The information on a poem is, by definition, not reproducible. His pilot runs involve a recognition of things that pop into his head.”
And then he used a metaphor to transform the poetic into the physical: “Gauging the fit, [the poet] will miraculously place parts of the universe together in geometric shapes that are as beautiful and balanced as crystal.”
III. A fine war
In unpublished letters the Thomas family graciously shared with me, I have been able to read a more cautious history of science’s many leaps. In 1938, for example, Thomas was already worried about the coming war. As an intern just out of medical school, he wrote to his future spouse, Beryl, “There is going to be a fine war and that will be the end of bipeds.” It’s a curious phrase that puts geopolitics into an evolutionary context. But this particular biological spin on a historical premonition reveals a preoccupation with annihilation that would span his lifetime.
Thomas’s concerns continued during World War II, when he ended up with a front-row seat at the dawn of the atomic age. He was deployed in the Pacific as a Navy doctor and tasked with hunting down and studying tropical diseases that could fell the troops. In 1942, the lab of prominent Rockefeller Institute virologist Tom Rivers, where Thomas worked, was inducted en masse into the Navy and rechristened the Naval Medical Research Unit 2 (NAMRU-2). The lab was deployed to the Pacific in 1944, working on Guam and later Okinawa.
During his time in the Navy, Thomas kept a nightly ritual of writing to Beryl, whom he married in 1941. The correspondence, often written under the single electric bulb in the tent he shared with other researchers, was witty, romantic, and literary. When Thomas got mail from Beryl, who lived with their two daughters across the street from New York's Rockefeller Institute, he would walk to a nearby clearing, sit on the stump of a felled coconut tree, and read. He was kept company by NAMRU-2 sheep who had also made the voyage to the Pacific. As he passed the small flock, he would whistle Beethoven's Sixth—which the sheep preferred over Brahms, he wrote Beryl, because it was the Pastoral.
“There is going to be a fine war and that will be the end of bipeds.”
In one letter Beryl shared a question from their young daughter Abby: “Does God wear a watch?” Thomas replied, “It’s a very important question,” one “that keeps coming back into my head, the same way it does yours, and stopping all my thoughts dead in their tracks.” Abby’s query had tapped into Thomas’s preoccupation with temporality, a theme which came to play a key role in his emerging cosmology—where biological evolution is bent by future technological innovation.
These ideas would shape his postwar life as a public intellectual and commentator on mid-century medicine, which he fittingly characterized as The Youngest Science in his 1983 memoir. Worrying about war and the future of humanity, he suggested biological adjustment as a theory for "why people go through all this." Overall, he had faith in our species, if not the individuals who comprise the collective. Time was an elixir—and a reason for optimism. He described war and cataclysm as biological adjustment: "the kind of thing that all species have to go through once in a while, every million generations or so, in times of crisis which they have created for themselves, and because they are a species and alive they always work it out one way or another, and if they are [a] dominant species with good nervous systems they usually work it out well in spite of themselves and in ways that they can't possibly have foreseen, and sometimes it happens overnight." The idea of biological adjustment helped Thomas contemplate the deeper forces of nature that could be at play in human crises, giving him hope for the future despite his forebodings. "It may take a long time but I have great faith in biology."
IV. The damned bombs
The atomic bombing of Hiroshima on August 6, 1945, would put Thomas’s optimism to the ultimate test. By then he was stationed on Okinawa investigating an outbreak of Japanese B encephalitis and watching the influx of troops gathering to invade the home islands of Japan.
He was weary of war, the jungle, and jeeps. But on August 7 he wrote, “I think it will be finished very soon now. Now I really do think so. The radio just gave the first news about the new bombs and I think now it will be very soon.” And he added, “Even without the damned bombs I think it will be soon.”
Unlike some GIs who saw the bomb as a ticket home, Thomas was more circumspect about the destructive potential of the new weapons than excited about the war’s conclusion: “We are hearing rumors all over the place about that damned bomb—I remember an article in Harper’s in 1939 about that stuff, filled with the gloomiest of predictions about what would happen when it got loose—my God what a business—I do wish the war would end right now.”
That Harper's article (actually published in 1940) heralded the potential of atomic energy but also suggested the explosive power of its intentional misuse. The essay made a profound impression on Thomas, so much so that in 1941, he published his poem "Millennium" in the Atlantic:
It will be soft, the sound that we shall hear
When we have reached the end of time and light.
A quiet, final noise within the ear
Before we are returned into the night.

A sound for each to recognize and fear
In one enormous moment, as he grieves—
A sound of rustling, dry and very near,
A sudden fluttering of all the leaves.

It will be heard in all the open air
Above the fading rumble of the guns,
And we shall stand uneasily and stare,
The finally forsaken, lonely ones.

From all the distant secret places then
A little breeze will shift across the sky,
When all the earth at last is free of men
And settles, with a vast and easy sigh.
Long before the United States dropped the atomic bombs, Thomas had been using poetry to think about what they would mean. He admitted his confusion, perhaps ambivalence, about how the war was coming to an end. Unlike many of his peers, he understood that it also meant the end of an era in human history. Nothing would ever be the same, and he shared this with Beryl: "It's hard to think about clearly."
The very week US forces dropped the bomb on Hiroshima, Thomas identified the virus that caused Japanese B encephalitis. But instead of celebrating, he lamented, “The only reaction I can feel is what the hell am I doing working on encephalitis and getting excited along with everybody else about it when a thing like that bomb is loose.”
“We are hearing rumors all over the place about that damned bomb—I remember an article in Harper’s in 1939 about that stuff, filled with the gloomiest of predictions about what would happen when it got loose—my God what a business—I do wish the war would end right now.”
NAMRU-2’s accomplishments were a taste of the strides medicine would make in the second half of the twentieth century, but they were barely a footnote to the power of the atom. The medicine Thomas and his mates sought to advance was powerless against forces like the weapon that was dropped on Hiroshima. Two days afterward he asked Beryl, “What will cure a bomb sweetie?? What will? I don’t like it.”
Yet, invoking his theory of biological adjustment, he turned more hopeful. “Maybe if it’s all true, it will turn out to be a good thing,” he wrote. “It ought to mean the permanent end of all wars, if we’ve got even a grain of sense left after this one.”
He was torn by his desire to finally go home and wondered if the bombs would provide the means by which that might happen. “I want to come home now. Maybe this will end the war in a few days. Everybody is talking as though it will.” Then, pleading for wisdom or deliverance, it’s hard to tell which, he implored, “God. Come on God, wherever you are. Come on come on wherever you are….” And when he heard another rumor that same day that the Russians had declared war on Japan, he confessed, “God what a hopeful day.”
Thomas was an avowed secularist, and this is the only time in the letters that he invokes a deity. He was struggling with the bomb and perhaps his faith, wondering out loud about his species and the destructive forces that lurk within. These fears haunted him as a young man, and they would follow him decades hence, when he advocated for nuclear disarmament.
V. The quiet, final noise
Later in life, Thomas thought often of the "forsaken, lonely ones" whom he had conjured in his poem. During the Johnson and Nixon administrations, he served on the President's Science Advisory Committee (PSAC), where he warned of the "immediate hazard of nuclear warfare." He was so concerned about atomic weapons that he thought his seat, as a biologist, would be better used by scientists who understood the peril at hand. In his memoir, he wrote, "If it were up to me, I would leave off the medical people and the biologists, or perhaps have them there as a small minority, and I would load them up with the best physicists in the United States."
Thomas became an active and vocal opponent of nuclear arms, collaborating with International Physicians for the Prevention of Nuclear War and corresponding with arms control negotiators such as Paul Warnke. In his 1984 book, Late Night Thoughts on Listening to Mahler's Ninth Symphony, he worried about nuclear annihilation. Invoking the "quiet, final noise within the ear" of a postapocalyptic earth he depicted in "Millennium," he wrote that Mahler's fourth movement was "as close as music can come to expressing silence itself." He clearly loved the symphony, particularly a passage at the end when fading violins "are edged aside for a few bars by the cellos." He wrote, "I used to hear this as a wonderful few seconds of encouragement." The cello section signaled rejuvenation to him: "We'll be back, we're still here, keep going, keep going."
Thomas used music as a muse for the deep despair that science, and even poetry, could not express.
But then Thomas noted that he had a pamphlet on his desk talking about the basing of the multiple-warhead MX missile, with each warhead capable of creating “artificial suns able to vaporize a hundred Hiroshimas.” Reflecting on such destructive power changed how he heard the music, as he put it, “making the Mahler into a hideous noise close to killing me.” The cellos, once harbingers of hope, now evoked a missile launch and “the opening of all the hatches and the instant before ignition.” Thomas used music as a muse for the deep despair that science, and even poetry, could not express.
VI. Nuclear winter
In 1986, in the foreword to the Institute of Medicine and National Academy of Sciences (NAS) report The Medical Implications of Nuclear War, Thomas cautioned that "if we go on this way, unthinking, putting it out of our minds," civilization would be "gone without a trace. Not even a thin layer of fossils left of us, no trace, no memory." Elsewhere he put these worries in the context of personal history, imagining what it would be like to be 16 again and contemplating a future that might not happen. Writing of the generations that came before him and his Welsh forebears inscribed in the family Bible, Thomas confessed that when he was a teen, it "never crossed my mind to worry about the twenty-first century; it was just there, given, somewhere in the future." But now humanity had the ability to end human time.
As a physician who had overseen hospitals and medical schools and as a communicator of science, Thomas sought to dispel any hope that the medical infrastructure could “cure a bomb.” There was no such thing as medical salvation from nuclear confrontation. After heralding medicine’s collective progress with bone marrow transplantation and burn and trauma surgery, Thomas got real. There would be no point in caring for “men, women, and children with empty bone marrows and vaporized skin.” Hospitals would be “subject to instant combustion” and, if unscathed, would only be able to “salvage at their intact best” hundreds—not hundreds of thousands—of victims. He bluntly stated the futility: “As the saying goes, forget it.”
When the New York Times reported this past May that Russia was holding drills on the use of tactical nuclear weapons in its war with Ukraine, I thought of Thomas. In a 1982 essay that could have run on the Times' op-ed pages today, he wrote, "even the neatest and cleanest of nuclear weapons, launched from either side, is not warfare in any familiar sense of the term.… Once begun, there will be no pieces to pick up, no social system to regroup and reorganize, nothing to command."
Thomas’s musings on humans as a species went beyond political science or sociology, heralding a speculative biology that prompts us to more closely examine our collective actions.
Thomas spent the last decades of his life asking the American public to think the unthinkable. In the foreword to the NAS report, he implored journalists to overcome their misgivings about publishing articles that people would rather not read. “I raise my voice, yell sometimes, what the hell are newspapers for? You’re supposed to provide information, real news, and this is … the news of the end of the world, print more of it for God’s sake before it’s too late.” He continued, “Put the nuclear winter up there on the front page every day, give it the blackest headlines you’ve got, make it the main story, run it and run it.”
VII. Coda
With poetic vision and scientific precision, Thomas anticipated the complexity of our time through the lens of his own. While he heralded the hope of molecular medicine, he also lamented the dawn of the nuclear age. His New York Times obituary described him as “evolution’s most accomplished prose stylist,” a tribute that honored his writing as well as his thinking about our place in nature. Thomas’s musings on humans as a species went beyond political science or sociology, heralding a speculative biology that prompts us to more closely examine our collective actions.
Thomas was willing to explore what he didn’t know, what he couldn’t know, inspiring new ways of knowing, experimenting with ideas and language to address complexity—in science as in society. His life and work are worth remembering and revisiting as a guide to our own.
Who Is Responsible for AI Copyright Infringement?
Twenty-one-year-old college student Shane hopes to write a song for his boyfriend. In the past, Shane would have had to wait for inspiration to strike, but now he can use generative artificial intelligence to get a head start. Shane decides to use Anthropic’s AI chat system, Claude, to write the lyrics. Claude dutifully complies and creates the words to a love song. Shane, happy with the result, adds notes, rhythm, tempo, and dynamics. He sings the song and his boyfriend loves it. Shane even decides to post a recording to YouTube, where it garners 100,000 views.
But Shane did not realize that this song’s lyrics are similar to those of “Love Story,” Taylor Swift’s hit 2008 song. Shane must now contend with copyright law, which protects original creative expression such as music. Copyright grants the rights owner the exclusive rights to reproduce, perform, and create derivatives of the copyrighted work, among other things. If others take such actions without permission, they can be liable for damages up to $150,000. So Shane could be on the hook for tens of thousands of dollars for copying Swift’s song.
Copyright law has surged into the news in the past few years as one of the most important legal challenges for generative AI tools like Claude—not for the output of these tools but for how they are trained. Over two dozen pending court cases grapple with whether it is lawful to train generative AI systems on copyrighted works without compensating or getting permission from the creators. Answers to this question will shape a burgeoning AI industry that is predicted to be worth $1.3 trillion by 2032.
Yet there is another important question that few have asked: Who should be liable when a generative AI system creates a copyright-infringing output? Should the user be on the hook? Shane only requested a generic love song, not “Love Story” or one like it. What about the provider of the AI tool? It is not in Anthropic’s interest for Claude to produce an infringing song, and the company likely took measures to avoid infringing outputs.
I propose that neither option is desirable. Instead, the AI system itself—as a fictitious legal person—should be the copyright infringer. Any human liability for infringement should be determined on a more nuanced secondary liability basis. While this is a viable approach under copyright doctrine, it also suggests that we can more broadly reimagine the role of AI as the perpetrator to respond to the unique legal conundrums posed by increasingly autonomous systems.
Who caused the machine to infringe?
Some AI-generated outputs will undoubtedly infringe copyright. Scholars including Matthew Sag, James Grimmelmann, and Timothy Lee have already shown how generative AI systems can produce images that infringe on copyrighted content such as Peanuts’ cartoon beagle Snoopy, Nintendo’s Italian plumber hero Mario, and Banksy’s iconic Girl with Balloon mural. In March, the Guangzhou Internet Court in China issued the first ruling involving copyright infringement and generative AI, finding that the output at issue infringed the copyright for Japanese superhero Ultraman. (In that case, the court found the AI provider liable for the infringing output.)
The AI system itself—as a fictitious legal person—should be the copyright infringer. Any human liability for infringement should be determined on a more nuanced secondary liability basis.
The question of who should be liable is especially challenging because AI systems are black boxes. Developers and users are not able to precisely predict or explain why a particular output occurs. The US Copyright Office has rejected copyright registrations for AI-generated works on the basis that, due to the black box, the AI system—not the developer or user—is creating the specific expressive output. The US Patent and Trademark Office has similarly rejected the notion that an AI system can be an inventor for purposes of patent law. The unpredictability and semiautonomous nature of AI has led many commentators to suggest that a paradigm shift in the law is needed for the AI era.
This is not the first time copyright law has encountered difficult questions of liability in the face of emerging technologies. From the printing press to search engines, complex machines have posed new questions about who should be liable for resulting infringements. Historically, though, the infringer was obvious. Whether it was the painter of the art, the author of the book, or the copier of the music, there was a clear line from infringer to infringement. In these corporeal infringement cases, the identity of the infringer was a foregone conclusion because the infringement occurred by the person's own hand.
As machines became more complex, however, multiple parties became involved in the copying of copyrighted works, which defied this straightforward understanding of liability. For example, posting infringing content online involved the user who posted the content, their internet service provider, the website to which it was posted, and the internet service provider for that website. A remote digital video recording device for taping television programs required both the system that facilitated the recording and the user who selected the specific programming for later viewing. In these and other mechanical infringement cases, courts introduced a new term to copyright law: volition. Volition asks who willed or caused the infringement to occur—or, as one court put it, “who actually presses the button”? Although volition was always part of copyright law in the background, courts had to bring it to the fore to address the complications of mechanical infringement. With the aid of volition, courts in both these examples determined that the user was the infringer because the user—not the service provider—caused the infringement to occur.
Users or developers?
I propose that infringement carried out through generative AI—which I term artificial infringement—is merely the third stage in this liability evolution. Courts can use the volition requirement, which helped them solve the liability challenges of mechanical infringement, to determine who should be held liable for AI-generated infringements as well.
There are scenarios where it seems fairly likely the user or the developer caused the AI system to infringe. Users may have specifically tried to make the AI system generate infringing outputs or have even engaged in adversarial machine learning aimed at circumventing the system’s safeguards against infringement. It is also possible that the developer of the AI system offered a system that is almost guaranteed to infringe most of the time, either due to malintent or poor design.
But in most artificial infringement cases, it will be difficult to show that a human acted with volition. The creation and launch of an AI system involves many individuals. Most providers of AI systems actively seek to limit instances of infringing outputs. This is why, for example, you cannot prompt some generative AI systems to create an output in the style of well-known artists such as Pablo Picasso or Salvador Dalí. Users also generally do not intend infringing works to be generated or, like Shane, are unaware when outputs are infringing.
This is not the first time copyright law has encountered difficult questions of liability in the face of emerging technologies.
We could ignore this complex reality and make the AI system provider or user automatically liable for all infringements. But this is undesirable for several reasons. First, such a rule would ignore legal precedent that, without some causation or intent, providers are generally not liable merely for offering a product or automatic service that has substantial noninfringing uses. Second, the Copyright Office has already refused to register copyrights in AI-generated outputs because the developer and user do not have sufficient control over the expression, which also forms the basis for the infringement. Third, imposing liability on developers for all resulting infringements despite their best intentions could deter new market entrants, inhibiting robust competition and solidifying the positions of wealthy market leaders such as OpenAI and Meta. Finally, absolute liability would not necessarily deter infringement because of the lack of precise control over the AI black box.
AI systems as copyright infringers
A proper volition analysis instead suggests a novel solution: courts should consider the AI system, as an artificial legal person, to be the infringer. The AI system is what determines the expression in a particular output. While the developer provides the infrastructure and the user the prompt, the actual determination of whether an output will contain infringing content is made inside the AI black box. The AI system is not human, of course, but it is not unprecedented for the law to confer legal personhood on other nonhuman entities, including corporations and pet animals. Similarly, the law could confer legal personhood on the AI system so it can be held liable for copyright infringement. This legal fiction would make the AI system the direct copyright infringer.
Holding the AI system liable does not mean that the developer or user escapes all liability, or that copyright owners cannot recover simply because the AI system has no financial resources of its own. Developers or users can be secondarily liable for the AI system's infringement through their own actions. The law often holds one party liable for the wrongs of another. For example, employers are responsible for actions their employees carry out in their line of work. Online service providers are liable for copyright infringement when they know of a specific infringement and do not remove it.
The AI system is not human, of course, but it is not unprecedented for the law to confer legal personhood on other nonhuman entities, including corporations and pet animals.
In the AI context, the developer could be held liable if they knew about and “materially contributed” to the infringement. This could take the form of what I term a notice-and-revision test. Under this test, if the developer learns of a specific infringement problem (say, Shane’s lyrics), it would then be obligated to take remedial action to prevent similar infringements from reoccurring. Providers or users could also be on the hook if they induced or encouraged the AI system to infringe. These approaches look to whether the developer or user intended infringement to occur or be furthered, rather than imposing absolute direct liability or a type of principal-agent relationship that would also result in absolute liability for infringements.
AI liability reimagined
By reimagining the AI system as the copyright infringer, courts would not only be faithful to the law but could also have a more nuanced discussion about who else should be held responsible when generative AI infringes. This achieves the purpose of the law by deterring foreseeable, unlawful conduct. The approach would punish bad actors rather than imposing de facto liability for the mere provision or use of an AI system that also has noninfringing uses. And it has the added benefit of fitting within the historical arc of copyright law remaining flexible enough to adapt to new technologies.
While making the AI system the direct infringer is appropriate under copyright doctrine, this proposal can have broader policy implications for thinking about legal liability in the AI era. AI offers tantalizing benefits, but at the cost of control. To realize the promise of a technology that is valuable precisely because of its increasing autonomy, courts need to consider shifting away from always imposing strict liability on providers or users. Providers can arguably only do so much to prevent harmful outputs ex ante, or before they happen—although tools such as information lattice learning, which attempt to map the black box, could help trace why a particular output occurs. Industry practices, including filtering certain prompts or outputs and reducing the percentage of outputs that reproduce a particular piece of training data, are important safeguards against AI-generated harms. However, a provider cannot predict and evaluate every potential user prompt and output in advance. This lack of predictability is exacerbated by the iterative nature of machine learning. Shifting the focus of law to enable ex post measures, such as notice-and-revision, may avoid imposing unduly high costs on would-be market entrants while still holding human actors liable when they are ill-intentioned.
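To make the filtering idea concrete, here is a minimal sketch of one such ex ante safeguard: an n-gram overlap check that withholds an output when it reproduces long verbatim stretches of a protected work. The function names, threshold, and corpus are illustrative assumptions, not any provider's actual pipeline; production systems rely on far more sophisticated matching.

```python
# Hypothetical sketch of ex ante output filtering. Before an AI system
# returns generated text, compare it against a corpus of protected works
# and withhold near-copies. All names and thresholds are illustrative.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, protected: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in a protected work."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(protected, n)) / len(out_grams)

def release_or_block(output: str, protected_corpus: list, threshold: float = 0.3) -> str:
    """Release the output only if no protected work overlaps too heavily."""
    for work in protected_corpus:
        if overlap_ratio(output, work) > threshold:
            return "[output withheld: resembles a protected work]"
    return output

# Example: an original lyric passes; text copied from the corpus is withheld.
corpus = ["placeholder text standing in for a protected lyric in the corpus"]
print(release_or_block("a brand new love song with entirely original words", corpus))
print(release_or_block("placeholder text standing in for a protected lyric in the corpus", corpus))
```

A filter like this illustrates both the promise and the limits described above: it can catch verbatim reproduction, but it cannot anticipate paraphrased or structurally similar outputs, which is why ex post measures such as notice-and-revision remain necessary.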
Such an equilibrium between absolute and nuanced liability may require creative lawyering as well as judging. My proposal for copyright law would require a novel application of direct liability and a refinement of secondary liability. This allows copyright law to remain flexible and adapt to AI and other emerging technologies, which will continue to evolve and strain the bounds of copyright and other areas of the law. It also puts intent front and center by punishing AI providers and users for engaging in bad-faith actions that facilitate infringement. Together, this strikes a balance between regulating and encouraging new technologies that is ultimately aimed at benefitting society.
Considering “Community”
In “Bringing Communities In, Achieving AI for All” (Issues, Summer 2024), Shobita Parthasarathy and Jared Katzman’s call for making community concerns the focus of meaningful AI development is important and timely. In particular, their emphasis on the role of universities in this effort rings true to me. In fact, I would argue that we—university educators and scholars who have access to resources and power—have a responsibility to consider what technological innovation and progress mean for how we envision our collective futures. We can leverage that responsibility to address the issue that “community” remains a vague (albeit in vogue) term. When we say community, we must specify who we mean, who is convening that community, and in what way.
A core community in the world of (higher) education and in the academic profession is the student body. Today, education is conditioned on the enrollment of students in technology systems, increasingly those that are artificial intelligence-enabled: learning analytics platforms, classroom surveillance technology, proctoring software, automated plagiarism detectors, college admissions algorithms, predictive student advising software, and more. Students are constantly surveilled and have no way of knowing about, or refusing, their enrollment in AI systems—even though the pervasiveness of large-scale data collection and predictive analytics can affect their lives far into the future. At the same time, student power in the United States has been rolled back significantly over the past three decades. As a highly affected and often marginalized community in the university setting, students are structurally and culturally excluded from having a say about AI, because they generally have no say, regardless of how well they are organized as a community.
When we say community, we must specify who we mean, who is convening that community, and in what way.
This tale holds a lesson: when we ask for communities to be brought in, we must also ask under what conditions. I profoundly agree that regulators play a crucial role in ensuring equity in AI, in all the important ways that Parthasarathy and Katzman describe. But I also note that engaging with constituents comes naturally to politicians and civil servants. It doesn't to industry leaders, including leaders in the tech industry or in education. As educators, we can help change that. In the classroom, we can work toward the socially committed AI research that the authors place at the heart of equitable AI. Adjacent to and outside the classroom, we can support our local community of students in organizing around technology and policy issues as they pertain to their immediate educational environment. And we can help them work to establish structures and processes that institutionalize student power—or community power—in the context of technology governance. One such structure is a student technology council: a small group of students that represents the student body on positions about campus technology and governance and that actively participates in administrative decisions on technology procurement and implementation. This may have a signaling effect on AI vendors whose biggest clients are large educational institutions.
We have a long way to go from idea to community-led deliberation and implementation. But thinking about student-driven governance of AI provides an opportunity to create more permanent structures around community engagement on AI that push beyond individual projects and allow us to get very concrete on community needs and desires.
Mona Sloane
Assistant Professor of Data Science and Media Studies
University of Virginia
The Hidden Engineering That Makes New York Tick
New York City is the perfect place to understand the importance of modern engineering, but the most valuable lessons won’t be found at the Empire State Building or in Central Park. To truly discover what makes modern life tick, you have to look at the unloved, uncelebrated elements of New York: its sewers, bridges, and elevators.
On this episode, host Lisa Margonelli talks to Guru Madhavan, the Norman R. Augustine Senior Scholar and senior director of programs at the National Academy of Engineering. Madhavan wrote about the history of this often-overlooked infrastructure in a trilogy of Issues essays about New York City’s history. He talks about how the invention of the elevator brake enabled the construction of skyscrapers and how the detailed “grind work” of maintenance keeps grand projects like the Bayonne Bridge functioning. He also highlights the public health and sanitation-centered vision of Egbert Viele—the nearly forgotten engineer who made New York City livable.
Lisa mentioned riding on a tugboat pushing a barge full of petroleum, but she misremembered! The repairs were occurring on the Goethals Bridge, not the Bayonne. Here’s the whole story of “A Dangerous Move” from the New York Times.
I’m Lisa Margonelli, editor-in-chief at Issues. In this episode, we are taking a field trip to New York City. Our guest and guide is Guru Madhavan, the Norman R. Augustine Senior Scholar and senior director of programs at the National Academy of Engineering. On our tour, we’re skipping Times Square and the Empire State Building and looking instead at the undercover, uncelebrated and, actually, totally fantastic history of the infrastructure of modern life: the sewers, bridges, and elevators that make life possible in New York City. Guru wrote about this history in a series of three pieces for Issues and I’m delighted to talk to him about this underground history today.
Guru, welcome!
Guru Madhavan: My pleasure.
Margonelli: Guru, we’re here to talk about a series of articles that you wrote for Issues all about the unexpected, underground history of engineering in New York City and we called it Guru Madhavan’s New York Trilogy after, I think, the Paul Auster trilogy. But what I want to talk about today is some of this undercover, unloved, uncelebrated, fantastic history of engineering in New York City. But to get there, I think we need to start with how you actually got interested in engineering. Tell me what attracted you to engineering and how did you start your career?
My entry into engineering had nothing to do with an earlier romance with some chemistry sets or robotic kits or whatever. It was very much like an arranged marriage.
Madhavan: There are two parts to how I got to engineering and how I got to New York City, so I think they’re interlinked as well. I grew up in an orthodox Hindu family in South India, and I’m the first engineer in my entire family history and the first person to board an airplane to come to the United States. And my entry into engineering had nothing to do with an earlier romance with some chemistry sets or robotic kits or whatever. It was very much like an arranged marriage. Everyone around me was studying engineering so I thought that was the way to go, and I just fell into it; it was only years later that I fell in love with engineering.
Margonelli: How did you fall in love?
Madhavan: It was during my training as an engineer. I was specializing in instrumentation and control systems—hardcore Newtonian at heart, so efficiency was my goal in everything that I did. And it was through an array of training experiences that I had as a junior engineer in many different areas: medical products and fertilizer factories, power generation, confectionery factories.
Margonelli: Wait, you worked in a candy plant?
Madhavan: Yes, I did work in a candy factory, on the toffee production line—specifically working on controlling the temperature of a cooker that blended together all the ingredients and made sure that the film, the chocolate film, that came out had a specific consistency and texture and all the properties associated with it that ultimately made the candy and the toffee enjoyable.
Margonelli: That is really a hidden engineering job because I think, when you put chocolate into your mouth, you might think that the characteristics of what you’re putting in are inherent to the chocolate or inherent to the brand or something like that, but it’s actually an engineer who made sure that the chocolate was whipped to the right consistency at the right temperature so that it melts on your tongue.
Madhavan: And my job was to bring the chocolate out of the wrapper. And so that’s, in a way, what I’m trying to do with engineering: asking what’s underneath the glossy wrapper here.
Margonelli: So, I distracted you a little bit. You were talking about how you were working in control systems and this was how you came to fall in love with engineering. So, how was it that you went from these Newtonian control systems to love?
Madhavan: The order and the rigor and the pursuit of precision and perfection was inspiring for me, and I can say this only years later because such experiences involved deeply technical mathematics. And in fact, I wasn’t a particularly good student in control theory. I barely passed it in my finals. However, that experience motivated me to work harder, to improve my worldview as a control systems engineer, which I now think has become a primary aspect of my professionalism as a systems engineer. But control theory only gets you so far. It works extremely well in a boundable environment but, of course, the world we live in is very different. And my transformation from a control systems engineer to, let’s say, a complex systems engineer happened in the United States.
So, after I finished my undergraduate degree in India, I had multiple offers to work in software companies; however, I reserved my passions to pursue medical device development, and the State University of New York gave me a wonderful scholarship that flew me to New York. I went to Stony Brook University, where I got my master’s in biomedical engineering. And after that I worked in the medical device industry in the Bay Area, and that reinforced some additional fundamentals that weren’t evident in the earlier professional experiences I gained as an engineer. Those concepts deeply relate to accountability—conducting safety-critical work that has enormous consequences, from individual health to population health.
That showcased the true and unforgiving nature of engineering.
There are aspects of control theory—the well-ordered phenomena and processes—that made sense even in the medical device industry. You just cannot mess around. These are extremely consequential technologies. And if I moved my microwave analyzer three inches from the desired location, I had to get it recertified, and the paperwork was enormous. And this is a deeply conservative industry, yet you have to work at the boundaries of innovation. So, there was this interesting paradox coming up while maintaining astonishing amounts of accountability.
So, the stuff that I signed off on, when it left the clean room, oftentimes went straight into the patient. That showcased the true and unforgiving nature of engineering. And you can see the story building up here, where I’m trained in ordered, tame systems and now we are getting to the levels of broader effects of engineering. But ultimately it all made sense when one thing led to another and my path led me to Binghamton, another campus in the State University of New York system. There we didn’t have a medical school but instead a nursing school, and that enabled me to think about the concepts of care and how you design and maintain systems for durability—and I think nurses and physicians have a completely different worldview on that subject.
Margonelli: Stop for a second and tell me how nurses and doctors have a completely different perspective on that subject.
Madhavan: I had enormous fun when I was in Binghamton because, as it turned out, I was not inhibited in any form. I basically took courses in liberal arts. I took courses in evolutionary biology and nursing and so forth, so I really got broadened in perspective as a systems engineer at Binghamton. But I think the real positive effects happened when I was in an independent study with a nurse who was trained as a mechanical engineer, and the perspectives started to blend there. How do you design systems for long-term value? You don’t build a bridge for three months; you build it for at least 30 years. And I think the nurses that I was working with, who were doing work in rural health or preventive diagnostics and so forth, really taught me a great deal about how to bring the preventive philosophy into the world of engineering.
I found myself gravitating toward these unseen elements of engineering that undergird not just our civil life, but our civilizational life.
So, this ties in well with the pieces that we are going to be discussing. Maintenance is not just about fixing; it’s about foreseeing, forecasting, and I think nurses have a really good instinct about that, which was quite different from the surgeons I worked with. The surgeons had a volume to fulfill and, of course, precise targets to address, and there are aspects of that that are deeply resonant with engineering as well. So, there are aspects of engineering that are applicable in both realms: the long-term, invisible, thankless work versus the heroic, godlike accomplishment that you see in an operating theater and so forth. So, I was able to pick up those threads simultaneously. Yet, increasingly, I found myself gravitating toward these unseen elements of engineering that undergird not just our civil life, but our civilizational life.
And I think engineers, much like nurses, are professional caretakers and care providers in that regard. It sounds very philosophical here, but how do you articulate that in a way that is immediately appreciable? That is not always easy, because people love the novel, nifty, flashy stuff compared to the kinds of things I’ve gotten interested in over the years.
Margonelli: So, I want to just stop here and switch gears a little bit to talk about the struggle that engineering has between being flashy and getting attention, and doing the work—which is something that you write about a lot. So, one of the pieces that you wrote is about Elisha Graves Otis and the piece starts in 1853 at a World’s Fair. Elisha Graves Otis has designed a way to keep elevators from falling dramatically. There were elevators at the time, but if they lost control of the cable or something, they would fall and people were crushed. So, he’s having a really hard time getting people to pay attention to his brake, so he climbs up 50 feet in the air at the World’s Fair and he has his assistant cut the cable with an axe so that everyone can see the demonstration of his brake. So, tell me, first of all, what did that then kick off? What did that enable? That flashy showmanship?
We live in a period where “demo or die” has become as important as “publish or perish” in some fields.
Madhavan: He almost killed himself to get his technology accepted—as we have seen time and again in the history of technological innovation, just because a technology is meritorious doesn’t mean it’s publicly accepted. So, this is just a continuing theme in the history of engineering. Otis finds himself in a similar situation, in the very entertaining episode that you just beautifully described. He comes up with this dramatic demo in front of people and they’re just gasping; they’re thinking that this was an act of lunacy, even suicide, to demonstrate the safety brake in the elevator system. That was extremely crucial because elevators, as a technology, were useless without the safety brake, and that’s what Otis was trying to demonstrate. And of course, the inspiration, P.T. Barnum, comes into the picture—he commissions Otis to do this act—and that takes us into the whole substance of showmanship and its value.
We live in a period where “demo or die” has become as important as “publish or perish” in some fields. So, now we are getting to the first tension… I think the trilogy as such is a work of tensions, and we just need to look at the tension here between showmanship and substance: technical excellence alone doesn’t guarantee adoption, and I think that’s what Otis was grappling with. But importantly, Barnum’s philosophy of showmanship here is that you have to create a meaningful connection to purpose and to the people who would eventually use the technology. So, that’s why this balance between flash and function is crucial, but it’s not often discussed in engineering. And that’s what I was trying to do with Otis’s story: to explicate the innovation-and-maintenance dynamic, where innovation gets the spotlight and maintenance stays invisible. Again, these are the visibility-invisibility trade-offs that we have. Both are essential for progress.
Margonelli: I think what’s interesting about the Otis story is that we wouldn’t have any skyscrapers in New York, nothing would look the way it does today, if we hadn’t had that invention of the elevator brake. But it wasn’t just the invention of the elevator brake, it was that he had to create a story or a narrative for people, as you talk about with P.T. Barnum, that got them to accept it, to want a brake, because what they wanted was other things. They didn’t really want to think about the safety.
How do we position, promote, promulgate maintenance as a source of progress? It is important to note that maintenance is not just about static preservation, it is a dynamic enabler of innovation.
Madhavan: I think there are episodes in engineering that also give a good example or demonstration of how we need to marry rhetoric to mobilize technological achievement or adoption. Edison did that with his electric display, Steve Jobs famously did it with his turtleneck theatrics, and, of course, Kennedy’s moon speech was a rousing example of that. But can we use showmanship’s potential to motivate action for promoting maintenance and infrastructure upkeep? So, the piece really centers not on the sexy but the vexy, and how do we go about thinking through that. And also, importantly, how do we position, promote, promulgate maintenance as a source of progress? It is important to note that maintenance is not just about static preservation, it is a dynamic enabler of innovation. Without maintenance, you’re basically going to have a collapse of everyday expectations.
Monday morning is going to look very different if you don’t have maintenance. I think maintenance makes life better and sustains, shall we say, the necessary continuities of history, and that’s where the maintainers are in a much more crucial position than innovators. So, how do we rebalance that cultural attention between innovation and maintenance, and how do we apply new or even old forms of successful showmanship—typically reserved for new technologies—to highlight the crucial importance of this unthanked care work?
Margonelli: So, in a sense, you are the P.T. Barnum of maintenance—telling these stories and trying to bring them to our attention.
We celebrate Ferris wheels but it’s the suitcase wheels that have actually truly changed how we move through this world.
Madhavan: Well, Barnum was a colorful character. So, in researching this piece, I went into his life history. The Otis story on its own didn’t make too much sense for me; it felt incomplete. But then, when I started reading about Barnum, that’s when the connection became apparent. Otis and Barnum really came together. But it’s the idea of glamour, right? We celebrate Ferris wheels, but it’s the suitcase wheels that have actually truly changed how we move through this world. Now, suitcase wheels are not Instagram-ready, but without suitcase wheels, none of those Instagram photos exist, quite frankly.
Margonelli: Well, I think some of us who are a little bit older can remember when suitcase wheels were terrible and they would always get stuck, and then at some point they were replaced with more efficient wheels that made running through the airport better. I want to back up a little bit and talk about magnificent acts of maintenance, which is the focus of one of your other stories. So, there was a bridge that was built in New York in the 1920s, I think: the Bayonne Bridge. And when it opened, it was a big show. It’s the kind of engineering that gets a lot of attention. It was beautiful, it was soaring, but then, decades later, it needed to be hiked up so that the really big ships could get underneath it. It needed to be entirely remodeled. This goes back to this idea of the care work that you’re talking about with nurses, of anticipating all the things that can happen. So, tell me about what went into this and then how this became a hidden wonder of the world.
Madhavan: Yeah. We called our second piece The Grind Challenges, which is to illustrate, again, a tension between the costly, complex Grand Challenges and how they are not attainable if you don’t consider the numerous constraints on the ground that enable them in the first place. So, I just changed one letter, one vowel, and it completely changes the entire meaning of the Grand Challenges framework. Grind Challenges, again, relates to the unseen, undiscussed work that is seldom a subject of mainstream conversation. They only show the splashy result but not the work that precedes it. So, how do we think about upgrading existing infrastructure? The Bayonne Bridge offered me a great case study. It also introduced me to the profound work of another forgotten engineer, Othmar Ammann, who built numerous bridges—an almost inhuman achievement.
So, the fundamental point there was to emphasize how Grind Challenges are crucial for ensuring the functionality and safety of our deeply engineered environments. So, the contrast between the Grand and the Grind engineering, I think, deserves a little bit of attention here. The Grand Challenges are the bold, frontier-pushing achievements that capture public imagination, but Grind Challenges, as I mentioned, are the essential upgrades and detailed work that keep the world functioning.
Margonelli: When it came time to remodel the Bayonne Bridge, the ultimate cost was $1.3 billion—a huge, huge undertaking, costing far more than the first bridge. Tell me about the Grind Challenges that were involved in pulling this off.
Madhavan: Here’s a little bit of historical context. The Bayonne Bridge was built by Othmar Ammann, the distinguished engineer, and it was opened in 1931 over the Kill Van Kull strait; it was the world’s longest through arch bridge at that time, connecting New Jersey and Staten Island. But in the early 2010s the Panama Canal was widened, and that enabled larger ships—what are called Post-Panamax ships—to go through it. Those ships couldn’t fit under the Bayonne Bridge’s 151-foot vertical clearance, which meant the bridge had to be modified for the bigger ships to fit in and serve the ports and so forth. So, the Port Authority of New York and New Jersey had three options.
First was to remove the bridge and replace it with a tunnel, and that was going to be about $3 billion or something over the next 15 years. Second, they could retain the original arch but rebuild it with a higher cable-stayed model—also an expensive option, and the result would be a very awkward-looking bridge. The third option was the most complicated, and that was what was worked out: to install a new road deck closer to the apex of the bridge. It was clear that it would compromise the bridge’s beauty, but it would provide the necessary vertical clearance that the ships needed. I think it was about 215 feet or so.
Unlike the Grand Challenges that push to new frontiers, the Grind Challenges are about the myriad, fine-tuned, interlocked tasks that keep our world functioning.
So, how do you raise a roadway on a bridge while keeping it open to traffic? This is real-life engineering; these are the kinds of challenges engineers tackle day in and day out. Someone naturally compared it to doing open-heart surgery on a running patient—that was the basic gist here. Building a bridge within a bridge required, as you can imagine, safe transit and crane operations, and the project also needed to minimize disruption to local residents’ lives, a constraint Ammann didn’t have back in the 1930s. In fact, his original project was delivered under budget and ahead of schedule, and he didn’t even have to go through environmental reviews or seismic requirements back then. But for the modification project, I think over 300 organizations were consulted, and they all had to sign off on this. The environmental review was 5,000 pages and cost $5 million to produce alone. Then came another sensitive aspect that needed to be addressed: the strait was actually a possible nesting location for a rare species, the peregrine falcon. So, all of these considerations needed to really go in.
But of course, add to this the classic issues of NIMBY—not-in-my-backyard opposition—or BANANA engineering: build absolutely nothing anywhere near anything. And I think these are valid social pressures, but they affect the way engineering needs to be thought through. So, I think this re-engineering project illustrates something utterly common to all projects involving infrastructure maintenance, and that’s why the concept of Grind Challenges makes enormous sense. Because unlike the Grand Challenges that push to new frontiers, the Grind Challenges are about the myriad, fine-tuned, interlocked tasks that keep our world functioning.
Margonelli: Yeah, I wanted to interrupt you here and say that, actually, while some of that redoing of the Bayonne Bridge was going on, I was riding on a tugboat that was pushing a barge full of oil underneath the Bayonne Bridge, and we radioed ahead and told them to stop their hot work—they were welding on the bridge—so that sparks wouldn’t fall down on top of us in our tugboat with the barge full of oil. And that is just one of these tiny everyday interactions that are part of these engineering grind challenges: making sure that you don’t drop a spark from your engineering project.
(Note: Lisa misremembered! The repairs were occurring on the Goethals Bridge, not the Bayonne. Here’s the whole story of “A Dangerous Move” from the New York Times.)
Madhavan: As a sidebar, this is for your first book? Your oil book?
Margonelli: Yes. Yes, yes, yes. (laughs) But actually, I thought it was so interesting. It woke me up to the levels of complexity in engineering in a big city. I wanted to return to a quote from your article that I just thought was really beautiful and encapsulated a lot of what you were getting at which is: “All the grind challenges associated with care and conservation are at the core of the bargain between engineering and society—they distill the essence of accountability, values, and humility into professional practice and ethics.”
And it’s a really interesting view. To go back to what you were talking about earlier, in some way it shows engineers as nurses—the nurses for complexity, in a way.
Madhavan: That’s well put. There’s a related story here about addressing these Grind Challenges, which reflects the practical philosophy that Ammann favored. I mean, he wanted to build an efficient bridge. But it also relates to his conflict with his mentor, Gustav Lindenthal, who favored more monumental, grand visions. So, there’s the difference between the practical and the monumental approaches, and I think that’s… I am of the practical side; most engineers are that way, and I think we need to recognize that. Why do we need a rebranding exercise to say engineering is altogether another thing when it is actually doing far more profound work? And I think we need to recognize engineering’s daily diligence and duty rather than say that engineering is about disruptive innovation, when very few engineers are actually involved in that.
And if you look at the value engineering provides to society—and there was a 2019 paper on the subject which beautifully captures it all—innovation through research and development is just one of the 14 things engineers are engaged in, in service to society. Yet that becomes a dominant theme in how we articulate and promote engineering in society. Of course, it’s exciting; we are all captivated by that. But if we are not showcasing what true engineering is and how it should work and how it does work, then we are not doing the work properly, and I think there are aspects of showmanship that need to be applied to this dull, diligent work that keeps the world running.
The current incentive systems, often driven by capitalism and corporate priorities, don’t properly value this work.
Margonelli: So, in a sense, at the core of the New York trilogy is a vision of how we kind of “P.T. Barnum” the care aspects of engineering and use that as a way to affirm engineering’s responsibilities to society.
Madhavan: Correct. And unfortunately, the current incentive systems, often driven by capitalism and corporate priorities, don’t properly value this work. Even companies that pride themselves as disruptive innovators invest a vast fraction—about 60% of their investments—in maintenance to enable their disruptive tendencies.
Margonelli: So, the third in the trilogy really looks at a guy named Egbert Viele and he came to New York and he made this amazing map. Tell me about Viele’s map.
Madhavan: So, Egbert “vee-el-y” or “veel”—I don’t know how to pronounce it, so let’s just go with Viele (“veel”) for this purpose here. He was a 19th century engineer who played a significant role in shaping New York City’s infrastructure. Again, I had not encountered him before this piece. And I think that’s another feature of the three pieces, because I knew nothing about the individuals I ended up writing about and pulling out the deeper messages from. But despite his contributions, Viele is largely unknown outside of urban planning circles.
Margonelli: You know, everybody knows his archrival.
Madhavan: Frederick Law Olmsted, yes. The difference there was that Viele was a strong advocate for sanitation and public health—again, the boring parts of society and engineering. He was appointed as Central Park’s first chief engineer, and he sketched out designs for the city in a brilliant way by emphasizing drainage and practicality, with the goal of creating a healthy and functional city that was deliverable and maintainable under budget. Now, that showcases the classic engineering problem. However, Olmsted—the celebrity here, with his political connections and charisma—eventually took over the project, and he ended up advancing a more picturesque, naturalistic park system, which eventually prevailed and which we now enjoy. But a lot of people argue, and my piece argues, that Viele’s focus on sanitation and drainage actually laid the foundation for a healthy and functional park and the city that contains it.
So, here comes another tension. You’ve got Viele’s practical and forward-thinking but utterly boring, unsexy approach, and then you’ve got Olmsted’s idealistic, aesthetically driven vision. Now, how do you consolidate those viewpoints? Viele’s contribution, although less celebrated, was ultimately more impactful for the long-term development of the New York City that we now take for granted. So, I wanted to pull out this point of how we go about recognizing and valuing the work of engineers like Viele, whose contributions to public health and welfare are overlooked time and again. So, again, the tension carries across these pieces.
Margonelli: Yeah. So, I just wanted to mention, about Viele: he created this map of every little swamp and stream and pigsty all across the island of Manhattan, and people still use it. When people have a damp cellar in New York, they still go and find this map—which I guess is as long as a Buick, back when Buicks were long—and you can find where the streams might be running through your own basement. So, people are still consulting it, and it’s an interesting thing because he created a document that enabled the city to grow, whereas Olmsted stopped time in Central Park. Whenever you enter Central Park, you’re in the time of Olmsted and those visions and you’re in the bucolic sheep meadow.
Madhavan: Viele understood the connection between topography, drainage, and public health. That was revolutionary for that period, and it enabled him to emphasize practical solutions, unlike Olmsted, to drive urban health improvements and to set the foundation for a dynamic, evolving, dense city that could support modern infrastructure—which is great, which is better. But these are political questions that we need to wrestle with in engineering and in society, so of course we get down to, again, the politics of recognition. And as you noted, Viele’s map is still actively used by New York’s urban planners and engineers. And I think, in a way, it was far more comprehensive as an urban planning tool than Olmsted’s vision for a bucolic scenery in the middle of Manhattan. And I think the modern New York that we now experience and engage with more closely resembles Viele’s version than Olmsted’s.
If we want a better world, we actually need to change some of those stories that we tell about engineers, about the world around us, about how we envision this better world and what kind of care we need for it.
Margonelli: Yeah. I think that’s interesting. And then, as you say, the politics of prestige is also a really important question for those of us who tell stories about engineering—for journalists, for people who run academic programs, for everyone—to reconsider. I think one of the things that you wrote that I found impressive and interesting was that, in engineering, the status accrued by high prestige may end up depriving society of visionaries who see the potential in building sewers and draining swamps to create better lives for all. And that, if we want a better world, we actually need to change some of those stories that we tell about engineers, about the world around us, about how we envision this better world and what kind of care we need for it.
Madhavan: Not only was Egbert Viele’s map colorful, his personality was colorful too. He was bitter, and his rivalry with Olmsted was just evident. It’s a fascinating story that made itself available to me. And so, Viele was just unhappy, I think, for most of his life that he didn’t get the recognition that he deserved. So, what does he do? He builds himself an elaborate tomb, the largest in the West Point Cemetery, to make his presence known. What a character. And I think the desire and the necessity to be recognized in society is such a human thing, and I think this story brings it out clearly in an engineering sense.
Margonelli: If this conversation has inspired you to learn more about the underappreciated elements of engineering, check out our show notes to find links to Guru Madhavan’s New York Trilogy and more of his work.
Please subscribe to The Ongoing Transformation wherever you get your podcasts and write to us at podcast@issues.org. Thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. I’m Lisa Margonelli, editor-in-chief at Issues. Join us on December 3rd for an interview with Natalie Aviles about scientific breakthroughs at the National Cancer Institute and how bureaucratic organizations can nurture innovation.
A Vision for Centering Workers in Technology Development
Illustration by Shonagh Rae.
Artificial intelligence has been hyped in the media and in workplaces, especially since OpenAI released ChatGPT in November 2022. In the workplace, AI systems can be used for hiring, disciplining, and firing workers, as well as task assignments, scheduling, and evaluations. AI can also accelerate existing trends in automation and robotics. A 2023 US Census Bureau analysis of 2019 data showed that nearly 30% of all workers, and more than 50% of manufacturing workers, are exposed to technologies that automate tasks. This level of exposure has fed widespread anxiety: 70% of workers are worried about job displacement by AI, according to 2023 polling released by the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), a federation of 60 unions representing more than 12.5 million working people.
New technology can solve problems or worsen them. Choices around how technology is used determine whether it augments or disrupts workplaces and other social arrangements. AI, machine learning, robotics, and automation in the workplace can exacerbate inequality at the office or the factory and widen economic and political disparities in society more broadly. Technological innovation is shaped by the incentives and policy choices of the public research and development ecosystem. Workers should have a say in the choices that determine how technology affects their workplaces—and the world.
The public research and development system that drives technological innovation has so far failed to include a crucial group of experts and end users: the workers.
The public research and development system that drives technological innovation has so far failed to include a crucial group of experts and end users: the workers. Leaving these important stakeholders out of the innovation process can harm workers, increase inequality, widen economic disparities, and inhibit development and acceptance of technology. Workers’ voices are a crucial resource for making innovative technologies trusted and effective, so their full benefits can be realized for society. And, of course, workers are taxpayers who underwrite the nation’s investments in research and development of new technologies.
Publicly funded R&D of AI and automation must include partnerships with workers. Worker-centered R&D, an approach that consults workers about their needs and experiences through their unions, can give labor a voice in technological design, development, and deployment. Workers are experts in what they do and what they need, and their perspectives could improve AI technology in many dimensions, from efficiency to equity to safety to worker well-being. Unions are involved in partnerships across multiple sectors that could fundamentally change the trajectory of AI technology. Properly scaled, these strategies are key to ensuring that the benefits of technology are broadly shared and outweigh the harms of innovation. Smart labor-management partnerships and changes in public research investment policies can build upon this momentum.
Considering Worker Needs
Taxpayer-funded R&D is essential to the innovation ecosystem. This funding, an estimated $200 billion in 2022—including $1.75 billion on nondefense AI R&D—takes many forms, including grants, loans, government contracts, postdoctoral fellowships, and public-private partnerships. However, even though workers underwrite the system with their tax dollars, they are often left out of the research it funds. Failing to center the process and outcomes of innovation on the people who fund it can have negative consequences.
History has shown that careless implementation of automation can eliminate jobs and de-skill occupations. For example, hospitals have used remote monitoring equipment to replace skilled nurses with less-skilled workers. Implementing technology without worker input can also reduce autonomy and job satisfaction. A study of industrial robots across 14 industries and spanning 16 years found that workers who had a degree of control over robots expressed a greater sense of agency and competence in their jobs and felt more connection to their coworkers than those with less operational roles. Furthermore, implementing workplace technology without centering workers can erode occupational health and safety, disempower and undermine the economic stability of workers, exacerbate economic and racial inequality, and even place the public at risk.
Technological advancements in AI without appropriate policy guardrails could lead to significant job loss, much as misguided free trade policies have over the past 40 years. But progress and innovation don’t have to be at the expense of workers or their communities, accruing benefits to a narrow portion of society. Instead, a new path can be forged in which shifts and disruptions are navigated more inclusively, and the benefits enjoyed more broadly.
Involving labor unions in research and development can also help to ensure that society at large benefits from technological shifts and disruptions.
History shows that involving workers can positively shape how technological change unfolds in an industry. The telephone industry is a prime example. In the 1940s, telephone operators handwrote the billing records for calls. However, when telephone companies introduced automatic billing using punch cards, telephone operators were not displaced. Unions and the industry trained workers to operate new technology—early computers—allowing them to benefit from technological shifts in the workplace.
Involving labor unions in research and development can also help to ensure that society at large benefits from technological shifts and disruptions. In the process of early electrification, unions were instrumental in making electrical infrastructure significantly safer and more efficient. In particular, the International Brotherhood of Electrical Workers played a significant role in shaping electrification policies and standards in the Tennessee Valley Authority.
Benefits of Worker-Centered R&D
Although workers are often best positioned to identify potential risks, practical limitations, and unintended consequences of a technology in real workplace settings, today a majority of workers do not have as much say as they’d like in new technology on the job, leading to what has been called a “voice gap.” However, tested methods are available for including workers’ perspectives during AI development and implementation.
One path is through partnerships between university researchers and unions. This type of collaborative research seeks to better understand how emerging technologies can improve efficiency and job quality at the same time. For example, a collaboration between Carnegie Mellon University, the AFL-CIO Tech Institute, the Transport Workers Union, and the Amalgamated Transit Union is studying bus driving and automation. Although autonomous shuttle and robotaxi companies have announced ambitious plans to replace human operators, today these systems still rely on remote human intervention. From 2021 to 2023, in the United States and Canada, bus drivers shared their expertise with policy and engineering professors. These drivers stressed that operating buses requires adapting to quickly changing circumstances on the streets—including traffic, snow and flooding, and construction—and navigating when the connection to GPS data is lost. More importantly, a bus driver’s job involves working with people: helping riders with medical and safety emergencies, assisting older passengers, and giving directions. To ensure accessibility and comply with the Americans with Disabilities Act, bus drivers also often help passengers with disabilities. The resulting 2022 white paper concludes that even if autonomous transportation is technically feasible, it cannot replace human drivers’ many roles in complex passenger transportation systems. This research is being used to design new strategies for safety, emergency response, and social support on buses.
Missed opportunities to incorporate consultations with workers in the development of safety protocols for automated vehicles show the cost of inaction.
Missed opportunities to incorporate consultations with workers in the development of safety protocols for automated vehicles show the cost of inaction. There have been examples of autonomous robotaxis disrupting traffic, blocking first responders, and even dragging a pedestrian. Had passenger transportation service workers been involved in evaluating the feasibility of deploying fully autonomous vehicles, these highly trained professionals could have warned against substituting the judgment of humans with robotic vehicles that lack the capacity to respond safely in the complicated, often unpredictable operating environment of the transportation system.
Worker input is also productive in the many industries that use algorithms to manage or assign workers’ tasks. For example, large chains in the hospitality industry often use algorithmic management platforms to coordinate housekeeping services by directing workers and supervisors. In an ongoing study, researchers from Carnegie Mellon, several other universities, and UNITE HERE, an international labor union affiliated with the AFL-CIO, are looking at the effect of algorithmic management on employees’ tasks, relationships, and well-being.
These algorithmic management programs routinely assign cleaning to workers according to guest priority. However, this research revealed that the app did not consider the distance traveled between rooms or the physical effort required for different cleaning tasks, which meant its assignments could make workers less efficient. Hotel cleaners and housekeepers often push a 200-pound cart across massive buildings and do heavy physical labor. Rather than allowing workers to determine the best flow of work, the algorithm made the work more difficult while also diminishing the workers’ autonomy and agency.
Employee feedback via labor unions led many hotels to reconsider workers’ agency in working with their room assignments. But the study further engaged workers in prototyping sessions to imagine ideal versions of the technology that could improve transparency, workload, and ultimately worker well-being. Participants in the sessions suggested better mechanisms for communicating with supervisors and managing workload, including design features to prevent management from over-assigning workers.
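To see concretely why an assignment rule that ignores travel can make work harder, consider a minimal sketch in Python. Everything in it is hypothetical—the room data, the priority scale, and the distance weighting are invented for illustration rather than drawn from the study—but it shows how adding a single travel term changes which room a scheduler sends a worker to next:

```python
# Hypothetical sketch of two room-assignment rules; all data are invented.
# Each room is (room_id, guest_priority, hallway_position), where a lower
# priority number means a more urgent request.
rooms = [("1208", 3, 120), ("0104", 1, 4), ("1210", 2, 122)]

def next_room_priority_only(available):
    """Priority-only rule: the most urgent room wins, no matter how far
    the housekeeper must push a heavy cart to reach it."""
    return min(available, key=lambda room: room[1])

def next_room_travel_aware(available, position, travel_weight=0.05):
    """Travel-aware alternative: trade urgency off against the distance
    from the worker's current position."""
    return min(available,
               key=lambda room: room[1] + travel_weight * abs(room[2] - position))

# For a worker currently at position 118, near the end of the hall:
print(next_room_priority_only(rooms))      # ('0104', 1, 4): a long walk away
print(next_room_travel_aware(rooms, 118))  # ('1210', 2, 122): urgent enough, and nearby
```

The specific weighting is beside the point; what matters is whether a travel term exists at all. As the study found, the deployed platform had no such term—precisely the kind of omission that the prototyping sessions with workers were designed to surface.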
Incorporating worker perspectives and expertise in the development of AI technology centers workers as key contributors to change instead of as passive recipients.
Another promising method of incorporating worker-centered R&D is through partnerships with technology companies. In 2023, the AFL-CIO and Microsoft signed a partnership agreement to incorporate worker voices in AI technology development. This labor-tech partnership will foster open dialogue about how AI can anticipate workers’ needs and include their voices in its development and implementation. Additionally, the agreement includes a neutrality framework confirming “a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change.”
Incorporating worker perspectives and expertise in the development of AI technology centers workers as key contributors to change instead of as passive recipients.
Supporting Worker-Centered R&D
As AI technology evolves, it is essential that workers’ perspectives are incorporated into publicly funded AI research infrastructure, grantmaking, and datasets, including research on how AI technology is being used in the workplace and its impact on workers.
First, public R&D investments in technological innovation should require grantees to partner with labor. The federal government has already taken the first steps along this path. The 2022 CHIPS and Science Act requires the National Science Foundation (NSF) to ensure that its programs incorporate workers’ perspectives by partnering with labor organizations. In addition, the Economic Development Administration’s Regional Technology and Innovation Hub Program requires involvement of labor or workforce training organizations, so that workers will be at the table where decisions are made about how to spend $504 million for regional innovation.
This September, a major advance built on these successes. NSF, the AFL-CIO, and the AFL-CIO Tech Institute signed a Memorandum of Understanding committing NSF to collaborate and engage with workers and their unions in emerging technology areas. The memorandum provides a structure for sharing information about NSF’s funding priorities, as well as coordinating programs, outreach, and science and engineering education. This commitment recognizes workers and their unions as major stakeholders in federal innovation R&D programs. More agencies funding AI R&D should follow this example.
Second, science funders should incentivize companies to work collaboratively with labor organizations and workers. For example, research grants could promote capacity building for university-labor relationships, much like the one with Carnegie Mellon and the AFL-CIO. Such partnerships could be accomplished by allowing grant funding to be used for capacity-building expenses to strengthen systems, processes, administration, and operations.
Creating a policy ecosystem that empowers workers and supports more unions is essential for weathering this era of technological change.
Third, better data are needed to identify and prioritize which workers are most affected by emerging technologies. Currently, companies are not required to publicly report the demographics or job classifications of their workers. As a result, policymakers do not have access to data about demand for occupations and tasks via new hires and job vacancies, types of technology in the workplace, and the tasks they are used for. Better data on the job market at the regional level are also needed. The Bureau of Labor Statistics should produce regular updates of its July 2022 report on growth trends for selected occupations considered at risk of disruptions from AI, automation, robotics, and other emerging technologies. Furthermore, revisions to survey methodology, expansion of existing surveys, updated software tools, and creation of a job task classification system could help provide a deeper and more accurate understanding of the impact of technologies across jobs and sectors.
More broadly, policymakers should pass legislation that protects and expands workers’ rights to collectively bargain. Creating a policy ecosystem that empowers workers and supports more unions is essential for weathering this era of technological change; such measures ensure workers can effectively advocate for and participate in shaping the technology that becomes part of daily life. As policymakers look to the future, they may be guided by the knowledge that technologies are shaped by who is included and empowered in the development process, by the stories people tell about the past, and by what they can envision for the future. Better outcomes are possible by including workers in designing, developing, and deploying technology. Worker-centered R&D holds incredible promise—with the right policy interventions to support it.
Tribal Health Equity Requires Tribal Data Equity
When the pandemic started, I was ready to jump in. American Indians and Alaska Natives were being diagnosed with COVID-19 at rates 3.5 times higher than those of non-Hispanic white persons. As a medical doctor with a master’s degree in public health and epidemiology, I had studied and seen how to mitigate infectious disease and save lives. And as a tribal member with deep ties to South Dakota, I knew that implementing these proven public health measures would be challenging in our rural and medically underserved state. In September 2020, I received the chance of a lifetime and joined the Great Plains Tribal Epidemiology Center (GPTEC) to fight the pandemic in tribal communities in Iowa, Nebraska, North Dakota, and South Dakota. It was immediately clear that we needed to be aggressive in applying the toolset of epidemiology to disease prevention.
Take the classic public health practice of contact tracing in infectious diseases. Before this measure could save lives, GPTEC and our member tribal nations needed to know who had been diagnosed as infected—we needed data. Without this basic information, there was no way to make sure infected people and their contacts were isolating rather than spreading disease to more people. We couldn’t even accurately follow tribal infection rates, which meant we lacked evidence needed to inform recommendations on mask use or school closures. By 2022, the devastating and disproportionate impact of COVID-19 on American Indian and Alaska Native (AI/AN) communities contributed to a 6.6-year drop in life expectancy from 2019 to 2021, leading to an average life expectancy for AI/AN people of 65.2 years—barely old enough to qualify for Medicare.
An effective COVID-19 response required access to data. In April 2020, the US Department of Health and Human Services launched HHS Protect, creating “a central source of data for the COVID-19 response … to inform operations and decision-making.” However, data were not equally available or accessible. On January 21, 2021, his first full day in office, President Joe Biden issued Executive Order 13995, “Ensuring an Equitable Pandemic Response and Recovery.” It recognized that “the lack of complete data … on COVID-19 infection, hospitalization, and mortality rates … has further hampered efforts to ensure an equitable pandemic response.” That same month, the Government Accountability Office (GAO) began a performance audit that would find harmful gaps in Tribal Epidemiology Centers’ (TECs’) access to data, including COVID-19 data.
By 2022, the devastating and disproportionate impact of COVID-19 on American Indian and Alaska Native (AI/AN) communities contributed to a 6.6-year drop in life expectancy from 2019 to 2021.
Established by Congress in 1992, the nation’s dozen TECs are charged with monitoring and analyzing health data of AI/ANs and reducing glaring health disparities. In 2010, Congress reiterated this role, clearly stating that TECs were public health authorities under the Health Insurance Portability and Accountability Act, or HIPAA, and so, like other public health agencies, legal recipients of health information. However, having rights to the data has not meant that the TECs get them. “While TECs had access to some epidemiological data,” the March 2022 GAO report states, “officials from all 12 TECs we interviewed described challenges accessing other data from CDC [Centers for Disease Control and Prevention], IHS [Indian Health Service], or states.” Seven of the 12 TECs reported that officials at federal agencies did not seem to recognize their mandate to share data with TECs. The IHS required one TEC to submit a Freedom of Information Act request to receive potentially lifesaving COVID-19 data, treating the center as a member of the general public rather than a public health authority. Such lack of access, TECs reported, could keep them from providing their communities with the information needed to make decisions.
I witnessed the impact of this inequitable access daily in my work at GPTEC. For the first two years of the pandemic, our team spent two to four hours every day collecting publicly available data from four state websites on the 311 counties in our jurisdiction. The work of copying and pasting was so time-consuming it required a dedicated new position. Despite all of that effort, the data available were inadequate. Race was often unspecified, preventing us from monitoring the COVID-19 rate in our AI/AN population. We also could not access identifiable information, such as names and addresses, for COVID-19 cases—information needed to assist tribal nations in essential contact tracing. Despite these limitations, we used available data to create a tribal COVID-19 dashboard and tribal-specific reports, often the only detailed information that TECs received to help make evidence-informed policy decisions. During the course of the pandemic, we gave advice on when to close businesses (and open them) and on when and where to wear masks, as well as on more fraught topics like closing reservations.
This is the role of public health: to monitor and help respond to threats like infectious disease and other causes of injury and ill health. The lack of data hinders our response in normal times and in public health emergencies. Resources that could be devoted to contact tracing or disease surveillance are drained away for the simple purpose of gaining access to data.
Many factors, including poverty, access to care, and geographic isolation, contribute to health disparities in AI/AN communities, but the lack of data thwarts any work to achieve health equity. For the past three years, my team has been struggling to manage a regional outbreak of syphilis. The South Dakota Department of Health reported 1,504 cases of syphilis in 2022, up from 56 in 2019. Though often asymptomatic, this sexually transmitted infection (STI) can lead to serious health problems. In pregnant individuals, it can cause stillbirths, miscarriages, low birth weight, and deformities. Syphilis can be cured with penicillin, but first we must find the people to treat.
Though state officials had identified the rise in syphilis cases in 2021, GPTEC and the tribal nations did not have access to the data, and thus could not launch a full response. Although we held community testing events and provided health education, we could not conduct contact tracing or provide tribal-specific syphilis reports. Our efforts to hammer out a data-sharing agreement with South Dakota’s Department of Health took years, finally succeeding in March 2024, even though it was exactly the kind of exchange that Congress intended to facilitate by declaring TECs public health authorities.
The lack of data hinders our response in normal times and in public health emergencies. Resources that could be devoted to contact tracing or disease surveillance are drained away for the simple purpose of gaining access to data.
Although this agreement represents a historic advancement for tribal public health in South Dakota, work still needs to be done. Tribes in our region outside of South Dakota do not regularly receive data from their state health departments. Even the South Dakota agreement is limited to certain types of data, requiring separate, one-off, and complex discussions about vaccination and vital records. At the federal level, our requests to the IHS for syphilis and other STI data remain unfulfilled, though it regularly reports this information to state health departments. We continue to work with our state and federal partners to get the needed data, but the delays cause real harm.
Yes, TECs are requesting sensitive health information (formally classed as protected health information) such as substance use and STI status along with identifying information like names and addresses. Such information is to be released only when necessary and as laws allow. Officials do have an obligation to protect the privacy and security of the data; when confused about what the laws allow, they tend to err on the side of caution and limit data release. But this overcaution can cause harm.
In instances in which TECs have been able to access timely, robust, and specific data, they have achieved significant results. One TEC created an “injury atlas” analyzing causes of death and hospitalizations along with recommendations for prevention; others have created interactive dashboards on maternal and child health and other subjects. In April 2024, GPTEC and three South Dakota tribal nations brought CDC officials to our region to conduct a joint response to the syphilis and congenital syphilis epidemics. Using the data provided by South Dakota’s health department, tribally led public health teams (which included tribal staff, federal officials, and GPTEC staff) located and interviewed dozens of people with syphilis, plus their contacts, and provided treatment to 62 individuals. That included six pregnant people, preventing potentially deadly congenital syphilis cases. This successful project all happened with just eight days of field work—and tribal access to appropriate data.
Federal or state agencies could not have achieved this result on their own. Tribal communities in our region are very rural; remote homes often lack conventional addresses and can be difficult to locate. Tribal public health workers have a deep knowledge of their communities that federal and state employees may lack. Tribes also have a broader conception of wellness and generally will provide more services than states, offering testing, treatment, and even wrap-around services such as isolation support and food. The successful intervention in South Dakota, which took nearly a year to organize, demonstrates what tribal nations can do when they have the right data. (Indeed, our CDC collaborators put together an inspiring presentation defining federal agencies’ roles as facilitating data sharing and building tribal nations’ capacity.) However, tribal nations and TECs are still burdened by the need to make repeated requests for data, each one requiring time- and resource-intensive negotiations for access.
The successful intervention in South Dakota, which took nearly a year to organize, demonstrates what tribal nations can do when they have the right data.
The GAO report found that TECs could ease other agencies’ fears of sharing data by developing “strong relationships” with officials at other agencies, thereby helping them gain trust in TEC staff’s ability to work with the data safely and securely. But it should not come down to that. Such relationships can take years to build, if they can be built at all, and are inherently unstable. If one individual retires or changes jobs, the entire data-sharing relationship can collapse.
What’s needed are strong affirmations that data should be shared. In January 2024, HHS released its first draft data access policy in response to the GAO report; the agency released a revision on September 3, 2024. The initial draft policy did not contain a clear presumption of access to identifiable HHS data for TECs, instead saying that data should be shared “when feasible or as appropriate.” It also suggested that the only data to be provided to TECs was aggregate data, not “line-level” or identifiable data needed to perform contact tracing and other basic public health services. The more recent draft made significant changes based on tribal consultation discussions. While review and discussions are still ongoing, the current draft makes it clear that TECs are to receive both aggregate and individual data available to other public health authorities, without additional cost or process requirements to request or obtain data beyond what is expected of other public health authorities. This is the minimum standard required to begin to address data access equity. Tribal consultations are scheduled for October 2024. Whatever the specific language of the final policy, to adequately address the needs of TECs and the communities we serve, HHS needs to provide immediate access to these data.
Federal agencies’ failure to share data springs, in part, from an understandable fear of data breaches and privacy violations. GPTEC has industry-standard confidentiality and security measures in place to protect this sensitive information. One GPTEC employee even helped develop and manage the state’s own data system. Unfortunately, continued concerns about TECs’ ability to safely handle data can prevent information sharing and cause harm to community members.
Everyone in the care community shares within the bounds of the law, and the default is cooperation, because that’s what’s needed for the health of the patient. Public health should be no different.
Additionally, for many data holders, the mandate to keep data safe overrides the mandate to share. Workers reason that they are less likely to get in trouble if they stick with the status quo of not sharing. Or else they apply protocols worked out for researchers working with TECs and tribal nations, even though researchers’ access to data is more restricted than what TECs have the right to receive as public health authorities. Also, although it is never explicitly stated, there can be bias against providing information to AI/AN organizations or governments. When TECs lack access to data, real people are harmed. Elders are hospitalized with the flu. Babies die from congenital syphilis. Young people commit suicide. Recognizing the harm that data access barriers can cause, federal agencies should create deliberate policies to enable sharing with tribal entities.
In medicine, data are shared freely between professionals who are working together to treat patients. Doctors do not have to develop years-long relationships with nurses to learn vital signs for a hospitalized patient. Pharmacists do not have to know the doctor who wrote the prescription in order to fill it. Everyone in the care community shares within the bounds of the law, and the default is cooperation, because that’s what’s needed for the health of the patient.
Public health should be no different. Our patients are communities, not single individuals. Data sharing should rely on something more than relationships between individuals who happen to work at different public health authorities covering some of the same people. There should be a culture and expectation for data to be shared, and an embrace of the legal and moral requirements to do so. To back that up, the CDC could tie data sharing to state funding for public health. And federal data modernization initiatives should be set up to make sure tribal nations and TECs are considered alongside states in terms of data access. Public health should look to medicine to see how teams work together and share vital information because we are all there for the same reason—to save lives.
To Boost Energy Innovation, Pull Technologies Into the Market
The future of the United States’ energy system is hotly contested. As Democratic and Republican candidates disagree about the implications of—and sometimes even the existence of—climate change, elected officials in both parties seek advantages for energy resources that benefit their states or districts. Legacy industries battle emerging ones for preferential treatment.
But everyone supports innovation. Both the chair of the Senate Energy and Natural Resources Committee, Democrat Joe Manchin of West Virginia, and that committee’s ranking member, Republican John Barrasso of Wyoming, center and celebrate innovation in their rhetoric. While Manchin hails it as “the key to energy security,” Barrasso writes that the United States must “stay ahead of the curve to stay on top.”
Although such shared sentiments veil profound differences of opinion about technologies, fuels, and other features of the energy system, they formed a vital foundation for bipartisan congressional opposition to the Trump administration’s proposed budget cuts. The “innovation coalition” then supported a series of legislative breakthroughs that began with the Energy Act of 2020, which passed during the lame-duck session after the election in that year, and continued into the Biden administration and the 117th Congress (2021–2022).
Increased federal funding for research and development is a central theme in the revival of US energy innovation policy; such funding rose by some 70% between 2016 and 2023. This growth in R&D investment aims to expand the supply of opportunities that entrepreneurs and established businesses can draw upon to develop new and improved energy products and services.
Such “supply-push” energy innovation policy has a long pedigree at the federal level. But the legislation of the early 2020s departed from the historical norm by adding substantial “demand-pull” innovation policies to the mix. Just as no battle plan survives first contact with the enemy, no prototype achieves wide adoption without significant alterations stemming from user feedback. Demand-pull policies therefore use direct spending, tax incentives, regulatory authority, and other tools to pull innovations into practice by encouraging users to adopt early versions of them, hastening the development of productive feedback loops. By creating market niches in which innovations can quickly evolve in this fashion, demand-pull policies complement supply-push policies.
While demand-pull energy innovation policies are far from unknown in US history, they have never been firmly embedded in a durable bipartisan consensus. Much of the recent demand-pull legislation is temporary, and some members of Congress have targeted key provisions for repeal. But without supply-push and demand-pull measures operating in parallel, energy innovation policy will not achieve its objective of speedily creating a higher-performing, cleaner, more affordable, and more secure energy system. US policymakers, particularly members of Congress, should firm up their support for demand-pull energy innovation policies.
The enduring myth of supply-push innovation
Supply-push innovation policy has its roots in the early Cold War. Policymakers in that period understood the Allied victory in World War II as stemming in large part from support for science, broadly construed. The Manhattan Project exemplified this line of thought, which helped justify massive federal investments in defense R&D. The touchstone for this consensus was Vannevar Bush’s 1945 report, Science, the Endless Frontier, which argued for the creation of a national science foundation to promote American security, economic prosperity, and public welfare.
US policymakers, particularly members of Congress, should firm up their support for demand-pull energy innovation policies.
However, any interpretation that focuses solely on “science” in the narrative of the Manhattan Project (as the recent movie Oppenheimer does) neglects the myriad other factors that were required to turn research in theoretical physics into deliverable weapons. Beyond physicists and chalkboards, it took an army of engineers, builders, and operators, working at sprawling industrial sites around the country—not to mention some $30 billion in today’s dollars—to turn J. Robert Oppenheimer’s vision into the devastating bombs that were dropped in August 1945.
These massive capital costs were supported by policies that enabled demand-pull for the defense industry, and they evolved as the nuclear arsenal proliferated. Over the years, the Department of Defense (DOD) paid to turn ore into fissionable material and to create many essential technologies, including the computers and other electronics needed to simulate and control the weapons. Close linkages between military users and defense technology developers were foundational to US success in the Cold War, and they remain crucial today.
This combination of supply-push and demand-pull was extended into energy policy when the United States sought to develop civilian applications for nuclear technology. Admiral Hyman Rickover’s nuclear navy had been the first adopter of the light-water reactor, which became the dominant design for nuclear power generation. Utilities that followed in the navy’s footsteps were persuaded to do so by federal guarantees for both construction costs and liability limits that reduced the risks of early adoption. Without these demand-pull measures extending far beyond R&D, the reactors that have provided the bulk of low-carbon power for the US grid since the 1960s would have come online much more slowly, if at all.
Why supply-push alone fails to deliver innovation
Although debates over safety have until recently dominated the public discourse, the history of civilian nuclear power offers important lessons for energy innovation policy, highlighting the reluctance of potential adopters, even very deep-pocketed firms, to take financial risks. Today, the energy technology landscape is rife with the conditions that have limited the impact of supply-push policies in the past.
This combination of supply-push and demand-pull was extended into energy policy when the United States sought to develop civilian applications for nuclear technology.
The most salient of these is that energy is a commodity: heat is just heat, and power is just power. Unlike the nuclear navy, most energy buyers are unwilling to pay a premium for such commodities, even if doing so might pay dividends for themselves and other customers—not to mention society and the environment—in the long run. Great ideas that could eventually lead to cleaner, more secure, cheaper, or more reliable energy are frequently ignored by the market. No matter how many potential opportunities for energy innovation federal R&D funding creates, few will be realized unless customers buy the resulting higher-cost products and services. Overcoming these market hurdles is precisely what demand-pull policies are for.
Another condition that limits supply-driven energy innovation is the complexity of energy systems. As energy systems comprise many diverse and interdependent components and subsystems, their behavior can be hard to predict at full scale. The impact of small changes to the system as a whole cannot necessarily be anticipated at the laboratory bench or even in pilot plants. Compounding the problem is that energy systems are usually “tightly coupled,” so that failure in one component may cause catastrophic collapse across the entire system. Power blackouts, which can cascade across large regions, are prime examples of the perils of tight coupling. Therefore, it stands to reason that utilities and other operators of large-scale energy infrastructures prefer to be technological followers. By contrast, early adopters need to learn how to integrate and operate new technologies while shouldering heavy costs and risks. Only the hardiest private investors are willing to accept such intimidating terms without some help (or arm-twisting) from public policy.
The histories of other energy innovations that have achieved widespread adoption in recent decades show that the nuclear power story exemplifies the rule, not the exception. Solar power was pulled into the mainstream via massive demand spurred by policies like Germany’s feed-in tariffs, which paid early adopters to install rooftop systems, and California’s renewable portfolio standard, which mandated that a share of utility power sales come from renewables. Energy-efficiency standards and financial incentives similarly pulled buyers toward efficient compact fluorescent and LED bulbs in place of electricity-guzzling incandescent ones.
Looking forward, analysts say grids that rely heavily on variable renewables will require what is called clean, firm power, either from power plants or energy storage systems. New technologies that promise to provide it at an affordable cost, such as advanced nuclear reactors, enhanced geothermal power, and long-duration energy storage systems, are approaching market readiness. But, due to the costs and risks that face large-scale power system innovations, neither the market nor supply-push policies alone will bring them to maturity. Demand-pull policies will be needed to debug these technologies and give potential adopters confidence that they will perform in real life as advertised.
Doing demand-pull innovation policy right
Despite this evidence and its grounding in theories of risk and complexity, the supply-side bias in US policy persists, particularly among conservatives. Writing for the Heritage Foundation’s Project 2025, widely seen as a blueprint for a second Trump administration, former Federal Energy Regulatory Commissioner Bernard McNamee calls for the Department of Energy to achieve “science dominance” while reducing or eliminating demand-pull policies. “It is one thing for government to engage in fundamental scientific research,” he writes. “Government, however, should not be picking winners and losers in dealing with energy resources or commercial technology. Such government favoritism can crowd out new innovations, devolve into cronyism, and raise energy prices for consumers and businesses.”
Demand-pull policies will be needed to debug these technologies and give potential adopters confidence that they will perform in real life as advertised.
Critics of demand-pull are right to ask hard questions about these policies. The potential for policies to remedy market failures does not mean that they will. Poorly designed policies may be ineffectual or even lock in subsidies indefinitely without spurring innovation. The US Renewable Fuel Standard is the poster child for this pathology. While it aimed to make biofuels derived from agricultural waste as cheap as fossil fuels, it has instead created a large, uneconomical, and environmentally damaging corn-based ethanol industry. The regional power of this industry, its influence in Congress, and the discretion that Congress afforded the executive branch to continue the subsidy, rather than force price and performance improvements in biofuels, led to this outcome.
By contrast, well-designed demand-pull innovation policies cut the premium paid for clean energy over time, enabling markets to grow—at which point government support tapers off. These policies must be designed to provide users with leverage to make trade-offs in ways that meet their needs, encourage competition among would-be innovators, reveal production and implementation challenges, and drive cost reductions and performance improvements in practice.
Public procurement is one tool that may be wielded to achieve these ends. Government customers for energy products and services can specify attributes they are willing to pay for—such as lower-carbon emissions or more secure supply chains—that would typically be ignored by private customers. Bidders for their business must show how they will comply with these requirements. Those who can do so at the lowest cost win federal contracts. Repeated rounds of bidding should narrow or eliminate the gap with conventional market prices, allowing nongovernment customers to join in. If the gap persists, the government buyers should reconsider their requirements, dialing back their ambition to allow more time for demand-driven learning or shifting to a supply-push approach.
The DOD, for instance, has been an early adopter of advanced microgrid and energy storage systems, which allow military bases to ride out interruptions in electricity service and even provide power to nearby civilian users. By bearing the relatively high costs of newly introduced systems, DOD should drive down prices for follow-on commercial customers. In other energy technology areas, such as energy-efficient vehicles and buildings, however, DOD and other federal agencies have regularly worked around mandates to “buy clean,” neutralizing any impact that they might have had. Inadequate funding and perverse and complex federal budget and procurement rules contributed to these failures.
Government customers for energy products and services can specify attributes they are willing to pay for—such as lower-carbon emissions or more secure supply chains—that would typically be ignored by private customers.
Tax policy is another potent tool for demand-pull energy innovation policy. Some of the costs of energy products and services that meet specified criteria for cleanliness, security, or other desired attributes can be deducted from the buyer’s taxable income or rewarded with a credit, pulling customers into the market. Stiffening the eligibility criteria or phasing out the subsidy over time should provide an incentive for producers to cut the costs of these innovations to keep them competitive. Without such discipline, producers may grow complacent: as long as the incentive remains in place, they can stay profitable without ever approaching the prices set by their unsubsidized competitors.
In recent decades, tax credits and deductions have often been granted to buyers of energy products and services with desirable characteristics. Homeowners and home builders who purchase energy-efficient appliances and building components, for example, may benefit from such policies. According to an analysis by Rachel Gold and Steven Nadel, tax provisions in the Energy Policy Act of 2005 “transformed the market for clothes washers, dishwashers, and refrigerators,” as well as “new homes,” and the eligibility requirements were ratcheted up as market penetration of efficient products rose. In the case of clothes washers, energy-efficient models doubled their market share, from 21% to 42%, in just a couple of years. Poorly designed incentives of this sort, however, have at times unnecessarily rewarded buyers who would have made the same purchases anyway, such as well-heeled Tesla owners drawn to these luxury cars’ appealing image and features regardless of any tax break.
Regulation may also create demand for energy innovations. Well-crafted regulatory standards can provide pathways for cleaner or more secure alternatives that are not yet affordable to become so over time. Companies in the regulated industry should compete to meet the standard at the lowest cost, eventually matching or bettering the cost of the incumbent approach. However, if the standard is poorly calibrated, or if unregulated imports enter the market, potential innovations may be blocked, undermining the very objectives the regulation attempts to promote.
US light-duty vehicle fuel economy standards are a case in point. Periods in which they have been tightened have sparked innovations such as hybrid-electric vehicles. Lenient standards, on the other hand, have at times allowed automakers to optimize engine power and sell heavier vehicles rather than improve efficiency or reduce emissions. For example, sales of sport utility vehicles, a previously marginal category that regulators allowed to be less fuel-efficient than comparable vehicles because they were classified as light trucks, were supercharged in the 1990s as automakers worked around the tighter limits on cars.
Recent progress and further reform
In the early 2020s, all three of these demand-pull energy innovation policy tools gained momentum.
Well-crafted regulatory standards can provide pathways for cleaner or more secure alternatives that are not yet affordable to become so over time.
Congress gave the US Department of Energy (DOE) over $25 billion for market-oriented projects to demonstrate low-carbon power generation, hydrogen production, industrial decarbonization, and other technologies. Building on Rickover’s precedent, DOD is once again using procurement to try to accelerate nuclear energy innovation, funding development of nuclear “microreactors.” On the civilian side, over $5 billion in the Inflation Reduction Act was allocated to an interagency “buy clean” program for federally funded construction projects, including widely used materials like steel, concrete, asphalt, and glass. Hundreds of billions more are going to tax incentives for renewables, electric vehicles, carbon capture, and a broad array of other energy technologies. The Biden-Harris administration is also using its regulatory authority to accelerate uptake of carbon capture systems, electric vehicles, and other clean energy innovations.
Such initiatives are important steps toward better balancing supply-push and demand-pull energy innovation policies. But two risks threaten this progress. One is that policies may lapse, expire, or be repealed. For instance, much of the direct spending, including for energy demonstration projects and “buy clean” programs, is one-time funding rather than part of annual appropriations. The circumstances that led to 2021’s bipartisan infrastructure law, which supplied much of this money, will be difficult to replicate.
The other risk is that poor design and implementation may undermine these policies’ effectiveness. Most importantly, some of the tools have been deployed without credible mechanisms to lower costs and improve performance as sales of the targeted innovations grow, risking repeats of the ongoing Renewable Fuel Standard debacle. Tax incentives for purchasing solar panels, for instance, reward established technologies that are already cost-competitive, rather than incentivizing the development of products that may become more efficient in the long run or that can be used in locations lacking the large open spaces or rooftops needed by the current generation. Congress smartly included a phase-out of these incentives, but the industry is well aware that incentives scheduled to be phased out have, in the past, been restored as a result of last-minute lobbying.
Poor design and implementation may undermine these policies’ effectiveness.
The next Congress should take steps to address these risks. It should definitively reject the supply-push-only approach by authorizing demand-pull policies like civilian and defense “buy clean” initiatives and by beginning to incorporate the costs of such policies into annual appropriations. These steps should be complemented by modernizing DOE’s organizational structure to put demand-pull policies on an equal footing with supply-push policies and giving one of DOE’s three undersecretaries the mission of driving energy innovation through demand-side policies. In parallel, Congress should require the executive branch to incorporate mechanisms into these policies that ensure they don’t become permanent subsidies for static technologies. These authorizations will need to be explicit to avoid judicial meddling, which has been made easier by the Supreme Court’s recent rulings limiting the executive branch’s discretion. Congress should use its oversight powers to ride herd on the agencies to this end as well.
Putting federal demand-side energy innovation policies on durable, bipartisan foundations would pay dividends for the United States, strengthening its competitive position in emerging clean-tech industries. It would also be good for the rest of the world, which needs the United States to drive innovation in practice and on a large scale—not merely in principle at the laboratory bench—in order to soften geopolitical shocks to the global energy economy and slow climate change.
The Future of Fusion
The Summer 2024 Issues addresses pressing topics in fusion energy development. In “What Can Fusion Energy Learn From Biotechnology?” Andrew W. Lo and Dennis G. Whyte highlight parallels in the evolution of these two industries, which offer bountiful benefits yet have faced challenges. As head of the Fusion Industry Association, I thank the authors for naming the FIA as the right venue for open, direct, and transparent communication about fusion’s direction. They also make the critical point that the United States needs to foster a robust commercialization ecosystem that includes government research laboratories, universities, and private-sector fusion developers, as well as companies comprising the supply chains that link these efforts.
We also agree with Michael Ford’s statement in “A Public Path to Building a Star on Earth” that funding for fusion research must increase dramatically to meet the needs of both the scientific program and commercialization. Toward this aim, the FIA has submitted a proposal to both Congress and the US Department of Energy for $3 billion in supplemental funding to accelerate fusion commercialization and build fusion energy infrastructure.
The United States needs to foster a robust commercialization ecosystem that includes government research laboratories, universities, and private-sector fusion developers, as well as companies comprising the supply chains that link these efforts.
As part of Ford’s proposed path, he calls for a “coordinated plan for public and private funding.” But I would add a caveat. The fusion community has already made more plans than it has acted on; now it is time to execute the plans already agreed upon. The fusion community delivered a comprehensive Long Range Plan in early 2021. The plan, now being updated to reflect advances in fusion technology and ambition since then, acknowledged that without significant increases in funding, DOE would face difficult choices that could reduce plasma physics funding in some areas in order to provide more robust support for more commercially relevant programs such as materials science, fuel cycles, and public-private partnerships. Without strong growth in funding across the board, prioritization is necessary.
We agree that DOE’s role is to support fundamental research and enable the growth of the commercialization ecosystem without skewing the competitive landscape—and that means the national labs and companies should avoid directly competing. It also means that DOE should realign its efforts to appropriately fund both commercially relevant programs and the scientific research and development that is needed to build fusion demonstrations. It is time for DOE to treat fusion as an energy source, not a science project, and so it is appropriate to begin the transition to an applied energy office.
Finally, both articles highlight the importance of building trust to support public acceptance. The fusion industry recognizes that engagement with the public, stakeholders, and the broader scientific community is essential to the successful development and deployment of fusion energy. In line with Lo and Whyte’s recommendations, the FIA aims to ensure that all these groups receive timely, clear, and transparent information. Among other efforts, we will communicate about when companies reach milestones for fusion’s progress, providing easily understandable, tangible proof points for policymakers, investors, and the public.
The fusion community is moving forward at speed to be ready for the next phase: focused execution to bring fusion energy to market. The FIA looks forward to collaborating across public and private sectors to ensure that fusion achieves its potential as a clean, limitless energy source.
Andrew Holland
Chief Executive Officer
Fusion Industry Association
Michael Ford effectively highlights the critical importance of maintaining public funding for fusion research, given the technology’s current stage of development. Indeed, the phrase “building a star” arguably understates the task at hand.
In the wake of Lawrence Livermore National Laboratory’s repeated achievement of “ignition” using inertial fusion technology—that is, the production of more energy from a fusion reaction than was needed to create it—an increasingly common refrain holds that commercializing fusion is no longer a physics problem, but an engineering one. This downplays the complexity and difficulty of fusion. As Ford rightly points out, there are still significant unknowns regarding which approaches will prove optimal or even viable. The timeline for achieving commercial fusion energy is uncertain, underscoring the necessity for continued fundamental research and development. This foundational work is essential to unravel the complexities of plasma physics and materials science that underpin fusion technology.
The 2022 Inertial Fusion Energy (IFE) Basic Research Needs effort, organized under the auspices of the Fusion Energy Sciences program at the US Department of Energy Office of Science, laid out the core innovations that must be advanced to make IFE a reality and attempted to evaluate technical readiness levels of the key IFE technologies to guide where investment is needed. Today, none of these have matured to the technical readiness levels necessary for use in a pilot power plant, and physics questions abound as we strive to mature them. Because of this, funding for fundamental R&D must remain paramount in the fusion effort, despite the ambitious timelines set forth by the fusion start-up community.
Similar efforts to identify core science and technology gaps should be undertaken for the broader fusion effort; at this early stage, an all-of-the-above approach is called for. Roadmaps and clear metrics resulting from such efforts should be used to hold the private and public sectors accountable and to strategically choose among possible technological options to sustain the value of public funds.
Funding for fundamental R&D must remain paramount in the fusion effort, despite the ambitious timelines set forth by the fusion start-up community.
Fusion is not just a scientific endeavor; it is a strategic asset for US competitiveness and national power. The public sector has a pivotal role in stewarding this technology to ensure it aligns with national interests. Developing public-sector anchor facilities, safeguarding intellectual property, and supporting the supply chain are crucial steps in bolstering the nation’s know-how and economic strength. Public investment in these areas will help secure a leadership position in the global fusion landscape.
While the United States spends less than some other fusion aspirants, including China, the achievement of fusion ignition has put it in pole position. That lead is hard-won, resulting from decades of public investment and innovation. It can be easily lost.
We are convinced that a world powered by fusion energy is achievable. It is not a question of time, but one of resources and political will. Sustained investment in a foundation of science and technology will bring this future into focus.
Tammy Ma
Lead, Inertial Fusion Energy Institutional Initiative
Lawrence Livermore National Laboratory
Andrew W. Lo and Dennis G. Whyte draw four specific lessons for fusion from the biotechnology industry. The exhortation to “standardize milestones” is particularly important. The authors suggest a consortium for identifying the right milestones, but it remains critical to explore the unique aspects of fusion in contrast with biotechnology and other fields to find a model that will work.
Unlike the Food and Drug Administration, fusion’s US regulator, the Nuclear Regulatory Commission, does not have a mandate to regulate efficacy. Rather, the NRC’s mission relates to safety, common defense, and environmental protection. This sensibly reflects the fact that market forces alone are sufficient to ensure that fusion works (i.e., it generates useful energy economically). This presents an underappreciated opportunity for the fusion industry to take advantage of the benefits of standardized milestones without the expensive and time-consuming formality that the FDA correctly imposes on the biotechnology industry.
Consulting firms that evaluate the claims of fusion companies for investors are appearing as a result of these market forces. Though useful, consultants and their reports don’t provide the structural benefits that standardized milestones could bring to the entire industry, in the form of on- and off-ramps for different groups of investors and scales of capital, as Lo and Whyte discuss.
Identifying milestones that are meaningfully applicable to all approaches to fusion energy, an objective arbiter of those milestones, and an appropriate rating system is an important next step in the development of the fusion energy ecosystem.
Because of this missing piece, some fusion investors and fusion companies themselves are clamoring for such a set of standardized milestones. Some have emerged organically. The Department of Energy’s Milestone-Based Fusion Development Program issues payments based on the completion of benchmarks proposed by the companies themselves and negotiated between the companies and DOE. Most recently, Bob Mumgaard, CEO of Commonwealth Fusion Systems, published an open letter, titled “Building Trust in Fusion Energy,” that lays out six milestones on the path to fusion energy, many of which are similar to milestones for funding in the ARPA-E Breakthroughs Enabling Thermonuclear-fusion Energy (BETHE) program.
However, these are not the right entities to independently develop and arbitrate standardized milestones for fusion. Although DOE is equipped to judge whether a milestone has been completed, relying on DOE (or NRC) for broader oversight would give up fusion’s distinctive advantage: the ability to manage milestones in a more lightweight and nimble way, outside of government. Nor are fusion companies, investors, or the Fusion Industry Association appropriate organizations for this job, for obvious conflict-of-interest reasons. Instead, a nongovernmental, independent rating organization is needed.
There are lessons here from the finance industry. Agencies such as Moody’s and Fitch Ratings play an important role in providing information to investors about the creditworthiness of companies and the likelihood that bonds will be repaid. However, their business models rely on payments from the entities being rated, and the review process is not especially transparent, both of which were factors that led to the 2008 financial crisis. Fusion could do better by developing a different business model that decouples the ratings from payments made by the entities being rated and by emphasizing the importance of publishing data on milestone completion in peer-reviewed journals.
Identifying milestones that are meaningfully applicable to all approaches to fusion energy, an objective arbiter of those milestones, and an appropriate rating system is an important next step in the development of the fusion energy ecosystem. This should be an iterative process involving companies, investors, and academia. Success will require creativity in balancing competing interests, and an evenhanded assessment of the science, engineering, economics, and social-acceptance challenges facing the nascent fusion energy industry.
Sam Wurzel
Fusion Energy Base
When My OB/GYN Said He Didn’t Understand Poetry
Illustration by Shonagh Rae.
I worried because my body is a more complex text. When he feels the shape of my uterus, he may not think pear-shaped yet an apricot in size, hollow butternut squash, lightbulb. He may not consider it a bowl for a daughter developing inside with eggs for her daughters, a set like Grandma’s Tupperware poised to seal away meals, or nested like Russian dolls, copies waiting to be twisted off, revealed. My doctor speaks the body’s language: uterus tilted toward spine could mean incarceration—womb snagged on the pelvic bone. Almond-shaped ovaries pocked like plum pits—if swollen with movable lumps—could be dermoid, endometrioma, or chocolate cysts. Or nothing to worry about. He questions structure, unpuzzles chromosomes, scrutinizes tensions between biopsies and blood work, and reads all this alongside testimony and history because my flesh, like a poem, carries mystery: it produced one child complete
But jettisoned the next four. My doctor’s glossing of my uterine purse—whether it will fill and stay full or remain empty—eludes his science. But when I build a nest of words, paradox and ambiguity kiss each time, offspring running down the page.
The Politics of Recognition
As I was reading Guru Madhavan’s “Living in Viele’s World” (Issues, Summer 2024), my thoughts turned to studies of occupational prestige—in other words, the perception that some types of work are more deserving of admiration and respect than others. Historians and social scientists who examine occupational prestige pursue lines of inquiry that spread in many directions, including the implications for individual self-worth, differences in salaries, longitudinal trends for the American labor force, and more.
In his eloquent essay, Madhavan demonstrates the importance of seeing the actions of two elites in nineteenth-century New York, Egbert Ludovicus Viele and Frederick Law Olmsted, within a social scientific setting. Although these men brought different technical points of view to the design of crucial elements of New York’s infrastructure, Madhavan’s point is that we will understand their legacies more deeply if we see their work as part of a broader contest for authority and prestige.
Madhavan’s invocation of the politics of recognition—a concept with its own rich scholarly tradition—is a compelling way to think about engineering and society. In particular, it expands our conceptual language for considering the normative consequences of infrastructural decisions, including the ways that these decisions can either facilitate or inhibit equity and human flourishing.
Many young people self-select into occupations that are seen as prestigious and forgo career paths that lack glamor or respect.
In our 2020 book, The Innovation Delusion, Lee Vinsel and I argued that the trendy preoccupation with innovation, and the resulting elevated prestige of innovators, carries steep societal costs. These costs include the neglect of maintenance (made familiar by the dismal grades regularly registered in the American Society of Civil Engineers’ Infrastructure Report Card) as well as diminished prestige for the people we called maintainers—the essential workers who care for the sick, repair broken machines, and keep the material and social foundations of modern society in good working order. Vinsel and I challenged society to reckon with the caste-like structures that keep janitors, mechanics, plumbers, and nurses subordinate to other professionals. This line of thinking also sharpens our understanding of the stakes for the present and future, namely, that many young people self-select into occupations that are seen as prestigious and forgo career paths that lack glamor or respect.
As a result, there is an oversupply of young people who want to get into “tech,” even as the giants of Silicon Valley continue to lay off workers so that they can keep wages low and stock prices high. At the same time, there is an undersupply of young people who want to work in the skilled trades, where there are national shortages and good careers for people who want to work hard, uplift their communities, and care for the needs of their fellow residents. Closer attention to the politics of recognition in engineering—indeed in all occupations—can help Americans understand how we arrived at our present state, overcome some of our elitist prejudices, and recalibrate the relationship between occupational prestige and the workforce that the nation actually needs.
Andrew L. Russell
Provost
SUNY Polytechnic Institute
Guru Madhavan brings to life the efforts by Egbert Ludovicus Viele to improve the urban environs of New York City by working with nature. The article beautifully explains Viele’s thinking and influence on development in the city, which was both groundbreaking and effective, and continues to this day.
However, Madhavan’s main argument is that Viele should be as venerated and celebrated an innovator of urbanization as Frederick Law Olmsted. Olmsted was a contemporaneous landscape architect who clashed with Viele at both Central and Prospect Parks in New York. The author continues Viele’s own efforts to aggrandize his life’s achievements, evidenced by a 31-foot-tall pyramid tomb, which at the time of Viele’s death was the largest in the West Point cemetery. Sadly for Viele, historical figures cannot and do not choose themselves. Many people deserving of recognition are long forgotten. Historical figures are raised again only if their contributions and stories are relevant to contemporary times.
But is it right or even necessary to have historical heroes? Neither Viele nor Olmsted worked in isolation. They were connected to a plethora of colleagues, clerks, workers, supporters, and ecologies that helped them achieve their projects. Should we not instead celebrate the eras that allow innovations to be made? That recognize the beliefs, values, economic systems, other people, social systems, and infrastructures that create the circumstances for these changes to how we live our lives?
If we celebrated the circumstances, not just a hero, we would see how people are supported to achieve their goals. We could then understand that it is not just an individual who achieves, but generations of effort and resources that enable these goals to be achieved.
If we celebrated the circumstances, not just a hero, we would see how people are supported to achieve their goals.
Most people know that Wolfgang Amadeus Mozart was a prodigious musician and composer, and many might know he had a father and older sister who were also elite musicians. People might think that the father must have started training his children at a young age, but do they also consider that his father developed the network that allowed him to perform in royal courts? That there were many royal courts to support live music performances? That he was male in an era when women were largely barred from musical careers? That he was born at a time when musical notation was common, so his music could easily be reproduced by other musicians? Most people do not think of these things, but think only of the hero—Mozart. If they did, they would realize that, rather than a hero, he was the fruiting body of a huge mycelial network pulling in resources from countless places.
Perhaps if Viele had spent more of his life acknowledging all the other efforts that enabled his projects, he would be better remembered for his contributions. As it stands, the most memorable part of his story is his peculiar precaution—a buzzer inside his coffin—in case he was buried alive.
Tse-Hui Teh
Assistant Professor, Bartlett School of Planning
University College London
Ending Inequities in Health Care
The United States spends more on health care than any other high-income country, yet we have some of the worst population health outcomes. Our health care system is designed in such a way that racial and ethnic disparities are inevitable, and the differences are extreme: the life expectancy difference between white women and black men is over a decade. How can we fix the system to ensure health care equity for all?
On this episode, host Sara Frueh talks to Georges Benjamin, cochair of the report committee and executive director of the American Public Health Association. They discuss how the health care system creates disparities and how we can fix them.
The United States spends more on health care than any other high-income country, yet we have some of the worst population health outcomes. Racial and ethnic disparities are an enduring feature of American health care, and in fact, our system is currently designed in such a way that disparities are inevitable. How can we fix the system to ensure health care equity for all?
I’m Sara Frueh, an editor at Issues. I’m joined by Georges Benjamin, cochair of the report committee and executive director of the American Public Health Association. On this episode, we’ll discuss how the health care system creates disparities and how we can fix them. Dr. Benjamin, welcome.
Georges Benjamin: Sara, thank you for having me today.
Frueh: You’ve been a leader in the world of public health for a long time, but for listeners who may not be familiar with you, can you tell us a little bit about yourself and how your work has intersected with the problem of health inequities?
Benjamin: Oh, thank you very much. Actually, I am an internist with the subspecialty of emergency medicine. I trained in the Army Medical Corps for about nine years, and then I went out and practiced in the private sector, having served as the chair of the Community Medicine Department at the City Hospital in Washington, D.C. I was a D.C. health commissioner for a couple years. I served in the fire department as a deputy fire chief to run the emergency medical system for Washington D.C., and then eventually I ended up in Maryland, both as a deputy secretary of health and then the secretary of health for the state. And I’ve now been at the American Public Health Association for about 22 years practicing public health.
You cannot be in an urban setting, for sure, and not see differences in both outcomes and the way people are treated.
Frueh: You mentioned that you worked for a fire department and as a doctor, and I’m wondering if you saw these health inequities and health care inequities up close during that work?
Benjamin: You cannot be in an urban setting, for sure, and not see differences in both outcomes and the way people are treated. When I was at the City Hospital, we would get patients transferred to us who were uninsured. And so, they’d gone to a hospital, and they were seen, usually quite briefly, and then put in an ambulance or sent by car over to the City Hospital, because we were there for the uninsured, for sure, and because we were the City Hospital.
And these people were often more seriously ill, they usually had few resources, and when we took care of them and we discharged them, we knew that they were also going back into their homes, into the community, in really trying conditions. Some didn’t have medications, or money for medications. They often were food insecure. And if they were going back into a violent environment, they were returning, in some cases, to the very environment that got them in the hospital in the first place.
Frueh: Thank you for that. I want to quickly check in about the study that you recently chaired, pulling back to a more abstract level. What did your study look at and what were you trying to learn about health inequities?
At the end of the day, the biggest challenge is the way the system is fundamentally designed, financed, and delivered, which gives you these inequities.
Benjamin: One of the things that we wanted to do with our study is look back and say, “Okay, what’s happened over the last 20 years since the last time the Academies have looked at health inequities?” Twenty years ago, when the Academies did the health inequity study, what they discovered was that there are many things in our society that can either impede or enable your ability to be healthy, and we call those the social determinants of health. And that list of things included racism and discrimination. And of course, there was a great howl when that happened because all the people in the health care world said, “Oh, no, no, no, no. We’re not discriminating against people. We don’t treat people differently.” But the truth of the matter is we do. There is both conscious and unconscious bias.
But what we tried to do now in our study was take a systems look at health care inequities. And what we basically discovered was that over the last 20 years, not a lot of progress has been made. That doesn’t mean we haven’t had any progress, but it hasn’t been substantive progress. And more importantly, the way we have designed our system in our country is designed to give us the inequities that we’re getting. So really, in many ways, there is structural racism, there are system issues, there’s obviously bias in the system, for sure, and there’s bias among individuals, but at the end of the day, the biggest challenge is the way the system is fundamentally designed, financed, and delivered, which gives you these inequities.
Frueh: Can you say a little bit more about that? How does this operate on a system-wide level to create these disparities? How did those systems get in place, and what’s perpetuating them?
Benjamin: We’re the only industrialized country in the world that doesn’t have health insurance coverage for all of its citizens. So, let’s start with that. When the Affordable Care Act was passed, they actually designed a system which got all eligible citizens in the system, and then the Supreme Court said, “Well, it’s okay, but you cannot force every state to cover the Medicaid population.” And so, we have about 10 states in the nation that have not expanded coverage for their Medicaid population. Those people, in many cases, go uninsured. So, we still have 20-plus million people in our country who are uninsured. And we know from studies comparing states that have expanded Medicaid with states that have not: in the expansion states, the health status of those populations is much better, just because they have access to health insurance coverage.
We’re the only industrialized country in the world that doesn’t have health insurance coverage for all of its citizens.
Another example is the way we pay providers to take care of people. The provider rates for Medicaid, Medicare, and private insurance differ by magnitudes of payment. And increasingly, we’re seeing more and more providers who not only won’t take Medicaid insurance, but won’t take Medicare insurance. We’ve also seen, as our system has evolved and costs have gone up, that for both people who get their insurance through their job and people who buy their insurance on the open market, the out-of-pocket costs (not just the premium for the insurance policy itself, but the out-of-pocket costs for each encounter) are going up more and more. And so the costs are being shifted to the individual. And so increasingly, we’re not only seeing people who are uninsured, but we’re seeing more and more people who are underinsured. And one of the leading causes (in fact, in some studies, the leading cause) of bankruptcy in this country is health care costs. So, it has an economic implication, and it has a health implication.
And then when you add the fact that increasingly in our rural communities, again, many in states where they didn’t expand Medicaid, rural hospitals are closing very quickly. And so we’re now having people who don’t have access to any kind of health clinic in their community, no hospital at all. We also know that increasingly in our country, more and more communities don’t have anyone to take care of women’s health. So, the number of and access to OB/GYNs is a problem in our country. So, access to care, payment for care, and then this complicated system in which we transfer people from one system to another, it’s really a growing mess that needs to get fixed.
Frueh: It sounds like these problems are very broad and reach a lot of the population in the US, but it also sounds like from your report that these problems disproportionately affect some racial and ethnic groups more than others. What groups did you look at that are bearing the brunt of this health care system?
Benjamin: Yeah. We talk a great deal about minoritized populations, and these are populations that tend to be communities of color—African American, Hispanic, Native American communities—that bear the brunt, but I want to point out that it depends on where you are. So, there are disparities not only between race and ethnicity, but you also have disparities between urban and rural populations. So, you go to Appalachia, and it’s lower-income whites who are disproportionately impacted. You’ll see that in the South, where you have more poverty; in those communities, regardless of race or ethnicity, you’re seeing these disparities. Although, when you look at people of the same income level, you also see these disparities based on race.
One of the more interesting phenomena, of course, is that even when you have people who have the same insurance coverage, in many situations, people who are African American or Hispanic find that their health outcomes are far worse even when they have the same fundamental assets. And then you have to ask yourself why. And it’s a complicated question, but sometimes it comes down to just access to care, differences in the quality of care received within the health care setting, and in some cases differences in how people seek care based on preconceived notions of what they can get.
Frueh: What are the on-the-ground impacts of these inequities in care for individuals, for communities, for our nation as a whole?
Why is it that in a city of 10 square miles you can’t provide comprehensive, adequate prenatal care services to the women and children in this city?
Benjamin: At the end of the day, it means that we have a population of people with much more heart disease based on race and ethnicity, more lung disease, more kidney disease, more diabetes. You have more amputations and more preventable harms that would not happen if people got care early.
You asked me before about some of my prior experiences. One of the most interesting things that I had to deal with as a health commissioner was dealing with infant mortality. That’s a child who does not live beyond their first birthday. And in Washington, D.C., we have some of the best neonatal services in the country, and yet we have one of the worst infant mortality rates in the nation. Why is it that in a city of 10 square miles you can’t provide comprehensive, adequate prenatal care services to the women and children in this city? The fact that in our nation we still have far too many African American women who are dying disproportionately in childbirth, that’s a problem.
Frueh: You raised the question of why, in a city with so many resources, the maternal mortality rates are so high. And I’m sorry, but I’m going to turn that question back to you. Do you know why there’s this mismatch between the amount of resources we have and the outcomes for these women? What is happening to impede their access to care or otherwise damage patients’ outcomes?
Benjamin: Far too often, we have a mismatch between the providers delivering the care and the patients and their needs. And then there are issues where patients just don’t communicate well with their providers. That lack of adequate communication—where the provider is not listening attentively enough or the patient is not able to communicate effectively enough—matters, because the physician-patient relationship is so essential to making sure that people get good quality health care. If you are someone who is insured, but your marginal income is such that your doctor has to make decisions about which antibiotic to prescribe when you get a urinary tract infection while pregnant, that’s a problem, because they may not be able to optimize your care; you may not be able to afford the prescription best suited to treat the infection you have. There’s a compromise made there, and that’s not satisfactory.
Frueh: It sounds like this is a very multilayered, complicated problem, and I’m wondering how we start to fix it. What did the committee recommend as far as solutions that could finally start to close these equity gaps?
The committee does believe very strongly that it’s fixable, that we have lots of tools, all the tools we really need, to begin to solve this problem.
Benjamin: Well, to start with, the committee does believe very strongly that it’s fixable, that we have lots of tools, all the tools we really need, to begin to solve this problem. First, we need to get a system with everyone in and no one out. Second, we need to strengthen our oversight and accountability systems. We have something in law that came under the Affordable Care Act called the Community Benefit Assessment that every hospital is supposed to do. Hospitals are supposed to look at the needs of their community and then wrap some of their programming around those needs. We think that can be strengthened.
We think that dealing with implicit bias is certainly important. We know that we all bring life experiences to the table that influence how we think about things and how we care for people, beyond the science that we know. So, we think that increasing the diversity of the workforce, bringing a range of people with different life experiences to the table, is important. We believe very strongly that we should put more emphasis on primary care and comprehensive care. We felt team care is very important; there is a lot of evidence that organizing care around teams works. And by the way, the academy did another study looking at primary care, which strongly made the case for primary care and team-based care.
We believe that we ought to expand at the national level—realizing that licensing is done at the local level—the ability of providers to practice at the full range of their training. During COVID, we expanded the number of people who gave shots, within their scope of practice and within their training, and we were able to very rapidly improve our ability to vaccinate people.
COVID was a terrible experience for all of us. A million people died, and that was a real tragedy, and we had huge disparities, particularly in communities of color, during COVID. Having said that, we, in effect, covered everybody because we expanded coverage. We paid for the vaccine, we paid for the testing, and we improved access to health care. And then, of course, after the emergency, we went back to the same system we had before, and a lot of those disparities are now reemerging.
Frueh: The first issue you noted was getting everyone into the system, which the committee thought was important. Did you delve into how exactly we could make that happen—making health care more accessible to all?
We ought to fund what we do: our behavior as well as our values.
Benjamin: We clearly have to work on those 10 states that have not expanded Medicaid. We’ve got to find both the political and the fiscal leverage to get them there. We continue to encourage them. The Centers for Medicare & Medicaid Services—the agency at the Department of Health and Human Services that funds Medicare and the Medicaid program—is working to encourage states to do that, offering them opportunities for waivers to experiment with new ways to get people covered. But fundamentally, expanding the Medicaid program to all the states is absolutely an important first step.
People who have the policy chops that I do say health care is a fundamental human right, and we have people who say, “Well, we don’t necessarily believe that.” Except the minute anybody gets sick, we treat it as a fundamental human right. We don’t let them just lie on the street. We pick them up, and we take care of them at an enormous cost. And I argue: isn’t it more efficient for us to do it in a structured, organized system by making sure everybody’s in the system?
And we ought to fund what we do: our behavior as well as our values. If we can fund this in a way that gets everybody in the system and treats everybody with the respect they deserve, then we’ll finally get that system up and running. But it’s been challenging. It really has been challenging, because the people who are opposed to this find every excuse in the world to avoid expanding the Medicaid program. And by the way, it’s often for… We think about this as an issue around single adults, but in some of these states, the eligibility level, even for women and children, is so low from an income perspective that it’s laughable. So, we need to get everybody in the system.
Frueh: Another thing the committee recommends is strengthening oversight and accountability systems. And I’m wondering if you can say a little bit about that? What do we have now as far as oversight and accountability? And what needs to happen to make those systems stronger and better functioning?
We aren’t really looking at the disparities between the various populations. We don’t fund it as much as we should. We don’t train the researchers that we need to train to do this kind of research.
Benjamin: At every level of government, there are opportunities for us to engage the private sector in ways that hold them accountable for the outcomes they get. Far too often, we put out rules that say if a hospital has a patient who needs to be readmitted—and it happens far too often—then we will fiscally punish the hospital. But we don’t work as hard as we could to address the underlying issue of readmission, for example. We know that falls are a big issue. We are really concerned about falls in hospitals, but we haven’t funded the research to really understand why people fall. Falls are a leading cause of morbidity and mortality in the population, and yet the amount of money we spend on this kind of functional research is really limited. We don’t pursue disparity research as aggressively as we should. We aren’t really looking at the disparities between the various populations. We don’t fund it as much as we should. We don’t train the researchers that we need to train to do this kind of research.
And one of the recommendations of our committee was to expand the diversity of the research pool and build the fundamental infrastructure to allow this research to occur, particularly when we want to engage communities. We’ve always had a challenge getting minorities into research studies, but we had a really good response with the vaccine studies during COVID. One of the reasons we were able to do that is that researchers could build upon existing community-based research programs for HIV/AIDS. Because those programs were already in the communities, engaged, with competent researchers involved, they were able to expand to an additional infectious disease—COVID. And because of the relationships they already had in the community, they were able to get more people into those research studies and show that the vaccine was safe and effective across a variety of populations based on race, ethnicity, age, and gender.
And the challenge with this going forward is keeping those research enterprises what I call warm. So, keeping them in place, giving them the infrastructure. What happens far too often is we do a research study, and then we take apart all the infrastructure, and we don’t use that group we put together for anything else, or we let it fall apart, and then we have to rebuild it. And so, one of the things our committee wanted to do was say, “Let’s keep these community-based programs warm. Let’s keep them involved. Let’s keep them engaged. Let’s fund them for other things. Let’s expand their scope in ways that allow them to do the kind of community-based participatory research that we know gives us results.”
And then, finally, what we did on the research front: in health and health care, as in a lot of sciences, dissemination is a challenge. For many researchers, publishing the paper is very important, and then you move on to the next thing. And if someone doesn’t read your paper, they don’t know what you’ve discovered. We have a lot of programs in medicine that we have done as pilots—we love our pilots—and when these pilot programs show their worth, we don’t disseminate them aggressively. There really isn’t money to scale them up. Scaling up is very important, and so is customizing, because all communities aren’t the same. So, the committee recommended very strongly that we find ways to rapidly disseminate things that we find work, and, of course, rapidly disseminate things that we’ve discovered don’t work, so that people don’t make the same mistakes over and over again.
Frueh: Can you give some examples of some interventions that do work that you’d like to see scaled up, and some interventions that don’t work, which you keep seeing people using anyway?
We know that improves the care for patients, and yet we haven’t figured out how to fund it adequately and scale it up so that we can reach more patients.
Benjamin: Well, I can point to team-based care, and there’s a range of primary care models that we know work. And we keep reinventing the wheel. We keep saying, “Boy, if we put together a pharmacist, and a nurse practitioner, and a primary care clinician, and a social worker in a team, boy, that’s a wonderful idea—will that improve the care for patients?” And we keep doing that. We know that improves the care for patients, and yet we haven’t figured out how to fund it adequately and scale it up so that we can reach more patients. That’s an example.
We know that social determinants of health—those things on the societal side—make a strong difference. We know that food insecurity is a big issue. Every time someone comes in with challenges with their weight or is malnourished, we’re very quick to write them a consult to go see a nutritionist, but we don’t address the fact that this person works two jobs and that the community they live in has no access to a full-service grocery store that sells fresh, affordable vegetables, for example. And yet we know the health department can work with the Chamber of Commerce and with the economic development people in the government to address these food deserts, as we call them, so that people have more access to affordable, nutritious foods.
We know that education is a social determinant. It turns out that the more education a woman has, the more likely her child is to survive their first year of life. That educational level is a surrogate for some other factor, but we know the correlation occurs, and everywhere it has been studied—around the country, around the world—that relationship has held. We know that if a person gets through high school and earns their diploma, their health is better. So, there are these social things that we know we can address. What the committee did was look at these societal issues and make sure they were recognized as important to the process.
Frueh: You’ve talked about all these different levels of policy and society that have things they need to work on. Policymakers need to expand access and deal with some of the social determinants of health. The research structures need to have more partnerships with communities. Health systems need to do more team care. Is there anything that individual health care providers, doctors, nurses, other practitioners, can and should be doing around this issue?
If they don’t feel they were treated fairly in the health care system yesterday, they’re less likely to trust the system. They’re less likely to come back.
Benjamin: Because we all carry biases to the table, everybody should think about how their offices are constructed and how people are treated in their office. Are there barriers that cause people who come into the office to be treated in a disparate manner? One of the issues we always talk about is trust. During COVID, again, there were a lot of issues around trust—trust in vaccines and trust in the health care system. And we often said, well, we have these historical wrongs that occurred, like the Tuskegee experiment and things that happened to African Americans during slavery. But while those wrongs are certainly in the distant past, what people were really responding to was how they were treated yesterday. Now, if they don’t feel they were treated fairly in the health care system yesterday, they’re less likely to trust the system. They’re less likely to come back.
So, that relationship between the provider and the patient extends to the provider’s office and office staff, and to how welcoming that office is. There are many things we can do on the clinical care side of the house to ensure that people are treated like customers. Now, people often get upset when I say patients are customers. They’re patients, but there is a customer edict that we have to follow: when people come to us for care, we treat them with respect and dignity, and the offices and environment in which we care for them are welcoming. And I just have to say that, while on the whole our profession does that, there are instances where I’ve seen that not happen, and that’s a challenge. We do have systems that are simply structured to be unwelcoming, and we need to take those systems apart.
Frueh: One last question. Not a lot of progress has been made on this issue since 2003, as your report says. And I’m wondering how much optimism you have, or don’t have, that we can make real progress in the 20 years ahead. If you are hopeful, what is that hope grounded in?
I believe that the nation is poised for the next phase of health reform.
Benjamin: Well, I’m absolutely hopeful, and I’m hopeful because, while we’ve had incremental progress—the committee said not enough—I think now we’re in a position where we fundamentally understand structural racism and discrimination. We’ve looked at the system through our study from a systems perspective. We’ve identified some really key issues that we can address. And I believe that the nation is poised for the next phase of health reform. That next phase should include expanding coverage to everyone, which we can do primarily through the Affordable Care Act. We can address payment reform, so we can reduce the cost of the health care system. We’re doing a lot already on prescription drugs.
One thing we can do is try to decrease the variance in reimbursement among the various insurance plans we have—Medicare, Medicaid, and private insurance. By narrowing those gaps, and by doing some introspection into our own practices and ourselves, we can provide the kind of care and welcoming environment that our patients deserve. And I think that we’re in a breakout moment. As we begin to expand access to more research and develop the next generation of researchers we need, the systems are in place to do that. We just have to adequately fund them, and I believe that we can and we will.
Frueh: Thank you, Dr. Benjamin, for taking the time to speak with us today, and it’s encouraging to hear your optimism on this issue. To learn more about ending structural inequality in health care, visit our show notes to find links to the report and more of Georges Benjamin’s work.
Please subscribe to The Ongoing Transformation wherever you get your podcasts, and write to us at podcast@issues.org. Thanks to our podcast producer, Kimberly Quach, and our audio engineer, Shannon Lynch. I’m Sara Frueh, an editor at Issues. Tune in on November 19th for an interview with Guru Madhavan about engineering and the wonders of often-overlooked infrastructure.
Leaving No-Woman’s-Land
Illustration by Shonagh Rae.
Lindy: About two years after my surgery for ovarian cancer, I had a moment that I remember with utter clarity: I was standing by the kitchen counter, looking down at a magazine article about menopause, and I read this sentence: “After the surgery, I found myself in a wasteland of desperate, incoherent blog posts, trying to understand my condition now that, technically, nothing was wrong with me at all.” I gasped and thought, Oh my God, this is me. Chemotherapy combined with a ruptured disk had left me with nerve damage that caused lower back and leg problems that prevented me from even walking fast. Getting out of bed and getting dressed was a painful marathon each morning. Everything below my chest seemed to be in rebellion. My bowels did not work well. The skin of my vulva was so sensitive I could barely touch it. Every night, I woke drenched in sweat. I felt like I had the flu all the time, and sex was impossible. I could not fully comprehend that I was no longer the person I had previously considered myself to be. And in that moment when I did confront the terrifying gulf between who I was and who I used to be, I had no idea how to begin to recover.
My cancer was discovered following surgery to remove an ovarian cyst. My doctor explained that I needed another surgery immediately: a complete hysterectomy and removal of ovaries. I was almost 50, and grateful for the excellent care and the good prognosis. What no one mentioned to me was the very real possibility—according to some studies of survivors of pelvic cancer—that I would never be able to have an orgasm again or that I would experience other sexual dysfunctions. In all the meetings and consultations about what to expect, the impact of the procedure on sexual pleasure never came up.
I am grateful to my doctors—I owe them my life. But they were silent about the sexual side effects of my treatment. My doctors, trained and immersed in a culture unused to acknowledging women’s pleasure, avoided the topic, which led me to believe that, given the magnitude of having survived cancer, asking for anything more was outside the norm. And, in a country where women’s sexual health is increasingly viewed through the lens of a political struggle over reproductive health care rather than feeling pleasure, my predicament left me in a no-woman’s-land.
After that realization, I set off to find doctors on my own, and I’ve been able to find support that has eliminated or amended what I felt that day at my kitchen counter. Now, a decade along, my body is working pretty well again. But it’s been almost a lost decade for me. I am sure I am not the only woman to be bewildered by sexual issues that my medical caregivers were unprepared to address.
We three authors have given the area of women’s sexual health a lot of thought—Elkins-Tanton as a patient, Kling as a doctor, and Collina as an educator—and believe that the health science community is, at long last, in a position to fully embrace this neglected but key component of health and well-being.
The science and clinical practice of women’s sexual health has long been tangled in moral, legal, and political angst around both sex and reproduction; that angst is intensified by a popular culture that sees women’s sexual pleasure as simultaneously frightening and highly marketable. It’s a loop of avoidance: research dollars steer clear of the treacherous whirlpool of female sexuality, limiting what we know. Medical education doesn’t make time for what little is known, and since it’s not much, clinicians don’t raise the issue with patients. That silence sends a clear message to women: their sexual health does not matter. And the avoidance continues.
This neglect is compounded for people who have trouble accessing medical care or come from marginalized backgrounds. Trans women and other sexual and gender minority communities face an even deeper silence about sexuality, as well as blatant discrimination that impacts all aspects of their health. What’s more, perceptions about race, weight, and many other factors shape the conversations that happen in the clinician’s office. Add time constraints and myriad cultural taboos around talking about sex, and you have a perfect recipe for ignoring sexual health.
The science and clinical practice of women’s sexual health has long been tangled in moral, legal, and political angst around both sex and reproduction.
And yet, now is a crucial moment for women’s health research to break this cycle—both by ensuring that research priorities reflect the importance of women’s sexual health and by finally incorporating sexuality into medical education and practice. More than 30 years after the National Institutes of Health established the Office of Research on Women’s Health, researchers have a much clearer understanding of sex as a biological variable and the remarkable variety of sex and gender expression. There remains a significant gender gap in health care research, but efforts are underway to rectify historical underinvestment. The Biden administration has launched the first White House Initiative on Women’s Health Research. This summer, the US Department of Health and Human Services committed $100 million to transformative research and development in women’s health. Both are important steps toward a future where women’s health is robustly researched and better understood, but unless specific actions are undertaken to include women’s sexual health, today’s culture of neglect and avoidance will be perpetuated.
Women’s sexual health
The World Health Organization defines sexual health as “a state of physical, emotional, mental, and social well-being in relation to sexuality.” It specifically notes that this goes beyond “the absence of disease, dysfunction, or infirmity.”
Sexual health is distinct from reproductive health, although the two can be related. We are sexual beings throughout our adult lives, not just during our so-called reproductive years. And of course, desire and sexual expression are shaped by gender norms as well as power and privilege.
Older women report that sexual enjoyment is important for their overall health. But, as a group, women are not experiencing sexual well-being.
Research suggests that sexual function is linked to overall well-being. It has been documented that impaired sexual functioning can be associated with depression; it’s also associated with decreased quality of life, relationship dissatisfaction, and poor self-image. When surveyed, older women report that sexual enjoyment is important for their overall health. But, as a group, women are not experiencing sexual well-being. One study using data from 1992 found that 43% of women reported experiencing a sexual problem (most commonly, low desire) compared to 31% of men. A 2008 study again found that 43% of women reported sexual problems, and 22% reported that the problem caused them personal distress. In other words, one in five women is experiencing sexual-related distress. For some women, this is debilitating. And while we strongly caution against any numeric or goal-oriented definition of sexual health, it is worth noting that a gendered “orgasm gap” is well established; women in heterosexual relationships experience orgasm less frequently than men in heterosexual relationships (and less than both women and men in same-gender relationships), with cis women experiencing 22–30% fewer orgasms than cis men during heterosexual sex.
Women’s sexual health care
Despite the prevalence of sexual concerns among women, physicians rarely ask about them, and when patients themselves raise the issue, clinicians often give the impression of indifference. A 2003 study of 3,800 women found that over half said their physicians did not seem to want to hear about their problem, find it interesting, or appreciate its significance. Fifty-one percent indicated their physician was reluctant to treat the problem. When asked, clinicians cite time constraints, fear of offending the patient, inadequate training, a belief that it’s unimportant, and/or insufficient knowledge. And they may not have much to go on: a global study of medical society guidelines for sexual dysfunction found that 61% of such materials focused on men’s issues.
Another reason for doctors’ reticence may be that clinicians have limited educational opportunities for learning about sexual health. Sexual health and well-being are not adequately covered as part of a standard medical school education. Even though many medical societies recommend including sexual health education in medical training, programs training nurse practitioners, midwives, physicians, and physician assistants dedicated only 3–5 hours to human sexuality and sexual function education, according to a 2024 survey. In a study of US obstetrics and gynecology resident physicians, most agreed that sexual health training was important, but fewer than half could describe disorders of sexual function or list medications that impact it.
Clinicians have limited educational opportunities for learning about sexual health. Sexual health and well-being are not adequately covered as part of a standard medical school education.
Meanwhile, the FDA has struggled even to define sexual problems in women and has approved very few related products. Female anatomies and sexual experiences require unique exploration. We’re not saying there must be parity in the number of related products. But it’s worth asking why sexual function and pleasure are viewed by the medical field as a core element of men’s identity but not women’s. Perhaps it is because persistent cultural narratives suggest that women don’t care about sexual pleasure as much as men do. However, there is growing evidence that women’s and men’s biological capacity for sexual response is comparable, and the gender gaps we see in sexual desire may have to do with how desire is conceptualized and measured. There is still so much to learn. This points to another aspect of women’s sexual health: historically, compared to men’s, it has gotten significantly less attention from researchers. And these disadvantages in funding and research may translate to fewer promising biomedical approaches in the pipeline—a recent search of clinicaltrials.gov revealed 875 trials involving the terms “sexual dysfunction” and “males” compared with 487 for the same term and “females.”
Inadequate sexual health education
Jewel: When I was a resident, I had a fantastic mentor who blew my mind when she told me that women’s sexual medicine could be part of my future practice. I had six years of formal medical schooling under my belt before I learned that sexual health was a medical specialty. Once I determined the focus of my clinical and research career, I started to see gaps in my training and knowledge. I had learned so little about the vulva, clitoris, sex hormones, and sexual health in general that I had to seek out additional training in these areas. The lack of training and clinical guidance is, I feel, in part driven by a lack of research investment. Ultimately, I found mentorship through colleagues at the Mayo Clinic and training through organizations such as the International Society for the Study of Women’s Sexual Health, where I am now a fellow.
And yet, in 2024, every week, patients tell me the same things: “I didn’t know who to tell about this,” or “I thought this pain was normal and I had to live with it.” As a physician, I know that pain during sex is neither normal nor something that must be lived with—but clearly the news that FDA-approved treatments and other therapies are available has not made it into clinicians’ offices across the country. The medical establishment must do better by recognizing sexual health as important to a woman’s identity and general well-being, beyond its role in reproduction and cervical cancer screening.
The limited information and help for women’s sexual concerns inside doctors’ offices is mirrored, and likely intensified, in the culture outside them. Many states do not provide sex education in public schools. And in the 38 states (and the District of Columbia) that mandate sex or HIV education, the information provided is inadequate at best. A recent report by the Guttmacher Institute found that less than half of adolescents reported being informed of where to get birth control before they had sex for the first time. The trend is not even headed in the right direction; the same report found that adolescents were less likely to report receiving sex education on key topics in 2015–2019 than they were in 1995.
The next generation deserves a better introduction to the wonderful world of consensual adult sex.
And when sex education occurs, it is not really about sexual health. The benefits of sexual health education for students, according to the Centers for Disease Control and Prevention, include a delay in first sexual intercourse, a reduction in the number of sexual partners, a decrease in unprotected sex, and improved academic performance. Yep, that’s right: better grades.
The subject of sexual functioning, including arousal and orgasms, is largely left to the free market—i.e., pornography. According to a recent study of 1,300 teens, the average age of their first online porn experience is 12. Of the 44% who reported seeking it out intentionally, almost half describe online pornography as “helpful information about sex.” Researchers don’t know if there is any correlation between the use of porn and any specific harm, but it’s fair to say the next generation deserves a better introduction to the wonderful world of consensual adult sex.
Considering the frame
Sara: When my older sister got her first period, my mother burst into angry and fearful tears. So when my period came, I sensibly never mentioned it to anyone. But by the time I became a mother, I had read every page of Our Bodies, Ourselves, seen my cervix with my own speculum (it was an ’80s thing), and worked at Planned Parenthood. My husband and I referred to genitals by their accurate names and strategically placed “progressive” sex education books around the house. We answered every sex question that came our way in a manner we would have called “sex-positive.” So I admit feeling a bit defensive when my 20-something daughter mentioned that she never got a real sex education. Even at school? I asked. “Oh, that was just about disease, violence, and reproduction. Most sex has nothing to do with disease, violence, or reproduction.”
She sums up the problem perfectly. Why had our “blue” state not made that clear? Meanwhile, my students at Georgetown University, the Jesuit institution where I teach gender policy, know all about the dangers of sex. Many received abstinence-only sex education; most report learning only about contraceptives and sexually transmitted diseases. Every year, one or two of my students report making a public virginity pledge, promising to keep themselves “clean for marriage.” In both my daughters’ progressive upbringing and my students’ more traditional ones, I see a common theme: sex is about avoiding catastrophe rather than feeling pleasure.
Legal theorist Katherine M. Franke points out that women’s sexuality is often framed as either a matter of dependency or danger, rendering their “actual experience of pleasure invisible.” But the scientific, medical, and educational establishments need to recognize that when women’s sexuality is largely invisible in exam rooms and classrooms, confusion and fear can fill the void. The recent Supreme Court decision that allows states to ban abortions launched a wave of fear about the role of government in sexual and reproductive health, adding to the narrative that sex is a potential disaster for women.
In both my daughters’ progressive upbringing and my students’ more traditional ones, I see a common theme: sex is about avoiding catastrophe rather than feeling pleasure.
But progressive policies concerning women’s sexuality also echo gloom and doom. The movement to address campus sexual violence has been profoundly important in preventing real and ongoing harm. But Title IX training programs about how not to get raped or how to monitor the sexual safety of others frame sex as a lurking disaster. It’s also a depressing way to start college. The #MeToo movement called out powerful abusers and achieved some much-deserved justice, but it was a national conversation about women’s sexual shame and pain. Being a survivor is better than being a victim, but being neither would be better still.
Today, the way women’s sexual health is framed affects how it is researched and treated. If sexual health is largely about averting disaster rather than enhancing women’s pleasure and well-being, women’s sexual experiences will remain invisible. This will not bring about the advances in research, treatment, medical training, and medical care necessary to avoid no-woman’s-land.
Reframing sexual health
Understanding how messy medical, cultural, and historical threads converge in bedrooms, exam rooms, and research labs is key to progress. Once the problem is understood, it can be reframed to recognize women’s bodily autonomy and capacity for joy. Rather than sweeping pain and dysfunction under the rug or waiting for pharmaceutical companies to come up with a molecule (and an advertising campaign) that frames sexual health as a problem to solve, the research community and medical establishment should begin reframing women’s sexual health as a subject for research and investment.
The process could start with creating a broad research roadmap, similar to the National Academies of Sciences, Engineering, and Medicine’s recent report Advancing Research on Chronic Conditions in Women. This model is particularly useful for three reasons. First, the issue is female-specific. Women cannot rely on the many decades of research on males; sex-specific biological research is key to understanding the unique sexual needs of women.
Women’s sexuality is steeped in gender biases and structural sexism; who defines sexual “problems,” who is included in the research, and how sexual distress may be monetized will determine whether women benefit from research initiatives.
Second, women’s sexual health, like chronic debilitating conditions, does not fit well into current medical disease models. Women’s sexual health problems are rarely about an acute disease of a single organ system. Sexual health is not simply gynecologic health; it can involve complex interactions of autoimmune, infectious, neurocognitive, musculoskeletal, and pain disorders—the same web of complexity relevant to understanding multiple chronic conditions.
And third, women’s sexual health, like chronic health conditions, is impossible to understand without a health equity lens. Women’s sexuality is steeped in gender biases and structural sexism; who defines sexual “problems,” who is included in the research, and how sexual distress may be monetized will determine whether women benefit from research initiatives. For both sexuality and chronic conditions, social factors do more than impact the problem; sometimes, they are the problem. Here, the model provided by the National Academies’ chronic conditions report could help navigate the complex scientific and political challenges that will inevitably come with exploring women’s sexual health, paying attention not only to the circumstances that contribute to sexual problems but also to the benefits of addressing them. This goes beyond decreasing dysfunction, disease, and pregnancy; a sexual health agenda should focus on joyful well-being. This is not “just” a women’s problem—it could prove transformative for society at large.
Another important initiative should focus on training at medical regulatory organizations, medical schools, and medical associations. Curriculum standards need standardized sexual health and sexual well-being education. Clinical guidelines should systematically incorporate impacts on women’s sexual health. Funding for programs that train future sexual health clinicians could also help close access gaps. Similarly, assuring that insurers, including Medicare, raise compensation for physicians so that women’s health services are reimbursed at rates similar to men’s could both address disparities now and ensure more care in the future. Overall, when talking about sexual health, everyone in the medical, research, education, and policy spheres would do well to remember that it’s more than reproduction, disease, and violence. Sexual health matters because sexual health is health. Good health means women can show up for themselves, their families, their friends, and their community, which benefits all of us. But until sexual health is acknowledged as its own end goal, it will not be achieved.
Hildreth Meière’s Initial Drawing of “Air”
Hildreth Meière, Air, 1923, graphite on tracing paper, 7 x 9 inches. Gift of the Hildreth Meière Family Collection. Permanent collection of the National Academy of Sciences.
Art deco artist Hildreth Meière (pronounced me-AIR) created this preliminary study of Air for the National Academy of Sciences (NAS) building’s Great Hall, which opened in 1924. Depicted as an allegorical female figure wearing flowing garments surrounded by birds in flight, Air is one of the four classical elements—earth, air, fire, and water—represented in the Great Hall’s pendentives. Pendentives are supportive architectural elements that transition from the square corners of the floor below to the circular dome above. Meière’s finished work in the Great Hall includes three small medallions of each element, representing inventions related to that natural force. Air is depicted with a bellows, a sailboat, and a windmill.
Meière designed the iconography of the Great Hall dome, arches, and pendentives. This project was her first major commission, launching her forty-year career. Her work at the NAS building celebrates the history and significance of science, blending art nouveau and Greek and Egyptian influences with the art deco approach that became her trademark. Meière was one of the most renowned American muralists of the twentieth century. Working with leading architects of her day, she designed approximately 100 commissions in notable buildings, including Radio City Music Hall, One Wall Street, St. Bartholomew’s Church, Temple Emanu-El, and St. Patrick’s Cathedral in New York City, and the Nebraska State Capitol in Lincoln and the Cathedral Basilica of Saint Louis.
Preparing the Next Generation of Nuclear Engineers
In “Educating Engineers for a New Nuclear Age” (Issues, Summer 2024), Aditi Verma, Katie Snyder, and Shanna Daly’s vision closely aligns with recent sociotechnical advancements, particularly in the realm of artificial intelligence-powered simulations. Recent research has demonstrated the potential of virtual reality (VR), augmented reality (AR), and other immersive technologies to bridge the gap between technical knowledge and real-world application in engineering education. Studies indicate that VR and AR can significantly enhance spatial understanding and conceptual learning in complex engineering systems.
These technologies allow students to interact with virtual models of nuclear facilities, providing a safe and cost-effective way to gain hands-on experience. The simulations can adapt in real-time to student interactions, offering a more realistic and nuanced understanding of how technical decisions impact social and environmental factors. This fits perfectly with the authors’ goal of preparing engineers to collaborate effectively with communities and consider broader societal implications.
My recent work on modernizing education for the nuclear power industry underscores several key points that complement the authors’ vision. First is the need for rapid technological advancements in training methodologies to keep pace with industry evolution. The nuclear industry is facing a critical juncture where modernizing education and training is essential. The need for cost-effective approaches in training is paramount, especially with a projected increase in the number of nuclear plants and employees. This expansion necessitates scalable and efficient training methods that can accommodate a growing workforce while maintaining high standards of safety and competence.
The nuclear industry is facing a critical juncture where modernizing education and training is essential.
Second is the importance of addressing emerging demographic shifts and knowledge transfer challenges. As experienced professionals retire, there is an urgent need to transfer knowledge to the next generation of nuclear engineers and technicians. Interactive e-learning environments and mobile accessibility can facilitate this knowledge transfer, making it more engaging and accessible to younger professionals and directly supporting the goal of creating more empathetic and ethically engaged engineers.
Third is the critical need to foster a continuous improvement culture within engineering education. The changing work environment demands adaptable training solutions. The integration of VR and AR technologies in training programs can provide immersive, hands-on experiences even in remote learning settings. This approach enhances the learning experience and improves safety by allowing trainees to practice in risk-free virtual environments.
Even as cutting-edge technologies are reshaping training methodologies, offering a versatile tool kit to optimize effectiveness and stay at the forefront of industry standards, work remains. Key areas to explore include interactive learning approaches and e-learning environments, VR and AR simulations for immersive experiences, AI-powered simulations for realism and adaptability, precision learning technologies for enhanced effectiveness, personalized skill development paths and adaptive learning, gamification for engagement, dynamic learning analytics and predictive analytics for proactive enhancement, and natural language processing to enhance instant support.
By applying the lessons we’ve already learned and the knowledge future studies will certainly bring, and combining these advancements with the authors’ community-centered, ethically driven approach, we can truly prepare the next generation of nuclear engineers. This holistic approach to education and training will enhance the industry’s safety and efficiency and contribute to its long-term sustainability and public acceptance.
Olivia M. Blackmon
Director, ORAU Partnership for Nuclear Energy
Oak Ridge Associated Universities
Nuclear energy has been a “successful failure,” in the words of Vaclav Smil, a distinguished scholar at the University of Manitoba. Even though huge amounts of money and human resources have been invested in nuclear technology, its contribution to global power generation has been very modest and betrayed expectations. Global warming and heightened energy security awareness after the Russian invasion of Ukraine have given positive reinforcement to using nuclear power in some countries, and the International Energy Agency’s ambitious Net Zero Emissions by 2050 scenario projects nuclear power generation to more than double by 2050, to 67 exajoules (EJ) from the current 29 EJ. But renewable energy is growing much faster and is expected to play a much larger role in decarbonization, growing over that period to 306 EJ from 41 EJ—a seven-and-a-half-fold increase.
It appears, then, that nuclear power may not be a mainstay of future energy supply. Why is this the case? Aditi Verma, Katie Snyder, and Shanna Daly give us an answer.
Even though huge amounts of money and human resources have been invested in nuclear technology, its contribution to global power generation has been very modest and betrayed expectations.
Current nuclear power is based on large light water reactors. It aims to achieve economies of scale through reactor size, operating as baseload power. Nuclear power plants are usually located in remote areas, supplying electricity to distant urban users through high-voltage grid lines. This represents a large, centralized paradigm of energy supply. NIMBY—not in my backyard—resistance often arises in local communities where residents feel forced to take on more risks than benefits. Disastrous accidents such as those at Three Mile Island, Chernobyl, and Fukushima strengthen the arguments against nuclear power. Construction delays have doubled or tripled costs, and the major commercial reactor builders Westinghouse, Toshiba, and Areva have essentially collapsed. Another issue is radioactive waste. Without concrete plans for adequate waste disposal sites, it is irresponsible to pursue nuclear power. Combined, this means the current light water reactor paradigm is not sociopolitically sustainable.
Locals have a “right” to be heard, Verma, Snyder, and Daly argue. They maintain that the hoped-for rise of small modular reactors (SMRs), which are better suited to local needs, may provide a chance. Microsoft is considering using SMRs to power its data centers. Dow Chemical is considering a type of SMR for some of its production plants. Canada is considering SMRs for heat for remote mining. In Japan, melted-down radioactive debris left from the Fukushima accident can’t be transported away for treatment, and an innovative type of SMR developed at the Idaho National Laboratory appears capable of handling the material onsite. As with all types of reactors, however, the authors stress that local people and communities must be fully engaged, from design through implementation.
Fusion energy has long been a dream technology. It is expected to produce much less waste, with passive safety and no risk of weaponization. A new, sustainable paradigm may come true with fusion. There are many small-scale commercial fusion reactors under development. The authors’ group of young engineers is working to produce innovative designs with local communities. I sincerely hope they recover “trust, respect, and justice” among locals and bring a new future to nuclear power.
Nobuo Tanaka
Executive Director Emeritus, International Energy Agency
Chair of the Steering Committee of the Innovation for Cool Earth Forum
Aditi Verma, Katie Snyder, and Shanna Daly discuss their efforts to modernize the approach to teaching nuclear engineering, specifically moving beyond a singular focus on technical excellence to include engaging communities in participatory design and becoming fluent in ethical, equity-centered communication. Their work is timely as the international interest in deploying a new generation of commercial nuclear products has increased in response to both climate change and energy reliability concerns. Currently, many new commercial ventures look to deploy new nuclear systems in a much broader range of deployment scenarios beyond traditional gigawatt-scale electricity production.
The first commercial deployment of nuclear energy plateaued at supplying 20% of US electricity demand. That plateau was associated with cost increases as well as public pushback on further expanding the use of the technology. To successfully deploy additional nuclear energy in both traditional and new deployment scenarios (e.g., smaller plants closer to population centers, industrial heat uses, or direct off-grid supply to data centers) will require better engagement and input from the communities that will ultimately decide on hosting nuclear technology. Placing that emphasis as part of the basic functions of engineering, as Verma, Snyder, and Daly propose, is a critical step.
To successfully deploy additional nuclear energy in both traditional and new deployment scenarios will require better engagement and input from the communities that will ultimately decide on hosting nuclear technology.
Their work is historically significant for the University of Michigan. In 1948, U-M initiated the Michigan Memorial Phoenix Project, a campus-wide initiative that paid tribute to the 579 students and faculty who lost their lives in World War II. The project aimed at understanding the peaceful civilian uses of atomic energy. As former U-M president and dean emeritus of engineering James Duderstadt described: “It is important to recognize just how bold this effort was. At the time, the program’s goals sounded highly idealistic. Atomic energy was under government monopoly, and appeared to be an extremely dangerous force with which to work on a college campus. This was the first university attempt in the world to explore the peaceful uses of atomic energy, at a time when much of the technology was still highly classified.”
Given U-M’s leadership in understanding the first generation of the civilian uses for nuclear technology, it is heartening to see a new generation of young scholars leading national discussions about future uses of the technology and how engineers should approach their field.
Todd Allen
Glenn F. and Gladys H. Knoll Department Chair of Nuclear Engineering and Radiological Sciences
Chihiro Kikuchi Collegiate Professor, Nuclear Engineering and Radiological Sciences
University of Michigan
Science Policy: No Longer an “Exotic Nice-to-Have Thing”
The community of people who do science policy has long been something of a cipher. In the late 1960s, journalist Dan Greenberg reported that it comprised a “remarkably small number of people”—he estimated between 200 and 1,000. In 1964, political scientist Robert C. Wood described the “consistently influential” science policymakers as “an apolitical elite” of just a few hundred people. Unlike, say, people who did trade policy or diplomacy, conventional wisdom held that those doing science policy occupied a separate, rarified category. Dual practitioners of science and of policy, they worked behind the scenes, meeting at cocktail parties for those in the know, rather than at conferences with the hoi polloi. If they had an allegiance, it was presumed to be to science and its funding more than any political party or particular industry.
By 2020, when I joined this magazine, science policy was all but invisible in the mainstream media.
But was this description ever true? By 1984, when Issues was founded, the community was certainly much larger, and pathways into the field, such as fellowships, were already established. Still, by 2020, when I joined this magazine, science policy was all but invisible in the mainstream media. This amazed me given the ubiquity of regulated technologies in our lives, the magnitude of legislation like the Bipartisan Infrastructure Act and CHIPS and Science Act, and the myriad decisions that influence taxpayers’ $210 billion annual investment in research and development.
When the Issues editorial team was deliberating how to celebrate the magazine’s fortieth anniversary, David May, the chief communications officer at the National Academies of Sciences, Engineering, and Medicine, suggested doing a survey of Issues’ readers. We loved the idea—it seemed like a new way to extend the magazine’s conversation. And it would offer a chance to get a baseline sense of who the community is today and how we can better serve it, and perhaps establish a regular survey.
Eager to find out how the science policy community defines itself, the editorial team brainstormed survey questions about who practitioners are, where they work, what they do day-to-day, what motivates them, and which current and future issues concern them most.
We were particularly aware of the different generations that would take the survey, in part because we have them on our staff. Associate editor Kelsey Schoenberg had taken notice of just how often Issues writers refer to the Congressional Office of Technology Assessment (OTA), even though it ceased to exist in 1995—before she was born. So we included her intentionally cheeky question: “On a scale of 1 to 100, how much do you miss the OTA?”
And then, with the help of many friends and colleagues, we sent the survey out into the world.
Perhaps there are people who do science policy without realizing it.
Within a week or so, digital engagement editor Kim Quach got a note from John Andelin, once assistant director of the OTA, who wrote, “… on missing OTA. I’d have said 100, but shaved it a bit.” (Andelin explained his long and exciting career in a wonderful oral history with Caltech’s David Zierler in 2022.)
In his email, Andelin also raised a provocative thought: perhaps there are people who do science policy without realizing it. “In about 1972, while working on the Hill in a member’s office, I was asked to give a speech on ‘science policy.’ I mentioned it to a colleague, saying that I didn’t know anything about it. His response, after laughing, was that that’s what I did.” That, Andelin wrote, was when it dawned on him that helping the House’s Subcommittee on Energy and being an unofficial science advisor to a congressman counted as science policy. “I mention this,” he concluded, “because there may be fully involved science policy folks who might not label themselves that way and might not think to respond to your survey.”
In the end, 784 of you answered the survey, suggesting that there are thousands of people between 19 and 90 living the science policy life and identifying as such. I want to pause here to note that the survey took more than 14 minutes on average to complete, so this community has given up more than 180 cumulative hours of your precious time. Thank you. We appreciate your generosity, and we’re reading all the comments you left in the monster spreadsheet we compiled.
Today practitioners under 45 are increasingly likely to identify as women. And, although it was once considered elitist, more than half of respondents said the field has become more open and inclusive in recent years.
At this point, we don’t know whether our sample is representative of the larger community. But the data we have show—as former Issues editor Josh Trapani and our graduate student assistant Katherine Santos Pavez write in this issue’s Real Numbers—that the profession is expanding and evolving across many dimensions. Once largely a career for men, today practitioners under 45 are increasingly likely to identify as women. And, although it was once considered elitist, more than half of respondents said the field has become more open and inclusive in recent years. Interestingly, the youngest were most likely to agree, suggesting that this has been their direct experience.
Today, those in the field are far less likely to think of themselves as working for science (e.g., advocating for more research funding) than to see themselves as taking a wider societal role, such as influencing policy and regulation and bringing science to society. This reflects, I think, a growing awareness of science policy as an occupational identity, with some sense of mission and societal obligations.
In response to open-ended questions about their biggest concerns in science policy, an overwhelming number of people mentioned artificial intelligence and climate change. This makes sense, but I found the less common answers interesting as well. Fifty-two people mentioned aspects of space policy, including space mining, space exploration, satellite and debris management, and planetary defense from asteroids. Likewise, there were many mentions of brain science, research security, biosecurity, open science, misinformation, quantum computing, and information science. This list looks a lot like a table of contents for any edition of Issues—but I was surprised to find few mentions of antibiotic resistance, sustainable development goals, water, or solving community problems.
The answers display an introspection about what the field is trying to do, how it should conduct itself in the future, and the cost of failure.
The tensions of doing science and technology policy appeared in answers to our question about what topics would be of concern 40 years from now. On the whole, respondents foresee that tomorrow’s issues will be largely the same as today’s, but their reasons differ in ways that show the profession’s frustrations. For some, science’s challenge is evergreen. “The hot topic 40 years from now will be exactly the same as it is today: What innovations will generate more value from science?” one respondent wrote. But for other people, the topics will remain the same because society has failed to deal with the underlying problems. “The underinvestment and underutilization of American domestic STEM talent, especially people of color, is tied into broader societal struggles with racism, classism, elitism, sexism, etc., and there is no indication that these society-level issues will be resolved in the next 40 years,” someone else said. And for others, there is an awareness that science policy itself has failed to solve the problems it sought to address: “How did we end up in this state, and is there a path we could have taken 40 years ago to have ended up somewhere preferable?”
And so, with apologies to those who detested the question (and told us so), the answers display an introspection about what the field is trying to do, how it should conduct itself in the future, and the cost of failure.
In responding to a question about barriers for the field, some people took the opportunity to talk about their worries. For example, one respondent questioned whether science and technology policy practitioners are equipped for the job they do. “The S&T policy community is extraordinarily different than other federal policy communities in that it is still dominated by practitioners in the natural sciences who enter S&T policy jobs with no formal training in the field (myself included), and decisions about where to invest are dominated by expert opinion and rarely by policy analysis, given the paucity of systematic data about the ultimate uptake and use of research outputs.”
Similarly, another respondent wrote that it’s time for science policy to abandon its exceptionalism. “By separating science and society, those looking to advance policy constantly have to lead with exposition on what science is, why it’s important, etc. This results in an elitist presenter/audience dynamic, which will hamper effective policy, due either to alienation or ignorance (often both). It may be a subtle idea, but ‘science policy’ should be seen as a flavor of public policy, just as ‘defense policy’ or ‘environmental policy’ or ‘fiscal policy’ are—not some exotic nice-to-have thing.”
The bigger and more interesting question is: What does this community want its future to look like?
Perhaps the route to becoming just another kind of policy can be found by bridging that divide between science and society. One respondent wrote that the role of science needs to be reframed: “The need is NOT of scientists in the public square; the need is for more public in the science square.” And another saw that the field needs to understand the role it already plays in society, “with the steady march of technology over the past forty years having steadily empowered individuals (and their ability to do societal harm)” requiring a reconsideration of the balance between individual rights and societal obligations. In a similar vein, another writer called for a more active engagement with what science’s social goals are: “What kind of society do we want and how does science contribute to that?”
OK, you may be wondering, so how much did survey respondents miss the OTA? Out of 672 people who answered the question, 40% named numbers between 80 and 100. This suggests that the OTA is not forgotten, but it may be joining the pantheon of science policies past, alongside the Manhattan Project, the Apollo moon shot, SEMATECH, and Operation Warp Speed. The bigger and more interesting question is: What does this community want its future to look like?
A clean version of the survey data is available for anyone who would like to work with it. Look for notifications about further discussions of the survey in our Friday newsletter. And we hope to do another survey in future years and welcome your input on what questions it should contain.
Who Does Science and Technology Policy?
To celebrate the magazine’s fortieth year of publication, the Issues team wanted a better understanding of who does science and technology policy, and what it even means to say you’re doing it. Between May 15 and August 15, 2024, Issues ran a survey to gain a deeper understanding of the field of science and technology (S&T) policy today, including career paths, motivations, activities, and opinions on how the field is changing.
The survey was disseminated in Issues’ newsletters and social media, and by partners at the National Academies of Sciences, Engineering, and Medicine; Arizona State University; FYI at the American Institute of Physics; the National Science Policy Network; and other organizations. This method of dissemination may have introduced bias in who responded to the survey—survey-takers reached via science policy networks are already likely to self-identify as members of the S&T policy community—but 784 people in the field took the time to fill it out. To keep the survey brief, most terms used were not defined, which may have encouraged subjective interpretation. It’s also important to note that the differences described were not tested for statistical significance, and that some results may not sum to 100% due to rounding.
Who does science and technology policy?
Fifty-five percent of survey respondents identified as male and 43% as female. The remaining 2% identified as nonbinary or preferred not to identify gender. These results are closer to the gender divide across the entire US civilian labor force (53% male, 47% female) than to the representation of women in all STEM occupations (only 35% in 2021). Respondents ranged in age from late teens to their 90s. Women were increasingly represented in younger age groups, comprising only 16% of respondents 65 years old or older, but 64% of those less than 45 years old (Figure 1). This mirrors trends in the overall college-educated labor force, where women now outnumber men.
Women were increasingly represented in younger age groups, comprising only 16% of respondents 65 years old or older, but 64% of those less than 45 years old.
Survey respondents hailed from 40 different countries on all continents except Antarctica, though the vast majority (83%) were from the United States. Within the United States, respondents came from 46 states and the District of Columbia (Figure 2).
The majority of survey respondents (71%) indicated science and/or technology policy to be their part- or full-time job. Nearly three-quarters of respondents reported that they work in one of three sectors: the federal government (27%), universities (27%), or nonprofits (18%) (Figure 3). Women made up close to half of the respondents in each of these sectors but were underrepresented in others (Figure 4). Age distribution varied relatively little across sectors, although respondents from industry tended to be older than respondents from other sectors.
What does a science and technology policy career look like?
There are many pathways into science policy careers—as evident from the stories told by guests on Issues’ podcast series Science Policy IRL. Among survey respondents, the most common pathways were through “university research” and “working at a federal or state agency.” But career pathways varied by age (Figure 5). Younger S&T policy professionals were more likely to have earned degrees in science policy. They were also more likely to have started their involvement through a fellowship or engagement with advocacy networks. Older professionals were more likely to have become involved through their work, whether at federal or state agencies, think tanks, or jobs on Capitol Hill. This may indicate that entering the field directly through a job, without formal credentials or experience gained outside of employment, is becoming more difficult. It is worth noting that respondents were able to select multiple options for this question, so further analysis of the combinations survey-takers chose could reveal other patterns.
In terms of what science and technology policy careers entail, respondents on average indicated between four and five different activities constituting their typical work. The two most frequently selected typical work activities were—unsurprisingly—“communicating science for policy” and “policy analysis” (Figure 6). Work activities differed between sectors (Table 1), with “policy analysis” the only task that appeared in the top five work activities across all the selected sectors. “Scientific research” and “education” dominated the university sector responses, while “consulting” was important in industry and think tanks. A few activities, like “agency coordination,” “convening,” and “management,” appeared in the top five for only one of the sectors. Variation in the selection of typical work activities based on sector of employment suggests that different types of S&T policy jobs emphasize different activities and require specific combinations of skills. People who say they “do S&T policy” may in fact be spending their days in quite different ways in diverse roles that fill specific niches within the S&T policy ecosystem.
How is the field changing?
Most respondents identified their main motivations to work in S&T policy as “influencing policy and regulation” and “bringing science to society” (Figure 7). The first answer squares with a more traditional interpretation of what it means to work in public policy (in any field), and the second shows a somewhat deeper engagement with the evolution of the relationship between science and society—an ongoing conversation in Issues’ pages over the past four decades. These motivations differed little by gender and age, suggesting relatively common objectives across the field.
Most respondents identified their main motivations to work in S&T policy as “influencing policy and regulation” and “bringing science to society.”
When asked to reflect on how S&T policy has changed in the last five years, well over half of respondents agreed the field has exhibited substantial growth and become more internationally connected (Figure 8). Just over half (51%) agreed the field has become more open and inclusive. Interestingly, there was very little difference between men and women in their answers to this question. Perhaps counterintuitively, the youngest age group (18–34 years) was more likely than the oldest (75-plus years) to strongly agree or agree that the field has become more open and inclusive (56% versus 47%). This relative optimism could indicate a positive shift in confidence about future opportunities in the field.
A slim majority of respondents (52%) were neutral on whether the field has shifted to a more decentralized and regional focus. However, respondents from think tanks and state government were more likely to agree that the field has become more decentralized, which suggests varying perceptions about S&T policy debates and priorities depending on where an individual is situated within the S&T enterprise (Figure 9).
What’s next for S&T policy?
When Issues asked a series of open-ended questions on the future of the field, it became evident that the community is grappling with multigenerational issues. When asked which areas of S&T policy are emerging and what will preoccupy the field 40 years from now, responses focusing on artificial intelligence (and its governance, regulation, societal impacts, interdisciplinarity, privacy issues, and ethical concerns) and climate change topped both lists. However, a fair number of respondents chose not to speculate on future issues, with many answering with some version of “I have no idea.” Scientists—and by extension, many science policy professionals—do not like to guess.
The themes that emerged from the answers to the final question, about barriers standing in the way of effective S&T policy, will be familiar to Issues readers: political and ideological challenges; lack of funding and resources; communication gaps and public engagement issues; bureaucratic and institutional hurdles; lack of scientific literacy and expertise in policymaking; entrenched interests, elitism, and lack of diversity and inclusion; and failure to reward and recognize contributions to the field. The significance of this range of responses was captured in a single answer from a survey-taker who called out the survey designers for asking the wrong question, noting that the community respects “many meaningfully different definitions of what is effective” in science policy. A better understanding of who is doing what—and why—is key to maintaining the vitality of the increasingly large, diverse, and important field of S&T policy.
Fierce Planets
Gail J. Higenell, Earth Reframed: The Seen and the Unseen, 2023, commercial and hand-dyed cotton fabrics, heat-manipulated materials, embroidery floss, and a frame built from repurposed pine wood, 29 x 44.5 inches.
Iron snow, helium rain, and diamond icebergs might sound like science fiction, but they are real phenomena occurring within planets due to extreme heat and pressure. In their recent book, What’s Hidden Inside Planets?, planetary scientist Sabine Stanley and science journalist John Wenz guide readers through the enigmatic realms beneath planetary surfaces. They delve into how the interiors of Earth and other planets are intricately linked to the formation and regulation of atmospheres, oceans, earthquakes, and volcanoes. The book and Stanley’s research into these powerful forces inspired the artwork in the traveling exhibition Fierce Planets.
The juried exhibition is a collaboration between Studio Art Quilt Associates and the Johns Hopkins Wavelengths science communication program. Fierce Planets brings together work by artists from around the globe, each interpreting the mysteries of planets and space through fiber art. Their creations range from traditional quilts to fabric assemblages and soft sculptures, all inspired by Stanley’s research. Out of nearly 200 works submitted, 42 were selected to be part of the exhibition.
Fierce Planets isn’t just about aesthetics; it’s about fostering a deeper understanding of and connection to the universe, inviting viewers to explore the beauty and complexity of planets through a unique lens.
Kathleen Decker, Marsquake!, 2023, cotton fabric, cotton batting, photos printed on fabric (image credit: JPL/NASA), cotton, silk, and polyester quilting thread, 25.5 x 29 inches.
Carolina Oneto, Imaginary Places IV, 2023, cotton fabrics, cotton batting, threads for piecing and quilting, 56 x 55 inches.
Anne Bellas, Soleil et Lunes (Sun and Moons), 2020, hand dyed, printed, machine pieced, machine quilted, cotton sateen, commercial fabrics, 46 x 36 inches.
Dolores Miller, Pale Blue Dot 2, 2023, commercial cottons, polyester sheers and netting, polyester and hologram mylar threads, metallic foil, 19 x 36 inches.
Dianne Firth, Saturn Observed, 2023, wool batting, polyester net, polyester thread, 29.5 x 29.5 inches.
Mieko Washio, Cosmos, 2020, appliquéd, machine quilted, hand quilted, hand embroidered, cotton, satin, yarn, 64 x 59 inches.
Shin-hee Chin, Cosmic Threads: Connections of Neurons and Galaxies, 2023, recycled blankets, perle cotton threads, polyester threads, 21.5 x 43 inches.
LEFT: Geneviève Attinger, The Improbable Modeling, 2023; cotton; plastic; cardboard (book cover inner frame); cotton strap with metal buckle; metallic, polyester, and cotton embroidery threads; 14.5 x 14 x 14 inches. RIGHT: Lisa Flowers Ross, Titan Swath, 2023, fabrics hand-dyed by the artist, yarn, thread, 64 x 26.5 inches.
Claire Passmore, Hot Stuff, 2023, cotton, silk, velvet, felt, tulle, nonwoven fabrics, threads, wool, continuous zip, PVC, fiber filling, dye, acrylic and enamel paint, plastic bottles, gold leaf, mica powder, glue, steel wire, 92.5 x 53 x 49 inches.