Hello folks. There will be no end-of-week roundup this week because I am otherwise occupied spending too much time enjoying the Madrid sun. Please instead accept this essay, and, as always, let me know what you think by commenting below or emailing me at hp464@cam.ac.uk.
Last time around I wrote about public opinion and AI. Specifically, to what extent we can trust polls to be representative of public opinion, and whether developers and policymakers ought to use public opinion to guide the development of AI governance measures and mechanisms. I argued that the answer to the first question is ‘partly’, owing to a combination of factors like question wording and order, sample size, how views tend to change over time, oversimplification, and what pollsters like to call ‘social desirability bias’.
To see what I mean, consider a recent survey from the Harris Poll that found that 48% of respondents were worried that machines could ‘turn on humans'. We might hypothesise that the gap between this figure and the results from the AI Policy Institute (AIPI) poll I discussed in the last essay, which reported that 83% believe AI could accidentally cause a catastrophic event, was influenced by a combination of social desirability bias (in this case, not wanting to overplay risks that sound a bit sci-fi) and by tighter questions related to the nature of said ‘catastrophic’ event.
Perhaps more important, though, is that the AIPI poll offers five response options, including 'somewhat likely', 'extremely likely', and 'very likely', with respect to whether AI 'could' cause a (non-specific) catastrophic event. The point is ultimately that question composition, in this case offering a number of slightly different options and adding them together, directly influences the sort of answer that we get.
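To make the aggregation point concrete, here is a toy sketch. The response shares below are invented for illustration and are not AIPI's actual breakdown; the point is just to show how summing several 'likely' options produces a far larger headline figure than any single option on its own:

```python
# Toy illustration: how summing response options inflates headline figures.
# The shares below are invented for illustration; they are NOT AIPI's data.
responses = {
    "extremely likely": 0.18,
    "very likely": 0.25,
    "somewhat likely": 0.40,
    "not very likely": 0.12,
    "not at all likely": 0.05,
}

# Reporting only the strongest option gives a modest figure...
strongest = responses["extremely likely"]

# ...while summing everything at or above "somewhat likely" gives the
# kind of large headline number a poll write-up might lead with.
headline = sum(
    share for option, share in responses.items()
    if option.endswith("likely") and not option.startswith("not")
)

print(f"'Extremely likely' alone: {strongest:.0%}")      # 18%
print(f"All 'likely' options combined: {headline:.0%}")  # 83%
```

The same underlying answers support either a cautious or an alarming headline, depending entirely on how the pollster chooses to group them.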
Reading through the results got me wondering about why public opinion is so shaky, and what exactly makes the public consciousness (if such a thing exists) tick. Because polls are such tricky beasts, I wanted to write about the role that a decidedly more qualitative style of analysis can play in helping us to understand what people think about science and technology.
Rather than looking at the measurement and utility of public opinion, this week’s post is going to focus on the role of popular culture in shaping perceptions of science and technology. At the risk of agitating anyone weary of yet another parallel between AI and nuclear technology, I’m going to look at support for (and opposition to) nuclear technology between the 1950s and 1980s.
The motivation for doing so is that the anti-nuclear movement proved to be particularly influential, resulting in moves away from nuclear power despite governments the world over heralding it as an extraordinary source of energy that would usher in an era of prosperity. For reasons that I’ll discuss in a moment, that didn’t happen.
For this post, I’ll look at two major moments: the release of the 1979 film The China Syndrome just days before the accident at Three Mile Island, and the 1986 disaster at the Chernobyl nuclear power plant in Ukraine. I’ll introduce a few conceptual tools for thinking about public sentiment, technology, and the future, and make connections with AI safety to round things off.
Understanding the future
Before that, though, I am going to start with the introduction and development of nuclear technology in the 1950s. I’ve written about the history of nuclear technology through the lens of the IAEA, for those who’d like to know about the political environment surrounding calls for international governance, but for our purposes, core to understanding the introduction of nuclear technology is Atoms for Peace. The term refers to a landmark speech given by President Eisenhower in 1953, as well as to a broader initiative including measures such as the declassification of nuclear power information and the commercialisation of atomic energy.
The US State Department Panel of Consultants on Disarmament, which included Robert Oppenheimer and affiliates from academic and civic groups, released a report on nuclear protocols in January 1953. The panel urged the US government to adopt a strategy of greater transparency with the American public about both the promise and the hazards of nuclear technology. Explicitly, the report (p.43) recommended "a policy of candor toward the American people—and at least equally toward its own elected representatives and responsible officials—in presenting the meaning of the arms race." The panel reasoned that, once the Soviet Union had developed its own nuclear capabilities, the US could no longer preserve its monopoly on nuclear weapons. In other words, the nuclear age was defined by mutually assured destruction and the promise of abundant energy.
Accepting the recommendations of the report, US President Dwight D. Eisenhower gave a speech in 1953 to the United Nations General Assembly in which he proposed the creation of an international body to regulate and promote the peaceful use of nuclear power. The speech, which became known as the Atoms for Peace address, attempted to balance fears of nuclear proliferation with hopes for the peaceful use of uranium in nuclear reactors:
The governments principally involved, to the extent permitted by elementary prudence, should begin now and continue to make joint contributions from their stockpiles of normal uranium and fissionable materials to an international atomic energy agency. We would expect that such an agency would be set up under the aegis of the United Nations…The more important responsibility of this atomic energy agency would be to devise methods whereby this fissionable material would be allocated to serve the peaceful pursuits of mankind.
Central to the introduction of nuclear power were conceptions of the future: the promise of limitless clean energy for the home, the creation of new technologies, and new ways to travel were all defining characteristics of the nuclear age. Exemplified by the Atoms for Peace address, a constellation of government announcements, news reports, and entertainment media heralded nuclear power as a revolutionary new technology throughout the 1950s. Lewis Strauss, chairman of the Atomic Energy Commission, famously said of the introduction of atomic energy: "It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter.”
Shaping the future is ultimately about influencing the present. Futures are, according to Nik Brown and Brian Rappert, generated through an “unstable field of language, practice and materiality” in which actors and agents compete for the right to represent progress and deterioration. In science, researchers argue that visions of potential futures are deployed to stimulate a desire to see potential technologies become realities.
In one of the canonical texts on science and futurity, Science and Technology Studies scholars Sheila Jasanoff and Sang-Hyun Kim used the Atoms for Peace programme as an organising principle around which to develop the concept of the ‘sociotechnical imaginary’. They argued that such imaginaries exist as collective narratives and visions of social life that are shaped by technological and scientific advancements. Rather than existing as individual creations, they are described as collectively imagined forms of social life and social order that are attained through, and supportive of, advances in science and technology.
Jasanoff argues that the state creates imaginaries to shore up its position as a champion of science and technology, and in turn, to harness landmark national projects to reinforce conceptions of nationhood by describing what the future could look like and prescribing what the future should look like. The basic idea is that imaginaries permeate the sphere of public policy, reconfiguring the socioeconomic environment by determining which citizens are included—and which are excluded—from the benefits of technoscientific development.
With respect to Atoms for Peace, they made the case that the programme propagated imaginaries wherein nuclear technology was not just a tool for mass destruction (as seen in the bombings of Hiroshima and Nagasaki), but an essential technology for the progress of humanity.
Amidst positive visions of the future premised on the use of nuclear technology, the commercialisation of the technology quickly picked up steam globally. In the USA, efforts were led by companies like Westinghouse and General Electric, which pioneered the development and operation of various reactor types from the 1960s onwards. Further north, Canada created reactors that utilised natural uranium and heavy water, commencing operations in 1962.
During the same period, France initially experimented with so-called ‘gas-graphite’ reactor designs, with commercial operations beginning in 1959, before firmly adopting cost-effective standardised pressurised water reactors (PWRs). Meanwhile, the Soviet Union initiated its nuclear power programme in 1964, inaugurating plants with unique designs, including the world's first large high-power channel reactor in 1973, and expanding its nuclear footprint with new reactor types.
Unnatural disasters
The history of public attitudes towards nuclear technology (and of the technology itself) is long and complex, so I am necessarily forced to leave a lot on the table. To cut a long story short, experts generally agree that the initial speedy expansion in the 1960s was slowed by increasing competition from cheaper alternatives, including natural gas, and by major nuclear disasters that damaged trust in the technology. While I do not disagree with this analysis, I do suspect that part of the problem was that nuclear energy had an image problem. Its potential to inspire hope could not be disentangled from its potential to generate fear.
That was the environment into which The China Syndrome, a popular 1979 film starring Michael Douglas and Jane Fonda, was released. In the film, a reporter finds herself embroiled in a conspiracy by officials bent on keeping secret an accident at a nuclear plant. The film’s name refers to a theoretical nuclear meltdown scenario in which reactor components melt down through the containment structures and into the Earth, with the (somewhat unrealistic) idea being that the material would melt ‘all the way to China’.
“If the core is exposed for whatever reason, the fuel heats beyond core heat tolerance in a matter of minutes. Nothing can stop it. And it melts down right through the bottom of the plant, theoretically to China.” — Dr. Lowell, The China Syndrome
In reality, though, a nuclear meltdown would not result in a reactor core melting through the Earth's crust (though it would result in a highly damaging release of radioactive materials into the surrounding environment). Nonetheless, the film was successful in gripping the public imagination by popularising safety issues associated with nuclear technology.
Now, The China Syndrome didn’t shift attitudes in a vacuum. What was notable, however, was the proximity of the film’s release on 16 March 1979 to the Three Mile Island incident on 28 March of the same year. The accident, a partial meltdown of the Three Mile Island, Unit 2 (TMI-2) reactor on the Susquehanna River near the Pennsylvania capital of Harrisburg, is generally considered to be the worst accident in U.S. commercial nuclear power plant history. Reflecting on the relationship between The China Syndrome and the disaster, the historian Elisabeth Roehrlich explained: “The accident received international attention, not least because it occurred 10 days after The China Syndrome premiered in cinemas. The movie’s plot of a reporter witnessing the failure of a nuclear power plant and becoming entangled in a subsequent cover-up had surprising similarities to the real-life accident.”
Futures aren't just about far-flung science fiction scenarios. While I am not making the case that The China Syndrome was the most significant factor in changing public attitudes towards nuclear energy, its collision with the Three Mile Island incident offers a window into the way in which actors can describe a particular future in which the relentless pursuit of nuclear technology could potentially spiral into uncontrollable disasters complete with cover-ups and conspiracies.
On the seven-point International Nuclear Event Scale, the Three Mile Island incident is rated as Level 5 or as an Accident with Wider Consequences. By way of some comparison, there have been two Level 7 accidents to date: the 2011 Fukushima nuclear disaster, and the Chernobyl disaster in April 1986.
As lots of you will know, a sudden surge in power levels led to a series of explosions in reactor number four during a safety test at the Chernobyl Nuclear Power Plant in Ukraine. The explosions released a significant amount of radioactive particles into the atmosphere, which spread over large parts of Europe and resulted in casualties amongst the plant workers as well as long-term adverse health effects and environmental contamination. The surrounding area, including the city of Pripyat, remains largely uninhabited due to high radiation levels to this day. The disaster raised serious concerns about the safety of nuclear power plants worldwide, leading to policy changes and increased scrutiny on nuclear safety measures.
The ramifications were felt immediately. Ten weeks after Chernobyl, the German terrorist group known as the Red Army Faction assassinated Karl Heinz Beckurts, the West German nuclear industry manager and former member of the IAEA’s Scientific Advisory Committee (SAC). In neighbouring France, meanwhile, the diplomat Gérard Errera warned against putting “nuclear energy itself on trial.” Speaking on behalf of the pro-nuclear government in Paris, which by this point was building a fleet of nuclear power stations, he argued against confusing nuclear energy “with an inexorable evil which could not be controlled by human hand.”
In the aftermath of the disaster, the International Atomic Energy Agency moved to calm publics around the world. After a post-accident review meeting with Soviet officials, IAEA officials attempted to downplay the disaster at Chernobyl. For example, a senior official told a journalist from France’s Le Monde newspaper that even if there were a Chernobyl-like accident every year, he would still consider nuclear energy an “interesting” source of power.
In the weeks and months following the Chernobyl incident, “a nuclear accident anywhere is a nuclear accident everywhere” became a slogan for efforts to improve nuclear safety worldwide (and allegedly, in the offices of the IAEA). It aimed to underscore that any nuclear accident, like the one that had occurred in Ukraine, had implications for the nuclear energy discourse around the world.
But the IAEA wasn’t the only party commenting on the disaster at the international level. The Soviet Union had an interest in pushing an interpretation that the careless action of plant operators had been responsible for the accident. This position proved to be a difficult one to maintain given that Soviet authorities knew early on of the dangerous flaws in the reactor design. In a 1993 essay for Foreign Affairs, Russian nuclear physicist Sergei P. Kapitza made the case that “from its inception the Soviet nuclear effort, aimed primarily at making an atomic bomb, was carried out with utter disregard for human life.”
Meanwhile, US officials held Soviet technology responsible. The calculation made was that, had the accident been the result of simple human error, it could happen in the United States (or indeed anywhere else in the world). While the supporters of nuclear power attributed the accident in Ukraine to what they saw as inferior Soviet technology and management, the 2011 disaster at the Fukushima nuclear power plant in Japan demonstrated that even running the most technically mature nuclear reactors is not without risk.
Wrapping up
Nuclear power was once seen as the ultimate energy source. The early 1950s were defined by utopian visions of energy abundance and unbounded prosperity fuelled by cheap nuclear electricity and an accelerating pace of technological advancement around the world. But seventy years on, nuclear power only supplies around 10% of global electricity.
For those of us interested in AI safety, governance or public policy, the story of nuclear energy is an instructive one. The point is ultimately that we ought to pay attention to visions of the future of AI not for their ability to accurately forecast, but instead for their persuasive potential. Perhaps more importantly, nuclear technology shows that social, not technical, factors determine the adoption of new technology.
Of course, the comparison isn’t perfect. AI has had no major disaster in the same vein as Chernobyl or Three Mile Island. Where nuclear technology had an international standard bearer to defend it, AI does not. And commercial AI leads the public sector, while the relationship between the state and corporations was more equitable in the case of nuclear power.
But there are similarities, too. Few like to admit it—probably, I suspect, because it feels unscholarly—but science fiction has done more to shape public perceptions of AI than almost any other single factor. This is neither good nor bad. Sometimes AI is depicted in highly flattering ways; on other occasions, as dangerous or corrupting.
I will write more about how these depictions shape AI development, but for now, my overarching argument is a simple one: all the technical, organisational, or industrial gusto in the world doesn’t matter much if there is insufficient public will to adopt a technology. Even the logic of profit wasn’t strong enough to overcome opposition to nuclear energy in the wake of the Chernobyl—and later, Fukushima—crises.
For nuclear power, fear outweighed promise as time went on. What is the house of the future compared to nuclear disaster? What is a nuclear-powered car weighed against the atomic bomb? The relationship between awe and anxiety is an asymmetrical one. Years of effort to widen the boundaries within which public policy can take place can be undone in a handful of moments by those seeking to wrest back control of the future.