GCR Policy Newsletter (5 September 2022)
Nuclear winter, catastrophic climate change and AI governance
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It covers policy efforts around the world to reduce the risk, as well as policy-relevant research from the field of existential and global catastrophic risk studies.

GCR in the media
“Two British volcanologists have expressed concern about a potential volcanic eruption that could devastate civilisation. According to the researchers, the planet must be prepared for events that could break supply chains and cause famine, as well as multi-billion-dollar losses equivalent to those caused by the COVID-19 pandemic.” The end of the world? Volcanologists warn of eruption risk that would devastate global civilisation
“Jason Matheny had a short list of priorities as he started work as RAND's new president and CEO earlier this summer. Strengthen the competitiveness of democracies for the 21st century. Prevent technological disaster….He hopes to be remembered for one thing…: ‘reducing the risk of human extinction by .00000001 percent or greater,’ he said. ‘Hopefully greater.’” ‘The Future Could Be Brilliant’: RAND's CEO Is an ‘Apocaloptimist’
“What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present. I think three things could help here. First, regulators and politicians need to get up to speed….Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks….Third, the news media needs to do a better job of explaining A.I. progress to nonexperts.” We Need to Talk About How Good A.I. Is Getting
“[The existential risk] that’s on my mind the most, mostly because I think it’s the one where we actually have a real chance to move the needle in a positive direction, or more specifically, stop some bad things from happening…is biorisks.” Liv Boeree: Poker, Game Theory, AI, Simulation, Aliens & Existential Risk, Lex Fridman Podcast #314
Latest policy-relevant research
Avoiding nuclear winter and other sun-reducing catastrophes
Recent nuclear winter research shows that a “limited” or “regional” nuclear war could be more dangerous than previously thought. A war in which less than 5 per cent of the world’s nuclear weapons were detonated would lead to the deaths of hundreds of millions, perhaps even billions, according to a report by International Physicians for the Prevention of Nuclear War (IPPNW). (16 August 2022)
Policy comment: By summarizing the latest research on the issue, this report serves as a useful primer for policy-makers on the catastrophic impacts of a limited nuclear weapons exchange. For the IPPNW, preventing the use of nuclear weapons is critical, including the removal of launch-on-warning postures (also known as “hair-trigger alert”). It lends its support to the United Nations Treaty on the Prohibition of Nuclear Weapons (TPNW) as the legal and moral foundation for the eradication of nuclear weapons. Beyond long-term disarmament efforts, policy advocates might also look to other policy mechanisms for reducing the likelihood of a nuclear war starting or escalating.
Some island nations in the Southern Hemisphere, such as Australia and New Zealand, might survive a severe sun-reducing catastrophe, like a nuclear winter. But New Zealand is more vulnerable than previous research suggested due to its reliance on international trade, precarious aspects of its energy supply, and shortcomings in its manufacturing of essential components, according to a research article preprint by Matt Boyd and Nick Wilson. (5 August 2022)
Policy comment: Boyd and Wilson’s findings show that Australia and New Zealand, among other potential refuges, need to acknowledge the role they could play in a global catastrophe and look for ways to build resilience across a range of scenarios. The authors suggest that New Zealand should update its 1980s Nuclear Impacts Study and that resilience analysis and efforts should be overseen by a central entity, such as a commissioner for extreme risks. A major shift in nuclear policy in Australia or New Zealand is unlikely in the next decade. But, as a starting point, policy advocates might want to engage the respective defence departments and emergency management agencies to understand their current thinking on resilience to nuclear winter and other major disruptions to food and energy supplies.
Grappling with catastrophic climate change
Our understanding of the worst-case scenarios for climate change is poor, according to Kemp et al. A research agenda for catastrophic climate change is needed, focusing on four key strands: understanding extreme climate change dynamics and impacts in the long term; exploring climate-triggered pathways to mass morbidity and mortality; investigating social fragility, such as vulnerabilities, risk cascades, and risk responses; and synthesizing the research findings into integrated catastrophe assessments. (1 August 2022)
Policy comment: The authors’ proposed research agenda could help drive improved policy responses. Understanding worst-case scenarios could help galvanize action; as the authors note, nuclear winter research in the 1980s increased public concern and helped drive disarmament efforts at the height of the Cold War. The agenda’s results would also help policy-makers understand the systemic vulnerabilities where greater resilience is needed. Governments themselves could conduct this research; the authors suggest a special report by the Intergovernmental Panel on Climate Change (IPCC), which has not yet given focused attention to catastrophic climate change.
Society lacks the psychological infrastructure needed to manage the potentially existential threats of climate change, according to Devin Guthrie, a clinical psychology PhD student at Texas A&M University. Catastrophic climate change, or ‘eco-apocalypse’, has also subsumed the threat of nuclear war, perhaps because people feel they have more agency over a nuclear apocalypse, while the climate crisis seems more frightening. The paper looks to psychological frameworks around grief and the acceptance of death to inform responses to existential risks. (15 August 2022)
Policy comment: Governments, as well as researchers, should be careful in how they communicate about existential risk because, rather than catalyzing action, the public might respond with avoidance or apathy. Engaging psychologists, bereavement counselors and hospice workers might provide lessons for how policy-makers and the field discuss existential risk with the wider public.
The challenges of governing AI domestically, regionally and internationally
Short-term harms from extant AI systems may magnify, complicate, or exacerbate other existential risks, over and above the harms they are inflicting on present society, according to a conference paper by Benjamin Bucknall and Shiri Dori-Hacohen. Apart from the risk of advanced AI, current and near-term AI could increase existential risk through its impacts on power dynamics between states, corporations and citizens, and on the transfer of, and access to, information. (27 July 2022)
Policy comment: Governments will need to grapple with how AI will shape their relationship with each other and with corporations and citizens. The shift in power dynamics will have implications for how to govern and regulate, how to carefully exercise power and how to build consensus around common challenges. It’s not clear that any government is ready for the tectonic shift that AI represents over the coming decades.
As part of its ambition to be a global AI superpower by 2030, China is keen to play a significant role in emerging global AI governance, but it will be very difficult for China to realize its AI leadership ambition at the current stage, according to an academic paper by Jing Cheng and Jinghan Zeng. To achieve this ambition, China has made a wide range of domestic and international efforts to prepare for its leadership. China’s moves are driven not only by pragmatic governance needs but also by the desire to be a norm-shaper, if not norm-maker, in the future global AI order. (8 August 2022)
Policy comment: As the article states, global governance of AI is going to be a very challenging area due to the technological competition and ideological differences between the US and China. Governance could cover a range of issues and applications, including privacy, ethics and risks. Although the major powers may disagree on the first two, risks could be an area of cooperation. However, China’s views on the risks of AI remain unclear.
Policy database updates
The GCR Policy team has compiled a database of policy ideas put forward by the field of existential and global catastrophic risk studies. It currently contains over 800 ideas, of which over 300 are listed publicly here. We will continue to update the database with the latest research and policy ideas. Contact us for more information or for access to the full database.
This newsletter is prepared by the team at www.GCRpolicy.com. Subscribe for twice-monthly updates on the latest policy news and research on global catastrophic risk.