GCR Policy Newsletter (14 November 2022)
Security implications of AI, resilience to natural risks and worst-case climate scenarios
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field.
Policy efforts on GCR
In the Consolidated Appropriations Act, 2022, the White House Office of Science and Technology Policy (OSTP) was directed by Congress to develop a five-year “scientific assessment of solar and other rapid climate interventions in the context of near-term climate risks and hazards”. In a Submission of Evidence, existential risk researchers Gideon Futerman, Goodwin Gibbons and S. J. Beard state: “This research programme into climate intervention must include considerations of low probability / high impact scenarios as well as the most probable. Whilst easily overlooked, they are essential for a more complete risk response. There are complex interactions, both positive and negative, between solar climate intervention (SCI) and the potentially small but nonetheless significant risk of ruinous damage to human wellbeing. It will be essential to assess both how each SCI technology increases this risk and could reduce it, and how further research could be done to enhance the latter without adding to the former. These must be some of the goals of the proposed scientific assessment.”
GCR in the media
“My list of Top 10 Existential Worries (a list I just typed up at the urging of an editor, and which I present simply as a discussion tool): 10. Solar storm or gamma-ray burst. 9. Supervolcano eruption. 8. Asteroid impact. 7. Naturally emergent, or maliciously engineered, pandemic plant pathogen affecting staple crops. 6. Naturally emergent, or maliciously engineered, pandemic human pathogen. 5. Orwellian dystopia. Totalitarianism. Endless war paraded as peace. The human spirit crushed. Not a world you’d want to live in. 4. Cascading technological failures due to cyberattack, reckless development of artificial intelligence and/or some other example of complex systems failing in complex ways. 3. Nuclear war (may jump soon to No. 1). 2. Environmental catastrophe from climate change and other desecrations of the natural world. 1. Threat X. The unknown unknown. Something dreadful but not even imagined. The creature that lives under the bed.” Asteroids! Solar Storms! Nukes! Climate Calamity! Killer Robots! A guide to contemporary doomsday scenarios — from the threats you know about to the ones you never think of
“Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.” A simple point about existential risk
“The destruction of Winston-Salem was the story line of the fourth Planetary Defense Tabletop Exercise, run by NASA’s Planetary Defense Coordination Office. The exercise was a simulation where academics, scientists and government officials gathered to practice how the United States would respond to a real planet-threatening asteroid. Held February 23–24, participants were both virtual and in-person, hailing from Washington D.C., the Johns Hopkins Applied Physics Lab (APL) campus in Laurel, Md., Raleigh and Winston-Salem, N.C. The exercise included more than 200 participants from 16 different federal, state and local organizations. On August 5, the final report came out, and the message was stark: humanity is not yet ready to meet this threat.” NASA Asteroid Threat Practice Drill Shows We’re Not Ready
“In this conversation, we explore why it is now imperative to figure out a whole new world system given the catastrophic risk landscape that we confront. Daniel argues that in the face of exponential curves proliferating across systems – human, technological and geophysical – we need to develop a novel set of solutions for how we coordinate at scale. The task ahead of us is nothing less than to foster a global social, technological and educational zeitgeist, one which can prevent existential risk in a way commensurate to our deepest values for participatory and empowered governance.” UCL Global Governance Institute, Podcast 12: Daniel Schmachtenberger – Existential Risk and Phase Shifting to a New World System
“There’s a dedicated chapter which we call, Existential Risks as the Ultimate Disruption…going through very much the topics which organizations which focus on existential risks, so that includes the Center for Existential Risk in the University of Cambridge, the Center for Humane Technology, Future of Life Institute and many other organizations, that think about existential risk that go from anything that is threatening humanity itself or the ability for humans to be sustainable without major deterioration. Because worse than the end of humanity is probably that we’re so degraded and damaged that life is absolutely atrocious.” FuturePod Episode 146: Roger Spitz - Disruptive Futures
Latest policy-relevant research
Governing the security implications of AI
Because of AI’s widespread availability, an absolute ban on all military applications of AI is likely to be infeasible, but prohibiting or regulating specific use cases might be possible, according to a new report by Paul Scharre and Megan Lamberth. Potential advancements in AI could have profound implications for how countries research and develop weapons systems, and how militaries deploy those systems on the battlefield. The idea of AI-enabled military systems has motivated some activists to call for restrictions or bans on some weapon systems, while others have argued that AI may be too diffuse to control. Arms control is possible under the right conditions, and small steps today could help lay the groundwork for future successes. (12 October 2022)
Policy comment: An effort to develop broad governance around the use of AI in a military context would probably fail. AI is a particularly vexing governance problem: it is dual-use; its use in warfare is not yet clear; and verification is hard. AI governance needs to start in a discrete and manageable way, focusing on the military application that is most near-term. This ‘minimum viable product’ would help build processes and trust between states, which could then be extended to other aspects of the military use of AI. Scharre and Lamberth suggest that states could adopt intrusive inspections, restrict the physical characteristics of AI-enabled systems, regulate the observable behavior of AI systems, and restrict compute infrastructure. The use of AI in certain military or political contexts - such as cyber operations, autonomous weapons or disinformation - might be an area where major powers align and could first strike an agreement.
Evaluating the range of plausible AI futures is critical so that leaders can plan accordingly, according to a working paper by National Intelligence University researchers. This is because the field of AI continues to be driven by a virtuous circle of feedback loops, complementary technologies, financial investments, talent, and self-improving AI systems. At the same time, maintaining visibility of changing AI conditions, their interactions and their directionality could help analysts keep track of the overarching trends in the technology. While forecasting specific trajectories is untenable, understanding the broad outlines and potential sharp left turns could help ensure societal and institutional resilience. As we move toward this uncertain future, the risk of failures, goal misspecification, misalignment, or malicious use by a state or non-state group is unnervingly high but variable across different technological paths. (10 November 2022)
Policy comment: National security establishments must consider, and plan for, the threat posed by AI and its integration with other capabilities. AI, along with other forms of advanced technologies, could be used maliciously by nefarious actors, such as terrorists, rogue states and organised criminal actors. Existing security threats are exacerbated if these actors seek to use AI to increase the scale, efficiency and speed of their attacks. AI empowers relatively unsophisticated individuals and non-state actors to conduct cyberattacks on digital systems and increase the lethality of conventional weapons systems. And integrating AI into defense capabilities and nuclear weapons systems could greatly disrupt nuclear stability arrangements. National security and intelligence agencies should use scenario analysis to map out different pathways of AI risk and find potential responses that would be applicable across multiple pathways.
Building resilience to natural GCR
More can, and must, be done to build humanity’s resilience to natural global catastrophic risks, according to a forthcoming book chapter by Lara Mani, Doug Erwin and Lindley Johnson. Natural catastrophic risks, encompassing hazards such as near-Earth object (NEO) impacts and large-magnitude eruptions, have threatened the continued flourishing of humanity since its origins, with the geological record providing a unique window into their impacts. A survivorship bias continues to fuel narratives that the threats posed by such risks are negligible. Existential and global catastrophic risk scholarship fails to acknowledge the evolving risk landscape that humanity has created. With increased global populations and an over-reliance on the systems and infrastructures that sustain our societies, we have cultivated a new mechanism by which natural hazards can cascade into globally felt catastrophes. (17 October 2022)
Policy comment: Natural hazards pose a greater risk than previously assessed, not because of the scale of the hazard but because of the systemic nature of our vulnerabilities. Modern society’s increasing reliance on technology-based systems and infrastructure means that even a less-than-extreme volcanic eruption, near-Earth object or coronal mass ejection could constitute a global catastrophic risk. The authors suggest civil protection and resilience strategies in response. But understanding where those fragilities lie is a critical first step: on what systems - societal, technological, economic, political - do we rely; where are they most vulnerable; how would a small shock ripple through the system? Government agencies responsible for critical infrastructure must lead on this exercise, and test their vulnerability assessments against the likelihood of extreme and catastrophic risks.
Messaging extreme climate risk
Research into extreme climate change scenarios might lead to climate change being portrayed as unsolvable and inevitable, raising the potential for fear and hopelessness, and may even trigger inaction, according to Bhowmik et al. in their response to a Kemp et al. Perspective in August. A climate change research agenda should focus on efforts at the “glocal” scale (societies and communities of roughly 10,000 to a million people), where adaptation and mitigation can be effectively deployed and benefits maximized. In response, Kemp et al. note that there is no strong evidence that discussing extreme risks will cause fatalism, and that both hopeful and fearful messaging have mixed results. In other domains, such as finance and medicine, society expects a full diagnostic to address risk. The proposed research agenda can be considered a full planetary diagnostic. (2 November 2022)
Policy comment: Policy-makers can, and should, investigate worst-case scenarios without necessarily broadcasting the findings widely. Policy efforts to prevent climate change might not differ dramatically between baseline scenarios and more extreme scenarios - very similar policies might be required regardless. However, study of worst-case scenarios might change approaches to preparedness and resilience. Should extreme climate change scenarios present a greater risk than expected, policy-makers could take measures that improve a country’s ability to withstand greater shocks. Kemp et al. state that the two proposed research agendas are “neither alternatives nor in tension”. In fact, they are mutually reinforcing, highlighting the different policy responses that might be needed across different climate change pathways.