GCR Policy Newsletter (17 October 2022)
Malevolent AI risk, AI-nuclear nexus, natural risks and existential security
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It covers policy efforts around the world to reduce the risk, as well as policy-relevant research from the field of existential and global catastrophic risk studies.
GCR in the media
“Policy makers, philanthropists, and scientists can look to Cold War-era risk reduction for next steps on global catastrophic risks. On AI governance, the Cold Warrior’s favorite tools of confidence-building measures can build trust and reduce the risk of misunderstandings. On autonomous weapons, policy makers can emulate the success of the 1972 Incidents at Sea Agreement, and consider mechanisms like an International Autonomous Incidents Agreement to create a process for resolving disputes related to AI-enabled systems. On nuclear security, policy makers can update Cold War risk-reduction measures, including projects to increase the resilience of crisis-communications hotlines.” Sixty years after the Cuban Missile Crisis, how to face a new era of global catastrophic risks
“While he regards asteroid deflection technology as prudent, Rees worries more about the misuse of biotechnology (particularly experiments that create toxic viruses), artificial intelligence, pandemics and, of late, nuclear aggression.” Deflecting asteroids is only one thing on humanity’s worry list
“In a recent study, a collaborative team of researchers discuss the potential for future lunar settlers to establish a backup data storage system of human activity in the event of a global catastrophe on Earth that could be used to recover human civilization on a post-catastrophe planet.” The Moon is the Perfect Spot for Humanity’s Offsite Backup

Latest policy-relevant research
Limiting AI risk
Artificial intelligence, in and of itself, does not represent a credible existential threat to humanity, according to Milan M. Ćirković, a research associate of the Future of Humanity Institute. The advent of general AI is likely to bring about currently inconceivable problems. However, all these problems stem from the instrumental use of AI and related technologies - that is, the misuse or abuse by malevolent actors. Although AI can magnify the conduct of evildoers by a large factor, the root cause is the existence of evildoers among humans. (30 September 2022)
Policy comment: Based on this view, AI policy should focus on reducing the ability of malevolent actors to develop or access advanced AI systems. In contrast, Luke Kemp has argued that malevolent actors, such as terrorists, are not the source of the risk. Instead, a small group of powerful actors, such as military-industrial complexes and Big Tech, are developing the capabilities and technologies that cause global catastrophic risk. In either case, policy-makers should consider the actors, not just the systems. Although building AI systems that are safe or aligned might be important, reducing AI risk also requires changing the incentive structure for actors that are increasing risk or seeking to cause harm.
Integrating modern AI machine-learning programs with nuclear command, control, and communications (NC3) systems could mitigate human error or bias in nuclear attack decision-making, according to an Arms Control Today paper by a research fellow of The Cambridge Existential Risks Initiative. In doing so, AI could help prevent egregious errors from being made in crisis scenarios when nuclear risk is greatest. But AI comes with its own set of unique limitations. If unaddressed, they could raise the chance of nuclear use in the most dangerous manner possible: subtly and without warning. (September 2022)
Policy comment: Despite the risks, the integration of AI within the NC3 structures of the US and other countries will almost certainly continue. Indeed, decision-makers and practitioners of nuclear policy and doctrine might not be fully aware of the risks. And, even with full awareness, the perceived need to lead on military capability development could lead to rushed integration without the required technical safeguards. Poor or incomplete technical solutions put greater onus on nuclear policy and doctrine to lower risk. Nuclear weapons states would need to upgrade their decision-making processes for launching a nuclear weapon, such as by moving away from launch-on-warning strategies or by ‘de-alerting’ silo-based intercontinental ballistic missiles. These countries must also begin dialogue about the complexities of verification and deterrence under AI-enabled NC3 structures.
Reducing vulnerabilities to natural risks
The risk of global catastrophe from natural sources may be significantly larger than previous analyses have found, according to a new paper by Seth Baum, Executive Director of the Global Catastrophic Risk Institute. Almost all natural GCR scenarios - such as natural climate change, natural pandemics, near-Earth objects, space weather, stellar explosions, and volcanic eruptions - involve important interactions between the natural hazard and human civilization. Several natural GCR scenarios may have high ongoing probability. Deep human history provides little information about the resilience of modern global civilization to natural global catastrophes. A case can even be made for abandoning the distinction between natural and anthropogenic GCR. (12 October 2022)
Policy comment: Policy-makers should understand and analyse global catastrophic risk holistically rather than through a lens narrowed by specific hazards or categories of hazard. A hazard-specific view might drive misguided prioritization across the set of risks when there remains a high degree of uncertainty in probability and impact assessments. A holistic view requires a strong assessment of the vulnerabilities that allow for global catastrophe. In this view, greater priority should be given to increasing the resilience of civilization to a set of global catastrophe scenarios. These measures could include, as Baum suggests, hardening infrastructure, increasing local self-sufficiency and making contingency plans.
An assessment of the conditions under which civilizational collapse may occur due to climate change would improve the ability of the public and policymakers to address the threat it poses, according to a PNAS opinion piece. Climate science literature (such as the assessment reports of the Intergovernmental Panel on Climate Change) has little to say about whether or under which conditions climate change might threaten civilization. Three civilizational collapse scenarios - local collapse, broken world and global collapse - are possible. There is no solid basis at present for dismissing the broken world and global collapse scenarios as too unlikely to merit serious consideration. (6 October 2022)
Policy comment: The risk to civilization from climate change is broader than just the direct climate impacts. Rather, the risks probably arise from the interaction with other vulnerabilities, such as social, economic and political factors. Policymakers and researchers should promote more rigorous scientific investigation of the mechanisms and factors of civilizational collapse involving climate change. Futures and horizon-scanning functions of government could better map how climate change could lead to domestic collapse and share that knowledge with other countries and multilateral organizations. This understanding would also inform policy measures to reduce the vulnerabilities that exacerbate climate change risk, such as weak social cohesion, disinformation and misinformation, governance gaps and fragile supply chains.
Attaining existential security
We need to reach existential security, according to Toby Ord in an essay in the UN’s 2021/2022 Human Development Report. Existential security is inherently international: the risks that could destroy us transcend national boundaries, and finding ways forward that never once succumb to an existential catastrophe will require international coordination. Meeting this challenge would be an extremely difficult but necessary task. An institution aimed at existential security would need to be at the forefront of forecasting expertise, command extremely high trust, have extremely strong coordinating ability and enjoy a great deal of buy-in. (9 September 2022)
Policy comment: As Ord recognizes, the strong buy-in required to get an institution that governs existential risks off the ground is far from a reality. And increasing strategic competition between major powers makes the prospect of such an institution even more distant. National-level policies on global catastrophic risk will be important first steps to test policy ideas and pre-position a multilateral effort. And it is not clear if a new international organization dedicated to existential security is needed or would achieve its intended outcome. A more targeted effort that expands on existing international cooperation - for example, around disaster risk management or near-Earth objects - could allow for an iterative and adaptive approach to dealing with global catastrophic risk.
Policy database updates
The GCR Policy team continues to collect policy ideas put forward by the field of existential and global catastrophic risk studies. The database currently contains over 800 ideas, of which over 300 are listed publicly here. We will continue to update the database with the latest research and policy ideas. Contact us for more information or access to the full database.
This newsletter is prepared by the team of GCRpolicy.com. Subscribe for twice-monthly updates on the latest policy news and research on global catastrophic risk.