GCR Policy Newsletter (26 June 2023)
Public health framing, catastrophic AI risk, nuclear winter research and AI-space governance
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk (GCR). It looks at policy efforts around the world to reduce the risk and identifies insights from the research for policymakers and advocates.
Policy efforts on GCR
“The [US] Defense Department currently is modernizing its nuclear command, control, and communications systems, including through the widespread integration of advanced AI systems. Some analysts fear that this process will dilute human control over nuclear launch decision-making. To ensure that machines never replace humans in this momentous role, a bipartisan group of legislators introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act on April 26. If enacted, the law would prohibit the use of federal funds to ‘use an autonomous weapons system that is not subject to meaningful human control…to launch a nuclear weapon; or…to select or engage targets for the purposes of launching a nuclear weapon.’” ChatGPT Sparks U.S. Debate Over Military Use of AI (Arms Control Today)
“We are pleased to see many important commitments to strengthening the UK’s capabilities for preventing, detecting and responding to biological threats in the Biological Security Strategy (BSS), published on 12 June 2023. We particularly welcome commitments to formalise the Government’s biosecurity leadership, governance and accountability structures, to invest in the UK’s real-time biosurveillance and detection capabilities, and to lead internationally in establishing standards of best practice for responsible innovation. We also commend the Government on allocating £1.5 billion per year to support this work, but urge the Government to continue to sustain a level of investment commensurate with the urgency and importance of implementing the BSS’ priority outcomes.” Response to the UK Government’s refreshed Biological Security Strategy (BSS) (The Centre for Long-term Resilience)
GCR in the media
“In the face of increasing and interconnected threats, a new approach is urgently needed. The existential risk posed by nuclear weapons should worry everyone, but only world leaders are in a position to mitigate it. The ultimate goal will always be total nuclear disarmament, but the first priority is to address the most serious immediate hazards. Practical and achievable paths to de-escalation exist, such as the “Four D’s” minimization agenda that The Elders have been advocating since 2019. Now, facing the prospect of a new arms race, it is up to responsible leaders, civil society, and a mobilized public to put sufficient pressure on the leaders of nuclear-armed states and pull them back from the brink.” Pulling Nuclear Powers Back from the Brink (Project Syndicate)
“Rather than developing a superweapon themselves, a modern terrorist group could carry out a form of sabotage, a spoiler attack, to cause a cataclysm…‘To combat existential terrorism, governments should focus on incorporating terrorism-related risks into broader existential risk mitigation efforts,’ says Kallenborn. ‘For example, when thinking about artificial super intelligence risks, governments should think about how terrorists might throw a wrench in their plans or simply ignore safeguards.’” New Report Warns Terrorists Could Cause Human Extinction With ‘Spoiler Attacks’ (Forbes)
Latest policy-relevant research
Framing GCR as a public health challenge
The World Health Organization (WHO) should play a leading role on public health policy relating to nuclear winter, according to a new academic paper. The WHO is uniquely positioned to address a global health issue of this magnitude and can coordinate the work with National Public Health Institutes and other relevant agencies within the UN system. Specifically, the WHO should reintroduce the topic of nuclear weapons to its ongoing work and explicitly include nuclear winter. This work should include a new guiding report to set the direction for other public health work on nuclear winter, as well as regular updates to account for new developments in nuclear winter science and policy. (15 June 2023)
The medical community is well placed to ensure that the potential uses of AI are tempered by an understanding of their public health risks, according to an opinion piece in the medical journal The BMJ. Any matter that could harm the health of millions of people - and that experts fear could be catastrophic - is by definition a public health risk. Although it is difficult to specify the nature of a theoretical future threat, AI already has the power to cause great harm, even in the hands of benevolent agents. A powerful algorithm misaligned with users' values - or in the hands of nefarious agents - could pose even greater risks. To reduce the health risks of AI, the medical community should advocate for regulations that are endorsed by AI experts and consistent with those that typically govern science, medicine, and public health. (12 June 2023)
Policy comment: Global catastrophic risk, such as from nuclear war or AI, could be framed for policymakers through a public health lens. Risks that could kill or injure millions, even billions, of people are fundamentally health challenges, and public health officials and healthcare systems will need to prepare for, and respond to, these threats. In some cases, health professionals have been influential voices on global catastrophic risk. For example, the International Physicians for the Prevention of Nuclear War and the International Campaign to Abolish Nuclear Weapons were physician-led advocacy efforts. Pushing nuclear-armed states to reduce the risk of nuclear war is challenging because the defence and security frame dominates, so other frames - public health, the environment, human rights - might be worth exploring. However, there is a risk in overplaying the public health card: health officials could become increasingly politicised, and their input on future risks, or on general public health issues, could face growing suspicion. The COVID-19 pandemic revealed, or exacerbated, these suspicions due to poor advice (or perceptions of poor advice). Health experts should partner closely with GCR experts, and vice versa, when considering advice and advocacy on these issues.
Clarifying catastrophic AI risk
According to a new report by the Center for AI Safety, the development of advanced AIs could lead to catastrophe from four primary sources: malicious use, AI races, organizational risks, and rogue AIs. The potential for malicious use can be mitigated by various measures, such as carefully targeted surveillance and limiting access to the most dangerous AIs. Safety regulations and cooperation between nations and corporations could help us resist the competitive pressures driving us down a dangerous path. The probability of accidents can be reduced by a rigorous safety culture, among other factors, and by ensuring that safety advances outpace advances in general capabilities. Finally, the risks inherent in building technology that surpasses our own intelligence can be addressed by redoubling efforts in several branches of AI control research. (21 June 2023)
Policy comment: Articulating and dissecting the catastrophic risks from AI has important implications for policy advocates and policymakers. Each source of AI risk is a distinct technical and policy challenge. Efforts to create catch-all AI governance and safety policies might fail to adequately cover particular risks and, in the worst case, exacerbate some risk pathways. Structuring AI risk in this way also helps avoid the communication challenge faced when calling AI an existential risk: these catastrophic threats are more likely, more tangible and more immediate. Researchers and advocates should be ready to frame AI risk through a catastrophic risk lens, focusing on where the risks can be most clearly recognized and understood, such as malicious use and the exacerbation of threats from cyber-warfare and autonomous weapons. Policy options for these risks require further detail and specificity.
Funding nuclear winter research
There are still many scientific questions to address on the effects of nuclear war, including the amounts of fuel in target areas, the spread of urban fires, the altitudes of soot injection from mass fires, the impacts on the biota of ozone depletion and increased surface ultraviolet radiation, the spread of radioactive material in the atmosphere and oceans, and the impacts on agriculture and famine, according to an opinion piece by leading nuclear winter researchers. These researchers have not been able to obtain funding for this work from the US Department of Energy, Department of Defense or Department of Homeland Security. The US National Science Foundation and NASA, the conventional funding agencies for such research, were also not interested in considering proposals. (19 June 2023)
Policy comment: The researchers do not state why the US government has not funded their work, especially given that the National Academies are currently conducting a study on the potential environmental effects and socio-economic consequences of a nuclear war, including the risk of nuclear winter. The researchers' support for the Treaty on the Prohibition of Nuclear Weapons (TPNW) might have made US government agencies less willing to engage. Researchers should seek support for nuclear winter research from other countries - such as middle powers that support the TPNW - and engage in existing research processes in nuclear-armed states, such as the UK and US. Nuclear winter research is critical for providing an evidence base that nuclear disarmament advocates, both inside and outside government, can use to continue pushing for action on nuclear risk.
Governing AI safety in space
Space and AI are both rife with unknowns, and their convergence poses serious risks if their development is not aligned with ethical AI principles, according to a new paper by researchers from the Centre for the Study of Existential Risk and the Center for Space Governance. Proactive national, bilateral and international cooperation is needed to develop, ratify and enforce technical and governance mechanisms. These mechanisms include rigorous design, verification and validation standards; agile oversight of explainable and transparent technologies; legal, regulatory and policy structures to ensure accountability, safety and security; and frameworks to prioritize financing for applications of AI in space in accordance with moral principles and to maximize societal benefits. (15 June 2023)
Policy comment: Building international governance and policy around the use of AI in space will be extremely difficult. International governance of both space and AI is severely lacking, AI capabilities in space-related infrastructure remain nascent, and geopolitical challenges limit opportunities for collaboration. However, this early phase in AI governance provides an opportunity to shape both AI and space policy in a positive direction. Countries with major space programs could promote the safe integration of AI into space, both domestically and internationally. The UN's Committee on the Peaceful Uses of Outer Space should consider the risks and challenges of AI in its 2024 agenda (as the 2023 session has just concluded), particularly in the lead-up to the Summit of the Future. But this will require championing by at least one member state, and therefore engagement from AI and space policy advocates in those states.