GCR Policy Newsletter (19 December 2022)
Global governance, AI risk skepticism and great power relations
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field. (This edition is the final one for 2022. We'll be back in January 2023.)
GCR in the media
“Covid-19 has made it easier to imagine worse pandemics, including those created in laboratories and malevolently released. But the world holds other terrors. Some seem as though they’ve troubled us for decades (nuclear war) while others belong to the future (AI).” The woman preparing the UK for extreme risks (Prospect Magazine)
“What about the more immediate risks posed by non-superintelligent AI, such as job loss, bias, privacy violations and misinformation spread? It turns out that there’s little overlap between the communities concerned primarily with such short-term risks and those who worry more about longer-term alignment risks. In fact, there’s something of an AI culture war, with one side more worried about these current risks than what they see as unrealistic techno-futurism, and the other side considering current problems less urgent than the potential catastrophic risks posed by superintelligent AI.” What Does It Mean to Align AI With Human Values? (Quanta Magazine)
“If humanity’s technological progress can be compared to climbing a mountain, then the Anthropocene finds us perched on a crumbling ledge, uncertain how long we have until it collapses. The most obvious way out is to turn back and retrace our steps to an earlier stage of civilization, with fewer people using fewer resources. This would mean acknowledging that humanity is unequal to the task of shaping the world, that we can thrive only by living within the limits set by nature.” The End Is Only the Beginning (The American Scholar)
Latest policy-relevant research
Improving global governance arrangements
Ten key risks are identified in the Global Catastrophic Risks 2022 report by the Global Challenges Foundation. The risks are organised into three main categories: current risks from human action, natural catastrophes, and emerging risks. Many of those risks are closely interconnected, and their boundaries sometimes blur, as with climate change and ecological collapse, or as in the case of synthetic biology, which could be presented as a risk of its own, an additional risk factor in biological warfare, or a potential cause for engineered pandemics. (30 November 2022)
Policy comment: The international governance of global catastrophic risks has a mixed record. While near-Earth objects have reasonably advanced governance arrangements, governance of nuclear weapons is deteriorating and that of AI is still nascent. But delineating the risk landscape only by the threats, as the report does, limits the potential to build governance for systems that are closely linked to global catastrophic risk, such as energy, food and technology. And in one case, namely 'global population size', assessment of governance arrangements gets into extremely murky territory: comments by catastrophic risk researchers about population size could alienate the public and policy-makers. A more useful product for policy-makers would be an updated cartography of global catastrophic risk governance, with insights on major gaps, challenges and roadblocks to further development.
Negating AI risk skepticism
AI risk skeptics need to realize that the burden of proof is not on AI safety researchers to show that the technology may be dangerous, but on AI developers to establish that their technology is safe at the time of deployment and throughout its lifetime of operation, according to a conference paper by Roman V. Yampolskiy. Designing a 'safe AI' is a much harder problem than designing an AI, and so will take more time. Perhaps a temporary moratorium on AGI (but not AI) research, similar to the one in place for human cloning, needs to be considered. It would boost our ability to engage in differential technological development, increasing our chances of making AGI safe. AI safety research needs elevated priority and more resources, including funding and human capital. (15 November 2022)
Policy comment: The public and policy-makers are likely to be highly skeptical of the threat from AGI, given the variety of objections, valid or otherwise, that skeptics can raise. GCR and AI policy advocates must provide sober, reasonable, credible and specific assessments of AI risk that can be easily understood and digested. Vague, abstract, technical or philosophical arguments enable skeptics to push back on the risk. So focusing on policy for near-term AI risks may not only be prudent and tractable, but also enable advocates to build credibility on AGI risk and help take policy-makers on the rather technical journey. The author's suggested AGI moratorium would be extremely unlikely to occur, and pushing for such a policy might backfire. Efforts to fund AI safety and differential technological development would be more feasible.
Pushing great powers to prioritise existential security
When the great powers have come to a shared understanding that an issue poses an existential threat to humanity and have agreed to take extraordinary measures for survival, great power consensus has led to 'macrosecuritization': states take action beyond the normal practices of international politics to reduce or eliminate the danger, according to a PhD thesis by Nathan Sears. Conversely, when one or more of the great powers contests this understanding of an issue or rejects the call for extraordinary measures, the outcome has been macrosecuritization failure. Macrosecuritization fails because of conflicting securitization narratives that lead the great powers to prioritize national security over human security. (10 December 2022)
Policy comment: Strategic competition between the US and China is the defining geopolitical feature of the 21st century, and will fundamentally shape the dynamics that exacerbate global catastrophic risk as well as the efforts to reduce it. National security and prosperity interests will complicate efforts to reduce risks from emerging technology and military capabilities. However, building a narrative around joint leadership in the face of global risk could help galvanize bilateral and multilateral efforts between the two powers, with climate change being an example. Arriving at a more stable strategic system sooner could also allow the two countries to collaborate on shared challenges, similar to US-Soviet agreements on non-proliferation. GCR policy advocates and policy-makers in middle powers should find opportunities to raise global catastrophic risk onto the agenda of the two major powers, reduce tensions between them, and cordon off global catastrophic risks from the rest of the bilateral tensions and strategic rivalry.
On global governance, I've been reading Michael Bess's 'Planet in Peril' (CUP, October 2022). He describes a number of interesting options for global governance of GCRs, projecting pathways to them across the next 100 or so years, with a range of thought-provoking vignettes. See here: https://www.cambridge.org/nz/academic/subjects/earth-and-environmental-science/environmental-policy-economics-and-law/planet-peril-humanitys-four-greatest-challenges-and-how-we-can-overcome-them?format=HB&isbn=9781009160339