GCR Policy Newsletter (13 June 2023)
Slowing down AI, integrating risk analysis and response, and scenario development
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk (GCR). It looks at policy efforts around the world to reduce the risk and identifies insights from the research for policy-makers and advocates.
GCR in the media
“Making politicians aware of a societal problem or risk — even an existential one — doesn’t mean they’ll actually do anything about it. For example, we’ve known about the harms posed by climate change for decades but have only recently begun to take meaningful steps to mitigate it. The existential risk posed by nuclear weapons isn’t likely to be solved by the government — they’re the ones with the nukes in the first place. Same with the risk of bio-engineered pandemics. The problem isn’t that our elected leaders don’t know these risks exist. It’s that until they’re properly incentivized, they won’t actually deal with it.” A campaign plan to put AI regulation in the political zeitgeist (The Hill)
“Again, consider the public as well as the political perception. If some well-known and very smart players in a given area think the world might end but make no recommendations about what to do about it, might you decide just to ignore them altogether? (“Get back to me when you’ve figured it out!”) What if a group of scientists announced that a large asteroid was headed toward Earth. I suspect they would have some very specific recommendations, on such issues as how to deflect the asteroid and prepare defenses.” What the Breathless AI ‘Extinction’ Warning Gets Wrong (Bloomberg)
“Artificial intelligence (AI) could create a “dystopia” a government minister has warned after Rishi Sunak’s adviser on the technology said it could become powerful enough to “kill many humans” in only two years’ time. Matt Clifford said even short-term risks were “pretty scary”, with AI having the potential to create cyber and biological weapons that could inflict many deaths.” AI could ‘kill many humans’ within two years, warns Sunak adviser (Independent)
“One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful A.I. systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating. ‘Today’s systems are not anywhere close to posing an existential risk,’ said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. ‘But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.’” How Could A.I. Destroy Humanity? (The New York Times)
Latest policy-relevant research
Slowing down AI
AI experts who signed the Future of Life Institute’s (FLI) Open Letter calling for a six-month pause on the training of AI systems more powerful than GPT-4 held divergent views about the risks of AI and about viable solutions, according to a paper by two MIT students, Isabella Struckman and Sofie Kupiec. Although a few aligned with the letter’s existential risk focus, many were far more preoccupied with present-day problems. Whatever their perspective on the details, each expert shared a wish for the field they work in and love to do something unheard of in computer science culture: slow down, set the technology aside for a moment, and consider its context. (1 June 2023)
Policy comment: Signed letters and statements can bring significant attention to an issue. But, without a clear plan, especially one with specific policy proposals, policy-makers are less likely to take the concern seriously or adopt effective policy measures. The window for policy change created by these efforts might instead provide an opportunity for the policy narrative and direction to be set by more highly organized and well-funded advocacy efforts. With AI governance and regulation remaining quite nascent, those concerned with all manner of AI harms and risks would benefit from more consensus on common solutions. For example, liability schemes for AI harms or improvements to interpretability would help across the threat spectrum. The global summit on AI, to be hosted by the UK later this year, should be a forcing function for the AI ethics and safety communities to formulate specific policy proposals, form consensus and present a united front.
Viewing the integrated set of global catastrophic risks
Transformative AGI by 2043 is 0.4 per cent likely, according to a contest submission by Ari Allyn-Feuer and Ted Sanders. The estimate was based on a set of software, hardware and sociopolitical factors. Transformative AGI could be derailed by geopolitical competition, pandemics or severe economic depressions. (5 June 2023)
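To see how a headline figure this small can arise: estimates of this kind are typically built by multiplying the conditional probabilities of a chain of prerequisites, so even moderately favourable odds on each step compound into a small joint probability. A minimal LaTeX sketch of that arithmetic, using six hypothetical factors each given a placeholder probability of 0.4 (illustrative only, not the authors’ actual factor count or inputs):

% Illustrative arithmetic only: the six factors and the 0.4 values are
% hypothetical placeholders, not the probabilities used by Allyn-Feuer and Sanders.
\[
P(\text{transformative AGI by 2043})
  = \prod_{i=1}^{6} P\bigl(\text{factor}_i \mid \text{factor}_1, \ldots, \text{factor}_{i-1}\bigr)
  \approx 0.4^{6} \approx 0.004 = 0.4\%
\]

One consequence of this structure is that the bottom-line figure is highly sensitive to each assumed factor, which is part of why the policy comment below cautions against leaning too heavily on long-range point estimates.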
Based on recent historical trends, global temperature increases would pass 3.2°C±1.1°C above pre-industrial levels by 2050 and 14°C±7°C by the end of this century, according to a non-peer-reviewed preprint paper. This is far beyond the IPCC’s worst-case scenario. (23 May 2023)
Policy comment: Assessing and reducing global catastrophic risk requires an integrated approach. The trajectory of any specific global catastrophic threat depends on its interdependence with global catastrophic risk as a whole. Nuclear war, climate change, pandemics and artificial intelligence, for example, will each be shaped by how the others unfold. From a research perspective, a more integrated approach calls into question attempts at estimating the trajectory of long-term and uncertain risks, which inevitably rely on highly variable assumptions about near-term factors. It also requires bringing in expertise from other domains, such as international relations and economics. From a policy perspective, an integrated approach is required to develop solutions that reduce risk as a whole. Advocates may wish to prioritise solutions that reduce the collective set of global catastrophic risks, and should ensure that solutions for specific threats do not unintentionally increase overall risk.
Developing policy across risk scenarios
The range of potential AI risks is broad, from social instability and value erosion to unexpected accidents, cascading failures, and collapse, according to a Futures paper by National Intelligence University researchers. Many of the more extreme risks are downplayed as unconvincing, highly improbable, or impossible by some researchers, but as systems continue to scale to new milestones, more concerted attention is warranted. While some of the AI scenarios are highly speculative, they are grounded in current research and within the range of possible futures. Indeed, if there is a nonzero chance of an extreme AI scenario - unintentionally escalating conflict, shifting the balance of power, or compromising control - leaders must challenge assumptions, incorporate uncertainty, and test the boundaries of what is possible. (18 May 2023)
Policy comment: Scenario development - like that conducted in this paper - could be a powerful policy tool for analysing global catastrophic risk. Analytical techniques such as the ‘cone of plausibility’ or ‘backcasting’ allow policy analysts to develop clearer pathways from the current state to a potential future state. These methods require consideration of the underlying drivers and assumptions leading to each scenario, as well as milestones and thresholds along the pathway - all of which could indicate the need for a policy response. Policy action should be prioritised where thresholds or drivers recur across multiple scenarios. For example, malicious use of AI by non-state groups is relevant across different speeds and scales of AI progress. A policy response to this common driver could include identifying and tracking relevant groups, reducing their access to AI capabilities (such as hardware, software, people and funding) and taking targeted action to dismantle the most concerning efforts.