GCR Policy Newsletter (15 May 2023)
Food system resilience, intelligence communities and monitoring AGI labs
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It covers policy efforts around the world to reduce these risks, along with policy-relevant research from the field.
GCR in the media
“One of the most plausible existential risks from A.I. is a literal Skynet scenario where we create increasingly automated drones or other killer robots and the control systems for these eventually go rogue or get hacked. Militaries should take precautions to make sure that human operators maintain control over drones and other military assets.” Artificial Intelligence Is Not Going to Kill Us All (Slate)
“In his interview with CBS, Kissinger acknowledged that an A.I. arms race represented a completely different ballgame from the race to develop nuclear weapons, given the vast unknowns. ‘[I]t’s going to be different. Because in the previous arms races, you could develop plausible theories about how you might prevail. It’s a totally new problem intellectually,’ he said.” Henry Kissinger says he wants to call attention to the dangers of A.I. the same way he did for nuclear weapons but warns it’s a ‘totally new problem’ (Fortune)
“Before superintelligence and its human extinction threat, AI can have many other side effects worthy of concern, ranging from bias and discrimination to privacy loss, mass surveillance, job displacement, growing inequality, cyberattacks, lethal autonomous weapon proliferation, humans getting “hacked”, human enfeeblement and loss of meaning, non-transparency, mental health problems (from harassment, social media addiction, social isolation, dehumanization of social interactions) and threats to democracy (from polarization, misinformation and power concentration)…If unaligned superintelligence causes human extinction in coming decades, all other risks will stop mattering.” The 'Don't Look Up' Thinking That Could Doom Us With AI (Time)
“Resilience against one risk may also aid readiness against another. The COVID-19 pandemic has demonstrated the world’s lack of preparedness to cope with events that disrupt global supply chains and infrastructure, with shocks in one part of the world causing ripple effects throughout. The lessons learned should inform not just how we mitigate likely risks, but also how we build resilience against worst-case scenarios or unexpected outcomes. Having an awareness of all identified risk sources can help governments to formulate long-term plans for each, without allowing a singular issue to consume disproportionate energy and resources.” Catastrophic risks are converging. It’s time for researchers to step out of their silos. (Bulletin of the Atomic Scientists)
Latest policy-relevant research
Internationalizing food system resilience
Anthropogenic emissions triggering ‘runaway global warming’ (∼8–12 °C+) could cause a rapid decline in food production by mid-century and, distributed unequally across the globe, more than 5 billion starvation deaths by 2100, according to modeling by University of Cambridge researchers. While the most vulnerable and least resilient populations would be hit the hardest, more developed countries are also exposed. Such a catastrophe could provoke sociocultural, economic and political dysfunction locally, as well as international conflicts and large-scale migration. Good risk management requires understanding a range of climate change scenarios, including worst cases. (2 May 2023)
Policy comment: Developing alternative food sources (such as mushrooms, seaweed and insects) and climate-resilient crops and seeds is a major policy opportunity for countries in the Global North. A successful effort would support wins across multiple policy areas: increasing food system resilience in the face of extreme risk; supporting local agricultural efforts; helping foster national industries around alternative foods; reducing domestic and global malnutrition; and supporting diplomatic engagement with the Global South, which is most at risk from climate change. Ultimately, a holistic reform of the food system is needed given that it contributes around 20-40 per cent of global greenhouse gas emissions. Major agricultural exporters - such as the EU, the US, Canada and Australia - should be the first to prioritize food system reform and resilience.
Utilizing intelligence communities for existential risk analysis
The United States should lead the way on existential risk mitigation, and the US intelligence community (IC) can contribute by working with US allies to analyze and understand the risks of AI, according to Mark Bailey, Chair of the Cyber Intelligence and Data Science Department at National Intelligence University. Even if AGI does not become a reality within the next decade, Bailey argues, the subversive agential behavior of contemporary weak AI systems is already deeply disconcerting, and the problem will only become more severe as the technology advances. The IC should prioritize getting ahead of this problem now, before AI becomes so advanced and ubiquitous that it can no longer be controlled. (9 May 2023)
Policy comment: Given the proper support, intelligence communities, and particularly intelligence analysts, could play a crucial role in detecting, analyzing, and understanding threats of an existential nature. Several small but sensible steps could make intelligence an important enabler of governments’ efforts on existential risk. Existential threats should be acknowledged in policy documents as explicitly within the remit of intelligence work. Specific resources should be allocated to analyzing and warning about existential threats and global catastrophes; for example, an extreme global threats warning team sitting within the central analytical agency could work across the intelligence community to identify and track these risks. Intelligence communities should regularly issue reports on issues relating to existential threats, with extreme climate change, advanced AI, engineered pandemics, and near-Earth objects as the most logical initial cases. The final ingredient is deeper collaboration and relationships inside and outside government around existential risks, including consistent, formalized communication channels with scientific organizations, technology companies and domestic policy agencies.
See also Existential espionage: How intelligence gathering can protect humanity
Monitoring AGI labs
There was broad consensus that AGI labs should implement most of the safety and governance practices on a 50-point list, according to a survey of leading experts from AGI labs, academia, and civil society. Respondents agreed especially strongly that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits and red teaming, and should place safety restrictions on model usage. (11 May 2023)
Policy comment: Governments can be confident in applying early regulation and standards to AGI labs given the consensus among AI experts, including from the labs themselves, around the safety and governance measures that are necessary. These options - 50 in this survey - are measures for AGI labs to self-manage risk. In such a model, regulators would need to develop the standards and requirements, monitor compliance and punish breaches. For most developed countries, such a model would probably require new legislative and bureaucratic structures, since few existing regulatory bodies have the responsibility and capability to administer these regulations and standards. And this mechanism alone might not be sufficient: policy-makers should also consider liability and insurance schemes and other forms of private or semi-private regulation. Researchers might wish to consider conducting a similar survey comparing different models of AI governance.