This twice-monthly newsletter highlights the latest research and news on global catastrophic risk (GCR). It covers policy efforts around the world to reduce the risk, as well as policy-relevant research from the field.
Policy efforts on GCR
On 3 April, the White House Office of Science and Technology Policy released the National Preparedness Strategy and Action Plan for Near-Earth Object Hazards and Planetary Defense:
“It establishes six key goals for the decade ahead:
Enhance NEO detection, tracking, and characterization capabilities.
Improve NEO modeling, prediction, and information integration.
Develop technologies for NEO reconnaissance, deflection, and disruption missions.
Increase international cooperation on NEO preparedness.
Strengthen and routinely exercise NEO impact emergency procedures and action protocols.
Improve U.S. management of planetary defense through enhanced interagency collaboration.
This strategy advances the Biden-Harris Administration’s commitments to U.S. leadership and international cooperation in space, and builds on existing efforts by federal departments and agencies. By working toward these goals, our nation will be able to more effectively detect, prepare for, and thwart NEO hazards.”
On 29 March, the UK’s Secretary of State for Science, Innovation and Technology presented to Parliament a policy paper, “A pro-innovation approach to AI regulation”:
“Many AI risks do not fall neatly into the remit of one individual regulator and they could go unaddressed if not monitored at a cross-sector level. A central, cross-economy risk function will also enable government to monitor future risks in a rigorous, coherent and balanced way. This will include ‘high impact but low probability’ risks such as existential risks posed by artificial general intelligence or AI biosecurity risks.”
GCR in the media
“A thought experiment for regulating AI in two distinct regimes is what I call The Island. In this scenario, experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised ‘off-island’… Any of these solutions are going to require an extraordinary amount of coordination between labs and nations. Pulling this off will require an unusual degree of political will, which we need to start building now.” We must slow down the race to God-like AI (Financial Times)
“A GAO report released in January revealed multiple gaps in the federal government’s policing of the riskiest kinds of experiments. The report, which identified no labs by name, said the Department of Health and Human Services is providing ‘subjective and potentially inconsistent’ oversight of U.S.-funded research. Additionally, ‘HHS does not conduct oversight’ of research funded by foundations and other nongovernment groups, even when the work involves ‘enhancement of potential pandemic pathogens.’” Research with exotic viruses risks a deadly outbreak, scientists warn (The Washington Post)
“I’d love to live in a world where how we respond to existential risk wasn’t up to chance or what happens to catch the public’s and the media’s attention, one where risks to the security of our whole world received sober scrutiny regardless of whether they happened to make the headlines. In practice, though, we seem to be lucky if world-altering dangerous research — whether on AI or biology — gets any public scrutiny at all.” Why we’re scared of AI and not scared enough of bio risks (Vox)
“But if AGI does become reality, it would likely represent a seminal moment of human history and development, with some even fearing it could represent a technological singularity, a hypothetical future moment when humans lose control of technological growth and creations gain above-human intelligence. Around 58% of the Stanford researchers surveyed called AGI an ‘important concern’. The survey found that experts’ most pressing concern is that current A.I. research is focusing too much on scaling, hitting goals, and failing to include insights from different research fields.” A.I. could lead to a ‘nuclear-level catastrophe’ according to a third of researchers, a new Stanford report finds (Fortune)
“Scientists have explained what would happen if an asteroid was on a collision course with Earth to emphasize the need for planetary defense. The hypothetical asteroid scenario illustrates how an asteroid threat might evolve over several years and the potential devastation such a strike could cause. The team led by the manager of NASA's Near Earth Object (NEO) Program Office Paul Chodas presented the exercise at the 8th Planetary Defence Conference in Vienna, Austria on Monday, April 3.” This is what would happen if scientists found an asteroid heading to Earth (Space.com)
“There’s a term used by some long-term thinkers that I believe deserves to be known more widely: existential hope. This is the opposite of existential catastrophe: It’s the idea that there could be radical turns for the better, so long as we commit to bringing them to reality. Existential hope is not about escapism, utopias or pipe dreams, but about preparing the ground: making sure that opportunities for a better world don’t pass us by. So, if taking the long view demands anything of us, it is this: a commitment to seeking and cultivating hope when all feels bleak. This may well prove to be the grandest challenge of our time, but it is what we owe to our predecessors and our descendants.” Existential hope: How we can embrace deep time and create the brightest of futures (Big Think)
Latest policy-relevant research
Regulating high-risk AI
It is plausible that ensuring that powerful AI agents do not seek power over humans in unintended ways will be difficult, that such agents will be deployed anyway to catastrophic effect, and that whatever efforts are made to contain and correct the problem will fail, according to a forthcoming essay by Joe Carlsmith. He puts the resulting existential risk disturbingly high: greater than 10 per cent by 2070. (March 2023)
Appropriately regulating artificial intelligence is an increasingly urgent policy challenge, according to a paper by OpenAI Senior Policy Advisor Gillian K. Hadfield and Anthropic Co-founder Jack Clark. Legislatures and regulators lack the specialized technical knowledge required to best translate public demands into legal requirements. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives. (13 April 2023)
General purpose artificial intelligence (GPAI) carries serious risks and must not be exempt under the forthcoming EU AI Act, according to a policy brief led by AI researchers from the AI Now Institute, Distributed AI Research Institute, Mozilla Foundation and Yale ISP. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. This is an active and hotly contested area of research and should be subject to wide consultation, including with civil society, researchers and other non-industry participants. (13 April 2023)
Policy comment: The fundamental policy challenge for regulating AI is the speed of development combined with its widespread yet uncertain applicability and impact across economies and societies. Risk-based regulatory approaches, such as those being adopted in various forms by the US, the EU, the UK and Canada, will struggle to manage this challenge and to future-proof themselves against the range of possible harms. Although a logical starting point, these approaches rely on interpretations and management of risk that are susceptible to loopholes or rapid obsolescence. Governments should also explore more agile regulatory approaches. These instruments could include regulatory markets, semi-private governance systems, insurance mechanisms and other regulatory intermediaries. Policy advocates should develop detailed proposals for legislating for and developing these regulatory intermediaries, which might require more sophisticated governance arrangements or entirely new institutions.