GCR Policy Newsletter (6 February 2023)
Understanding risk, AI regulation, biochemical threats and definitions of risk
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field.
GCR in the media
“We asked our network members what the world can do to set back the Doomsday Clock, and for their thoughts on how humanity can best avoid destroying both itself and the planet.” Network reflections: What one thing could the world do to turn back the Doomsday Clock? (European Leadership Network)
“These high-impact, low-probability events are poorly understood, but they’re not the only ways that climate change can have a severe effect on the planet. As existential risk researcher Luke Kemp has noted, a much warmer world is less resilient to other kinds of catastrophic risks. It’s harder to imagine humanity bouncing back from a terrible pandemic or a nuclear war in a world with catastrophic levels of warming. Climate change isn’t only a doomsday risk in itself—it’s a risk multiplier that increases our vulnerability to all kinds of events.” Is It Time to Call Time on the Doomsday Clock? (Wired)
“The move to develop tests, treatments and vaccines for a range of threats marks a shift in strategy for the department. For years, DoD responded to potential chemical and biological attacks in the field by developing tools to combat a specific list of hazards. But the list continued to grow…‘The technology is accelerating at such a pace that the threat profile and the diversity of the threat and the attributes of the threat have increased and will continue to do so at a quick clip,’ Watson [DoD’s deputy assistant secretary for chemical and biological defense] said. ‘We can’t develop a countermeasure for every single one of those, every single toxin, every single biological potentiality, every single chemical potentiality.’” New worldwide threats prompt Pentagon to overhaul chem-bio defenses (Politico)
“Complex systems can range from a nuclear power plant to Earth’s ecosystem. In tightly wound and complex systems, not even experts can be entirely sure how the inner workings of the system will respond to stresses and shocks. Those who study systemic and catastrophic risks have long been aware that crises in these systems are often endogenous — i.e., they often bubble up from within the system’s inscrutable internal workings.” Are we headed toward a “polycrisis”? The buzzword of the moment, explained. (Vox)
Policies to better understand global catastrophic risk
GCRPolicy has updated its policy paper on how governments can take action to improve their understanding of global catastrophic risk.
See the full paper here.
Governments must take measures to better understand global catastrophic risk and implement structures and processes that keep decision-makers informed about it. A sufficient understanding of global catastrophic risk would enable more effective design of prevention, preparation and response measures.
A better understanding of global catastrophic risk includes: the set of threats and hazards; the vulnerabilities to those threats and hazards; pathways and scenarios for different risks; the drivers and factors that create and exacerbate risk; and the implications for society, the economy, security, the environment and other policy priorities.
Governments must take strategic policy action to improve their understanding of global catastrophic risks across four areas:
Risk assessment: identify and analyse extreme and global catastrophic risks holistically to sufficiently inform policy decisions to manage them
Futures analysis: improve practice and use of futures analysis, including horizon-scanning, forecasting and foresight activities, to alert policy-makers to emerging risks and challenges and to facilitate better long-term policy
Intelligence and warning: improve intelligence and warnings capability on extreme and global catastrophic risks to inform governments on trends, events and risks in the global landscape
Science and research: increase governments’ science and research capability on global catastrophic risk so that policy solutions are supported by cutting-edge technical expertise
The paper provides 30 concrete actions that governments can take across each of these areas.
Latest policy-relevant research
Designing effective AI regulation
The dominant mode of proposed AI regulation is risk regulation - one that depends heavily on internal risk assessment and mitigation and largely eschews a wide range of other regulatory tactics, according to a paper by Margot Kaminski, a law professor specializing in the law of new technologies. The problems with AI regulation stem from the nature of risk regulation itself and from failures to consider other tools in the risk regulation toolkit. (February 2023)
Regulators are not neutral agents who receive and filter business interests and balance them against societal concerns over ethical AI, according to a draft book chapter by Regine Paul, a public policy academic at the University of Bergen. Nor are they simply victims of regulatory capture by big business, lacking agency of their own. (November 2022; forthcoming in winter 2023/24)
Policy comment: Effective AI regulation will be extremely difficult for national governments to design and implement. It can fall into a number of traps: difficulties in assessing harms that may be contested and non-quantifiable; challenges around causality and control; an imbalance of technological expertise between the private sector and government; and the potential for regulatory capture. Efforts to reduce AI risk should consider the full spectrum of possible solutions: norms, standards, regulations, licensing, guardrails, restrictions, research and development practices, resilience measures, stress-testing, civil recourse, compensation schemes and insurance. Researchers and governments should look to other domains where risk is considered high - such as cybersecurity, aviation, chemicals and pollutants, and vaccine injury - for best practices on policy approaches.
Preventing AI from creating biochemical threats
Scientists with technical knowledge are likely best placed to assess the risk of dual-use technologies, but part of the work must be done by governments and policy-makers, according to a short paper by researchers who used AI to design new chemical molecules. The experiment demonstrated the speed and relative ease with which AI software - based on open-source tools and data sets from the public domain - could be misused to create existing and novel potential biochemical threats. Governments, policy-makers and regulatory bodies focused on international security policy, such as the Organization for the Prohibition of Chemical Weapons and the Australia Group, may decide to regulate how such AI technologies can be used and accessed, in order to prevent the design or development of new biochemical threats that circumvent current controls and access restrictions. (January 2023)
Policy comment: With scientists themselves forming the first layer of risk management, governments can use their convening power to bring the scientific community together, informing researchers of their responsibilities and fostering knowledge-sharing across domains. The paper’s researchers admit to embarking on the experiment naively, without consulting any ethical guidelines - and that it served as ‘a wake-up call’. Such lessons should not have to be learnt in the aftermath of a dangerous experiment. Government-led conferences, for example, could bring AI researchers and developers together with other fields, such as biological and chemical engineering, to discuss the dual-use aspects of AI and to encourage science-led approaches to risk management, such as ethical guidelines, training, encrypted data, controlled access to data and models, and publishing disclosures and restrictions.
Defining existential risk in legislation
Laypeople perceive the meaning of the term ‘existential risk’ as narrower and more severe than related terms, such as ‘global catastrophic risk’ and ‘extreme risk’, according to a draft paper by the Legal Priorities Institute. As a result, the terms lawmakers use in proposed legislation will shape how judges interpret it, since judges tend to read the words of a law according to their ordinary meaning to laypeople. A lawmaker intending to cover risks spanning wider bands of probability and lives endangered might therefore want to avoid drafting the law exclusively around ‘existential risk’. (December 2022)
Policy comment: As existential and global catastrophic risk begin to enter policy documentation, including legislation, policy-makers and advocates should consider which terms they use and how to define them. Whichever term is chosen must be clearly defined so that judges and officials can practically interpret the policy intention. Normative or value-laden aspects that arise in some definitions of existential risk, such as the destruction of ‘human flourishing’ or ‘long-term potential’, might open the policy up to increased politicization and undermine a more clinical approach to risk analysis and management.