GCR Policy Newsletter (22 January 2023)
Three lines of defense for AI risk, and the integration of AI into nuclear systems
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field.
GCR in the media
“‘I worry very much about how we can be sure that some bad actor, as it were, doesn’t secretly develop some very dangerous pathogen and release it,’ he said. ‘It’s unlikely that many people wish to do this. But when the release of a pathogen could lead to a global pandemic, one such person is too many.’” Martin Rees explains how science might save us (Bulletin of the Atomic Scientists)
“We cannot even think about making the world a better place through policy if we’re all… dead. An X-risk will destroy not just our economy, but might also mean the end of the human race as we know it. This sounds bad to me! What kind of policies might be helpful here? The Centre for Long Term Resilience (CLTR) has an idea which includes the implementation of a government Chief Risk Officer (CRO). A ‘three lines of defence’ model will introduce less siloed risk management with clearer accountabilities across government.” Why existential risks are really really bad (Adam Smith Institute)
Latest policy-relevant research
Managing risk within AI companies
Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks for economic, legal, and ethical reasons, but it is not always clear who is responsible for AI risk management, according to AI researcher Jonas Schuett. The Three Lines of Defense (3LoD) model, which is considered best practice in many industries, might offer a solution. It is a risk management framework that helps organizations to assign and coordinate risk management roles and responsibilities. (16 December 2022)
Policy comment: The Three Lines of Defense model might give AI companies a better risk management structure than they currently employ, and voluntary adoption should be encouraged. But any government requirement or guidance for AI companies to implement such a model must account for the challenges the model has faced in other industries. In the banking industry, for example, it has not prevented large financial losses and near-bankruptcies. This structural approach to risk management does not necessarily resolve fundamental problems, such as unclear responsibilities or a poor risk culture. In the absence, or in anticipation, of a strong regulatory approach, governments might wish to provide AI companies with a range of tools and resources for improving their risk management processes.
Integrating AI into nuclear systems
The increasing autonomy of nuclear command and control systems stemming from their integration with artificial intelligence (AI) stands to have a strategic impact that could either strengthen nuclear stability or escalate the risk of nuclear use, according to a Summer Fellow of the Cambridge Existential Risks Initiative. Inherent technical flaws in current and near-future machine learning (ML) systems, combined with an evolving human-machine psychological relationship, increase nuclear risk by enabling poor judgment and could lead to the inadvertent or erroneous use of nuclear weapons. This problem has no single solution; rather, it represents a shift in the paradigm behind nuclear decision making, demanding changes in our reasoning, behavior, and systems so that we can reap the benefits of automation and machine learning without advancing nuclear instability. (9 November 2022)
Policy comment: The integration of AI and machine learning into defense systems, particularly nuclear command and control systems, might require a fundamental reimagining of nuclear stability and deterrence arrangements. Such arrangements are already at risk of faltering where they exist, and in other cases, such as between the US and China, they barely exist at all. Poor or rushed integration of AI into nuclear systems could exacerbate nuclear tensions. As the author suggests, nuclear-armed states should establish international norms around appropriate AI integration and reduce uncertainty between nations about acceptable military uses of AI. Fortunately, there remains time, probably more than a decade, before we reach a tipping point. Governments, both nuclear and non-nuclear armed, must immediately begin investigating how AI will shape nuclear stability and deterrence and taking confidence-building measures.