GCR Policy Newsletter (24 July 2023)
UN Security Council discusses AI, Netflix and the Future of Life Institute release films on military applications of AI, and US legislators debate bioengineering
This twice-monthly newsletter highlights the latest policy, research and news on global catastrophic risk (GCR).
Managing the implications of artificial intelligence on peace and security
On 18 July, the United Nations Security Council (UNSC) held its first discussion on artificial intelligence (AI) and the risks and opportunities it poses for global peace and security. The UK’s Foreign Secretary, who chaired the meeting, noted that “AI could enhance or disrupt global strategic stability” and that “pioneering AI companies will…need to work with us so we can capture the gains and minimise the risks to humanity.” The UN Secretary-General also attended and noted that “the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming.”
A number of UNSC member states raised concerns about catastrophic AI risk, particularly the integration of AI with weapons of mass destruction. The misuse and abuse of AI, such as for mis- and disinformation, and by non-state actors, especially terrorists, was a particularly salient theme across statements. Most states mentioned, and many supported, some form of international effort to manage the impacts of AI. But significant challenges remain in establishing global governance arrangements.
AI governance must deal with rapid technological development, uncertainty about future impacts, a range of potential harms and risks, and the unequal distribution of benefits. At this early stage, countries have only nascent views on what should be governed nationally, let alone at the international level. AI is also having global implications just as the multilateral system has begun to hollow out from the inside and fray at the edges. Geopolitical competition, particularly between the US and China, exacerbates these challenges and will be especially pertinent to leadership over frontier technologies. Policy-making and advocacy efforts towards global AI governance will need to grapple with these realities.
Recent relevant research:
“International institutions may have an important role to play in ensuring advanced AI systems benefit humanity,” according to AI researchers from DeepMind, OpenAI and the universities of Oxford, Harvard, Stanford, Columbia and Toronto.
“The most dangerous type of AI - the foundation models - are the easiest to regulate” because “creating them requires huge agglomerations of microchips,” which are obtainable only by governments and major AI labs, according to Avital Balwit.
The UK-hosted global summit on AI safety “could produce a range of valuable outcomes,” and “may also be a critical and fleeting opportunity to bring China into global AI governance,” according to a small expert workshop convened by the Centre for the Governance of AI.
Restricting AI for military use
The Netflix documentary, Unknown: Killer Robots, looks at the military applications of artificial intelligence. It highlights the challenges and risks arising from the weaponisation of autonomous systems and the global race to develop these capabilities. The Future of Life Institute’s (FLI) new fictional short film shows how integrating AI into weapons systems, and ceding too much control to it, could lead to accidental conflict escalation and a global nuclear catastrophe. In a follow-up article, FLI suggests setting hard limits, strengthening trust and transparency, and ensuring human control.
The integration of AI into defense systems could fundamentally change the nature of warfare. Military use of AI could disrupt nuclear stability and deterrence arrangements. Although AI remains outside nuclear command and control systems, its integration into intelligence, cyber and autonomous systems capabilities could still exacerbate nuclear risk. And across nuclear weapons states, there is no clear intention to keep AI out of nuclear command and control. International nuclear weapons arrangements are already at risk of faltering where they exist, and in other cases, such as between the US and China, they barely exist at all. Poor or rushed integration of AI with military capabilities could therefore exacerbate nuclear tensions.
This year, the US Congress has considered the use of AI in nuclear command and control. However, a bipartisan amendment in the House of Representatives did not receive a vote before the House passed its version of the National Defense Authorization Act (NDAA). At the time of writing, a similar amendment has not been taken up in the Senate’s debate on the NDAA either. These amendments are based on the bipartisan, bicameral Block Nuclear Launch by Autonomous AI Act.
Reducing the pandemic risk from engineered pathogens
The US’s Pandemic and All-Hazards Preparedness Act (PAHPA) is being considered for reauthorization in Congress. The draft bill as it currently stands requires a study assessing the impact of AI on health security, including from chemical, biological, radiological and nuclear threats. Senator Markey has also introduced a bill that would require gene synthesis providers to adopt screening protocols, with the aim of protecting the public from dangerous synthetic DNA. A Senate committee hearing on 20 July highlighted the risks from bioengineering and gain-of-function research, with Senator Paul describing the risk as "catastrophic and potentially civilization-changing".
Policymakers must address the risk from bioengineering, which provides the ability to synthesize or design dangerous pathogens. The technology enables malicious actors to develop bioweapons more easily than ever before by overcoming major barriers previously associated with weaponisation. And AI could lower some of these barriers. For example, Kevin Esvelt, a biosecurity expert at the Massachusetts Institute of Technology, recently tasked non-scientist students with creating a dangerous virus using chatbots. Within one hour, the chatbots had suggested four potential pandemic pathogens, along with processes to develop them. Potential policy actions include a moratorium on the funding of gain-of-function research and increased oversight of dual-use research of concern.