Global Shield Briefing (10 July 2024)
Catastrophic terrorism and the weaponization of artificial intelligence.
The latest policy, research and news on global catastrophic risk (GCR).
There’s no denying it: global catastrophe is a pretty dark topic. But we never aim to be fatalistic or pessimistic. We do not wish to scaremonger or sensationalize. Quite the opposite. We want to be sober in our assessments and hopeful in our solutions. Today’s briefing, admittedly, takes us into some even darker territory. As the capacity to do harm grows in ease and scale, societies must be alert to those motivated by malice. Terrorists and extremists, in particular, have few qualms about inflicting their blows. Technology is said to be a double-edged sword. But the edge wielded by the nefarious can cut quicker and deeper. So let’s sharpen our edge, harden our shields and stand vigilant.
Thwarting catastrophic terrorism

Members of an expert committee of the US National Academies of Sciences, Engineering, and Medicine unanimously concluded that US efforts to counter nuclear or radiological terrorism are not keeping pace with the evolving threat landscape. The report, mandated by the US Congress, assessed that the probability of nuclear terrorism is low, given existing security and safety efforts and the challenges of fabricating an improvised nuclear device. However, it warns that the number and types of groups that may be motivated to use these weapons are probably growing. The report offers little assessment of how artificial intelligence could empower terrorists, though it notes that “The incorporation of digital technologies within nuclear facilities is creating new potential vulnerabilities and emerging technologies like artificial intelligence and drones are providing adversaries with dangerous new capabilities.”
A separate National Academies report on chemical terrorism assesses that “the Federal Bureau of Investigation (FBI) and partner law enforcement and IC have been effective in identifying and interdicting the majority of domestic terrorist attacks involving chemical materials.” However, there remain areas for improvement, including greater consideration of insider threats. It also assesses that “While artificial intelligence and machine learning can be used to predict new chemical structures, the feasibility of converting predicted structures to weaponized chemicals is not straightforward and is thus unlikely in the near term (~5 years).”
Policy comment: Terrorism is a risk factor across multiple global catastrophic threats. Just as the number and types of terrorist groups are increasing, especially home-grown groups, so too are they being empowered by technological progress. Direct attacks using weapons of mass destruction are one pathway, but terrorists do not necessarily need to develop or access such weapons themselves. For example, a plausible future scenario is terrorists seeking to infiltrate nuclear weapon command, control and communication systems to launch weapons or trigger false alerts. A terrorist group intent on dooming humanity could also look to other innovative ways to increase risk, or to sabotage collective efforts to reduce it, such as AI-enabled cyberattacks or disinformation. GCR experts and policymakers must strengthen policies for reducing terrorist capability and activity, including terrorists’ access to catastrophic risk generators. The risk from terrorists may be no greater than that from state-based activity, but this approach also helps reduce the threat from other actors, such as states or insiders, while offering a tractable frame for advocacy.
Also see:
A 2023 study on existential terrorism by Zachary Kallenborn and Gary Ackerman, who outline three broad pathways to existential harm, and note that “terrorists could conceivably develop genetically engineered microbes, catalyse nuclear war or, in the future, utilise novel technologies like [artificial super intelligence] and nanorobotics to carry out existential attacks.”
The Biden Administration’s National Security Memorandum to Counter Weapons of Mass Destruction Terrorism and Advance Nuclear and Radioactive Material Security, released in March 2023.
Taking a national security approach to weaponized AI

A report by the Global AI Risks Initiative at the Centre for International Governance Innovation (CIGI) argues that “an effective, future-ready approach to international cooperation on AI governance is urgently required” due to the development of advanced systems and the global challenges they may pose. The authors propose a “Framework Convention on Global AI Challenges” to provide a flexible instrument to help accelerate international cooperation on the global governance of AI. The report identifies weaponization and loss of control as the two greatest threats from AI.
Germany’s Foreign Office recently held a conference on AI and weapons of mass destruction (WMD), with papers on AI’s impact on chemical weapons development, molecular design, biological weapons and nuclear risk. A paper submitted by the Policy & Research Director of the European Leadership Network outlined three key impacts of AI on WMD: simplifying and accelerating development and production; lowering proliferation hurdles; and lowering the threshold of use.
The US Department of Homeland Security released its report on risks at AI’s intersection with chemical, biological, radiological and nuclear (CBRN) threats (as directed by Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). It finds that, “As AI technologies advance, the lower barriers to entry for all actors across the sophistication spectrum may create novel risks to the homeland from malign actors’ enhanced ability to conceptualize and conduct CBRN attacks.”
Policy comment: The ability of malicious actors to weaponize AI capabilities to cause harm must be a greater priority for governments around the world. The use of AI to enable and escalate CBRN, cyber, disinformation and automated warfare threats is one of the greatest novel national security challenges of this decade. The threat is neither distant nor improbable – it is a quickly emerging reality. Despite this, new legal and policy frameworks are not necessarily needed. Traditional security apparatuses – defense, domestic security, intelligence, law enforcement and emergency management – are already positioned to handle national security threats; they are simply not prioritizing this one as highly as the urgency and scale of the problem require. Policymakers must therefore direct their security establishments to prioritize managing the risk of AI weaponization, including by adapting existing approaches for managing CBRN risk to account for the integration of AI. This direction should be supported by additional talent and funding for intelligence collection, threat identification, interdiction, disruption and prosecution. An ‘AI weaponization’ mission or taskforce reporting to the national security advisor would signal urgency and drive outcomes.
Also see:
An article last year by Global Shield’s Director of Policy on the global governance of AI. He argues for adapting and upgrading existing weapons conventions and treaties, such as the Non-Proliferation Treaty, the Chemical Weapons Convention and the Biological Weapons Convention, to better account for the added capabilities provided by AI systems. These upgrades could take the form of interpretative declarations or statements, guidance documentation and technical updates, which might also pave the way for more binding (but harder to negotiate) amendments and protocols.
A new report by Brussels-based think tank the International Center for Future Generations (ICFG) on the policy implications of advanced AI for the European Union.
This briefing is a product of Global Shield, the world’s first and only advocacy organization dedicated to reducing global catastrophic risk across all hazards. With each briefing, we aim to build the most knowledgeable audience in the world when it comes to reducing global catastrophic risk. We want to show that action is not only needed, it’s possible. Help us build this community of motivated individuals, researchers, advocates and policymakers by sharing this briefing with your networks.