Global Shield Newsletter (9 November 2023)
A deep-dive into convergent risk and how to tackle it
This twice-monthly newsletter highlights the latest policy, research and news on global catastrophic risk (GCR).
This edition takes us out of our regularly scheduled program to dive deeper into convergent risk. When two threats collide, the result is a challenging research topic, a knotty policy problem and an opportunity for action.
Reducing convergent risk

At Global Shield, we promote an all-hazards approach to reducing global catastrophic risk. Dealing with specific threats is important. But we believe a holistic approach to reducing GCR is more efficient and effective.
Having said that, there is a type of risk that sits in between an all-hazards and a threat-specific approach, one that neither captures neatly.
Convergent risk
When two or more threats interact, a mutant emerges. It potentially narrows or accelerates the path to catastrophe, with one threat exacerbating the risk from another. Perhaps it’s sequential – a risk cascade, where one threat occurring makes another more likely or more damaging. Or it changes the nature of the risk entirely. Like Godzilla, something new has arisen that neither threat lens quite brings into focus.
The recent discussion and policy efforts around the intersection of artificial intelligence and biological risk are the clearest example. AI capabilities, such as machine learning and large language models, could be used to engineer dangerous pathogens more quickly, easily, cheaply and effectively.
Just in the past couple of weeks, this issue has come to the fore. On 30 October, the Nuclear Threat Initiative released a detailed report on the topic, conveniently titled “The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe.”
The same day, the Biden Administration released an Executive Order on AI, which tasked the national security community with assessing how AI could increase biological risk. It also required strengthened biosecurity measures for DNA synthesis, given how AI could enhance those capabilities.
However, AI–bio is just one convergence.
In the GCR space, a number of other convergences are noteworthy. AI’s integration into weapons systems – nuclear, chemical, cyber and autonomous – will produce its own unique combination of risks in each domain. And AI could empower authoritarians to track, monitor and suppress the democratic impulses of their peoples.
Climate change, apart from being a catastrophic threat in and of itself, could exacerbate other threats. It dials up the risk of infectious diseases, as does broader ecological damage. Food security, crucial in a catastrophe, will be under severe pressure from climate change. And climate is a key element across the nine planetary boundaries, six of which have already been crossed.
A more unusual, though long-standing, convergence is between near-Earth objects and biological risk. The concern – called back contamination – is that spacecraft returning from other planets or asteroids could accidentally carry back organisms or contaminants to which we have no resistance.
Why convergence is important
At first blush, convergent risk might seem a bit absurd. One unlikely threat meets another unlikely threat – doesn’t that simply mean it’s an even more unlikely outcome?
But it’s an area worthy of exploration for a few reasons.
First, convergence shapes prioritization. A convergent risk might be a particularly harmful or direct path to global catastrophe. And it can change the risk assessment for specific threats. For example, reducing the risk of a global pandemic or nuclear war might now demand increased focus due to the convergence with AI.
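To see why the “unlikely times unlikely” intuition can mislead, here is an illustrative sketch (the numbers are purely notional, not drawn from any assessment). If two threats A and B were independent, the chance of both occurring would be:
P(A and B) = P(A) × P(B) – e.g. 1% × 1% = 0.01%
But convergence means the threats are dependent, so:
P(A and B) = P(A) × P(B given A) – e.g. if A occurring makes B far more likely, 1% × 50% = 0.5%
That is fifty times higher than the independence assumption would suggest.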
Second, convergence encourages multidisciplinary engagement. Each domain might not fully grasp the various intersections or the evolving threat landscape. For example, climate change might expand the geographic range of disease-carrying insects like mosquitoes, meaning policy advice requires expertise in biology, ecology, climate science, entomology and public health. Without that breadth, convergent risk could fall through disciplinary and policy gaps. Bringing attention to convergence attracts, like a magnet, a range of experts to evaluate and reduce the risk.
Third, convergence promotes a more complete policy discussion. Dealing with a convergent risk adds a different dimension to risk reduction efforts. Take the climate–food intersection. Climate change policy focuses on reducing greenhouse gas emissions, while food security policy aims to ensure everyone can be fed after a catastrophe. The convergent risk requires a broader view: building food systems that are resilient to disruptions, disasters and droughts, as well as to reduced-sunlight scenarios.
The policy challenge, and opportunity
Convergent risk presents a knotty policy challenge. Tackling it requires four angles: reducing threat A, reducing threat B, targeting the intersection itself, and taking an all-hazards approach. All told, the range of policy options across these angles can become overwhelming. Policy researchers, advocates and practitioners should consider the full range, but conduct analysis that compares the effectiveness of each option. Attempts to reduce convergent risk should not default to simply addressing the convergence directly. Conversely, if there are no policy responses aimed at the convergence itself, that might signal a lack of multidisciplinary engagement. Developing a policy strategy will probably require a balance across all four angles.
The opportunity of convergent risk is that it highlights a particularly potent vector, one that the public and policymakers can grapple with. For example, while AI risk might seem nebulous in the broad, its intersection with bioengineering or nuclear weapons makes the risk stark. While climate change policy faces hurdles, its intersection with food security opens an alternate path. There is also an opportunity to apply all-hazards approaches – such as monitoring and warning, science and technical capability, and risk governance – to better manage risk as a whole. Ultimately, convergence can be a leverage point for new and targeted action.
New report from Global Shield’s Rumtin Sepasspour
Rumtin, our Director of Policy, has a new report titled, “All-Hazards Policy for Global Catastrophic Risk.” A high-level explainer on this topic was covered in an earlier edition of this newsletter.
The report provides the first in-depth study of all-hazards approaches for GCR, including a framework for all-hazards policy. For those interested in policy efforts to reduce global catastrophic risk, this report should be on your reading list!
Reach out at rumtin.sepasspour [@] globalshieldpolicy.org if you have any questions or comments on this report.