Global Shield Briefing (29 January 2025)
Global risk reports, emergency response and uncertainty of AI progress
The latest policy, research and news on global catastrophic risk (GCR).
Short-termism. An oft-cited foe when discussing how to influence policymakers on global catastrophic risk. Economic challenges. Electoral processes. Media cycles. Having policymakers zoom out and think about ‘the long term’ seems, to most people, to be a massive hurdle for change.
It’s a false dichotomy. Indeed, falling into this trap could perpetuate the sense among policymakers that global catastrophic risk is unlikely or futuristic. Global catastrophic risk is with us now. Two nuclear-armed states are engaged in major regional conflicts, to say nothing of the massive build-ups of nuclear weapons. The impacts of climate change are already being felt. While COVID-19 is still with us, the US is facing an escalating bird flu crisis, and a concerning outbreak of the Marburg virus is being managed in Tanzania. AI has already approached human levels on a range of artistic and intellectual tasks, a feat that some experts previously thought was decades away.
Nowadays, policymakers are often dealing with crisis upon crisis – geopolitical, economic, environmental, societal, political. The more pertinent challenge is how to focus policymakers’ attention on prevention and preparedness when they are perpetually in response and recovery mode. As we enter a new era of global catastrophic risk, it’s worth taking a fresh look at what it takes to convince policymakers.
Turning risk reports into action

The World Economic Forum has released its annual “Global Risks Report”, based on input from over 900 global leaders across academia, business, government, international organizations and civil society. It found that 88 percent of respondents expect at least a moderate risk of global catastrophe over the next two years (up from 84 percent), rising to 92 percent over a ten-year horizon (up from 91 percent). The three most likely global crises for 2025 were state-based armed conflict, extreme weather events and geoeconomic confrontation.
The Bulletin of the Atomic Scientists has moved the Doomsday Clock from 90 to 89 seconds to midnight. According to its statement, “Because the world is already perilously close to the precipice, a move of even a single second should be taken as an indication of extreme danger and an unmistakable warning that every second of delay in reversing course increases the probability of global disaster.”
Eurasia Group and Foreign Policy also released their own risk assessments for 2025, each listing ten risks. While most were geopolitical, those of relevance to global catastrophic risk included Eurasia Group identifying the possibility of a climate tipping point, Foreign Policy noting a potential breakdown in US-China relations, and both expressing concern about the lack of AI governance.
Policy comment: These types of public assessments are generally useful for policymakers to sense-check the views of global and geopolitical risk experts against their own. However, they are very high-level, differ in methodology and scope, and target many audiences. They might not be instructive for immediate and precise policy change. Policymakers, from the head of government down to a policy officer, therefore need to translate the assessments’ findings for their own jurisdiction. To start with, these reports and publications might have identified a specific risk area, risk factor, trend or scenario that is not on the policymaker’s radar. Selecting one of these items, conducting a high-level review of its impact on their jurisdiction, and assessing the ways and the extent to which the nation is vulnerable would help identify policy and capability gaps. More broadly, governments need to conduct internal analyses of global risk. National risk assessments are one method used by some, but very few, governments. Strategic intelligence and defence agencies also develop assessments of global issues, though these efforts are often inconsistent, rarely made public, and not whole-of-government exercises.
Also see:
An article by our Policy Director from last year on the purpose and limitations of the Doomsday Clock.
Reforming emergency response

The US has issued an Executive Order aimed at reforming the Federal Emergency Management Agency (FEMA), the leading, though not exclusive, disaster response and assistance agency in the US. The EO establishes a council of agency heads and external experts to advise the President on improving FEMA’s ability to respond to disasters and support States in their disaster response. The council is required to hold its first public meeting within three months and to submit its report to the President within six months of that meeting.
Policy comment: The need to reform FEMA and the emergency management system has been a long-standing, bipartisan issue in the US. This reform could pave the way for improving how FEMA manages global catastrophic risk, while also empowering non-federal governments to better manage more localized disasters. In particular, the EO requires the sourcing of external analysis, debate and commentary on FEMA’s role and operations. Ultimately, modern hazards are far greater than when FEMA was established in 1979, and global catastrophic risk remains a largely neglected tail of the risk portfolio, despite catastrophic response plans placing FEMA on the front lines of any global catastrophe. The council’s engagement with external experts is key: it opens the door to flagging the need for FEMA to prioritize global catastrophic risk. This EO is also emblematic of the Trump Administration’s potential impact on global catastrophic risk reduction. President Trump is delivering on his broad campaign promise to reimagine how government functions in the national interest. Government reviews and reforms like these, even if not directly motivated by global catastrophic risk, create openings that typically occur only after a major catastrophe.
Seeking a deeper view on AI progress
A breakthrough in AI models by a Chinese company is testing assumptions about the gap between Western and Chinese AI companies. DeepSeek’s new model, R1, has outperformed leading models like OpenAI’s o1 and Anthropic’s Claude 3.5 on a number of benchmarks while apparently using far fewer resources. DeepSeek also released the model’s weights, a practice that remains the subject of debate worldwide. Further, DeepSeek is garnering considerable praise for its engineering of the model, using techniques that may soon be replicated by other companies for further capability gains. The release suggests that Chinese companies can narrow the gap with, or even surpass, their Western counterparts despite limited access to high-end semiconductors and less computing power. Other AI experts have pushed back on this narrative, noting that export controls and computing power remain major constraints on DeepSeek’s future, and that its success so far can largely be explained by the easier task of replicating existing models.
In a 28 January interview with The Economist, Anthropic CEO Dario Amodei states that “2026-27 is the critical window [when] the models start getting better than humans at everything including AI design, including using AI to make better AI, including using AI to make all kinds of intelligence and defense technologies.” He goes on to state that, for the US to stay ahead of China, it must retain export controls, ensure energy provision for domestic AI companies, and continue the testing and evaluation of models to prevent misuse.
Policy comment: Massive uncertainties remain about the future of AI progress. Experts differ considerably on both the technical and geopolitical implications of the DeepSeek release, which reinforces that no one should feel confident in any prediction about the societal, economic and national security implications of AI. Simply put, policymakers need to be ready for tremendous uncertainty, while attempting to reduce it. Rapid technological advancement will create a fog of confusion ripe for policy error and geopolitical misunderstanding. Under these circumstances, strategic surprises are likely. Such surprises often provoke forceful policy reactions that can be as unpredictable and uncertain as the events themselves. Two classic examples, Sputnik and 9/11, had long tails in their national policy responses and global ripple effects. Whatever surprise comes from AI, policymakers can become better prepared through several policy tools: technology assessments that consider faster-than-expected AI progress; emergency response plans updated for AI risks; diplomatic channels established for crisis communication; and non-proliferation safeguards established and tested.
Also see:
DeepSeek’s performance playing chess, which left much to be desired.
This briefing is a product of Global Shield, the world’s first and only advocacy organization dedicated to reducing global catastrophic risk across all hazards. With each briefing, we aim to build the most knowledgeable audience in the world when it comes to reducing global catastrophic risk. We want to show that action is not only needed, it’s possible. Help us build this community of motivated individuals, researchers, advocates and policymakers by sharing this briefing with your networks.