Global Shield Briefing (15 October 2025)
Futures, geoengineering, and three types of AI risk – and welcoming two new team members
The latest policy, research and news on global catastrophic risk (GCR).
“Prediction is very difficult, especially about the future.”
Attributed to physicist and Nobel laureate Niels Bohr, as well as to baseball Hall of Fame player and manager Yogi Berra (though first stated by Karl Kristian Steincke, a Danish politician and a founder of Denmark’s social democracy), the quip captures the humility required when confronting uncertainty. Whether in politics, physics or baseball, prediction is bound to fail: the world is too complex and too unpredictable.
But deeply considering the future is not a futile exercise. Whatever answer one arrives at, the power lies in the path taken to get there. The act of looking ahead disciplines the mind, illuminates blind spots, and tests our core assumptions. It is a quiet rehearsal for the world that might be, and an invitation to build the world we want.
Building futures into policymaking

The European Commission (EC) has released its 2025 Strategic Foresight Report, the fifth in a series that began in 2020. The report sets out a vision for a resilient EU in 2040. It identifies eight areas of action, including harnessing the power of technology and research (including by shaping guardrails for high-impact technologies), strengthening long-term economic resilience, and strengthening democracy as a common good.
NATO’s Second Foresight Conference was held over 7-9 October, co-hosted by NATO Allied Command Transformation and the NATO Defence College. The event gathered senior officials, scholars and strategists to explore the challenges and opportunities shaping NATO’s long-term future. Day 2 focused on a “whole-of-society” approach to resilience, including societal readiness and investment in civil defense. Attendees participated in “Crisis Room 49”, an immersive exhibition that reimagines crisis response for future scenarios.
The Government of Finland has produced a 171-page “Report on the Future”. It covers four scenarios: “A world of cooperation”, “A world of tech giants”, “A confrontational world” and “A crumbling world”. The last describes a chaotic and conflict-ridden world brought about by major economic decline, trade wars, unfettered authoritarianism and irreversible global warming. The report also highlights various wildcard scenarios, including a new Ice Age in Europe “triggered by climate change and the collapse of the Gulf Stream ocean current.”
Policy comment: Futures and foresight exercises can be an important mechanism for sensitizing policymakers to risk and uncertainty. As decisionmakers’ apertures increasingly narrow, these efforts help broaden their lens to a wide array of possible future scenarios. The future could, of course, include a global catastrophe. The ability of futures work to shape policymaking depends heavily on its integration with policymaking processes. For example, the futures question being explored should be set by a senior policymaker or anchored to a relevant policy question. The topics addressed in the EC’s Strategic Foresight Reports are guided by the European Parliament, and the Commission has a mandate to integrate foresight tools into policymaking. The Government of Finland’s report is delivered to its Parliament by the Prime Minister’s Office, which also coordinates the ministries’ joint foresight activities.
Ideally, a futures analysis is conducted ahead of a major policy decision. It helps build an understanding of both the impact of the future on the policy (i.e., how the future will change the uses and utility of the policy) and the future of the policy (i.e., how the policy itself needs to evolve). Including senior policymakers in the process is another way to better integrate futures and policymaking. Often, the value of the exercise lies not in the final report, but in the process of getting there. Getting senior ministers or officials in a room to speculate about the future will face resistance. But if the foresight exercise is conducted ahead of a highly consequential policy direction – such as a national security strategy or defense strategy – it will strengthen both the policy decisions made and the commitment to implementing them.
Engineering an approach to geoengineering
The topic of geoengineering continues to heat up.
A Carnegie Endowment report, “Assessing Risks in the Era of Planetary Security”, proposes a novel and holistic framework for geoengineering, which it argues “poses three forms of global catastrophic risk”. First, termination shock, whereby global temperatures rise rapidly if a deployed intervention is stopped suddenly. Second, systemic destabilization, whereby geoengineering causes cascading failures in other societal and ecological systems. Third, the prospect of geoengineering solutions could delay meaningful emissions reductions and other climate change mitigation, leading to ‘overshoot’ of the 1.5 degrees Celsius target.
In a new piece, the Council on Strategic Risks assesses that, “From a technical perspective, it is currently unlikely that a country could or would choose to weaponize solar geoengineering”. Rather, geoengineering fits within the broader security landscape through its impact on geopolitics, on conflict and escalation miscalculations, and on destabilizing disinformation.
An article in Nature argues that the total cost of solar radiation management needs to account for non-technical costs and risks. For example, the authors consider the geopolitical and political obstacles to agreeing on or implementing a geoengineering solution. A delayed or mishandled effort could lead to the termination shock or overshoot described above.
The EC 2025 Strategic Foresight Report from above also comments on this topic: “Currently, there is no international framework to govern [geoengineering] research, testing or deployment. Still, several nations have the required capabilities and might test them, for example via stratospheric aerosol injection. Others, like the UK, are investing substantially in SRM [solar radiation management] research, thereby gaining knowledge and expertise as a basis for future evidence-based trade-offs and a role in international decision-making.”
Policy comment: Geoengineering usefully illustrates the problem of dealing with global catastrophic threats in silos. As these pieces of research show, geoengineering links to other global threats and challenges. It is a technical solution to, and a potential exacerbator of, climate change. Like AI and bioengineering, it is an increasingly powerful technology with weak national and international governance. It links to the factors that make societies vulnerable to catastrophic risk as a whole, such as food and water system insecurity and mis- and disinformation. And like other catastrophic threats, the future of geoengineering risk will be driven by a combination of scientific advances, geopolitical tensions, rogue actors and multilateral breakdown. This complex policy challenge points to the need for governments to consider these issues holistically.
Given its complexity, exploring geoengineering governance should start small. A single country must take the lead on mapping the technical, societal, security, environmental and geopolitical challenges. Canada might be particularly well-placed. It hosts the NATO Climate Change and Security Centre of Excellence (CCASCOE). And Canada has a strategic interest in protecting the Arctic from climate change as well as geoengineering. According to the North American and Arctic Defence and Security Network, “As the subject matter leader, Canada could also act as a neutral middle-power, and take a leadership role on the international stage, including leading negotiations on limits to research and deployment.”
Recognizing the nexus of AI and existing threats
“The potential risks of AI involvement in the design of pathogen-based bioweapons are increasing, although experts differed in their estimations of how quickly”, according to a new RAND report based on input from both AI and biotechnology experts. The experts assess that a catastrophic scenario – in which AI goes rogue and autonomously designs radically novel and dangerous pathogens – remains implausible, at least through 2027. There was consensus on potential policy options, including monitoring technological capabilities, reinforcing traditional biosecurity safeguards, fostering a culture of responsible AI use in biological research, and implementing regulatory and institutional safeguards.
A team at Microsoft “worked with four commercial DNA synthesis companies to stress test and develop patches for screening methods to greatly improve their ability to identify sequences that should be restricted”, the results of which they have published in Science. The team used AI to redesign toxins in a way that let them slip past biosecurity screening software. Microsoft says it alerted the US government and software makers, who have already patched their systems.
Speaking at the United Nations General Assembly, President Trump stated that “despite that worldwide catastrophe [the COVID-19 pandemic], many countries are continuing extremely risky research into bio-weapons and man-made pathogens…To prevent potential disasters, I’m announcing today that my administration will lead an international effort to enforce biological weapons convention.” He suggested developing an AI verification system, noting that “[AI] could be one of the great things ever, but it also can be dangerous.”
Policy comment: As policymakers and advocates seek to manage AI risk, it is important to distinguish between three general types of AI risk. First, AI exposes societies’ existing vulnerabilities, like mass economic dislocation or a challenged information landscape. Second, advanced AI itself might pose an entirely new threat, as in a loss-of-control or “rogue AI” scenario. Third, AI enhances the capabilities of end-users to do harm across a very wide range of existing threat vectors. It is this third category that is receiving increasing focus from AI safety advocates and national security practitioners, because the consequences could be catastrophic.
Just about any harmful behaviour or action that previously relied on human intention and intelligence can be amplified with AI. Everyday harms, like online fraud and harassment, could be turbocharged. At the most catastrophic end, AI could be used in the development and deployment of biological, chemical, autonomous, cyber and nuclear weapons. The challenge for policymakers is how to address these three categories of AI risk in a holistic and comprehensive way while maximizing AI’s potential. All three categories are worthy of attention and policy efforts. But managing the nexus of AI and existing threats is potentially the most pressing.
Existing government assets and capabilities – like law enforcement, R&D, emergency response, security, defense, and intelligence – are already focused on the threat landscape. They must assess their ability to manage AI-exacerbated threats, especially while AI policy and regulation are still forming and maturing. Policymakers could direct their intelligence or technology assessment agencies to conduct a “net assessment” of AI risk. Like a “National Intelligence Estimate” process led by the US Office of the Director of National Intelligence, it would inform policy development to manage the three types of AI risk holistically.
Welcoming two new Global Shield members
Marvin Meintjies has joined as Brand and Communications Director. Marvin heads up public affairs, communications, and brand marketing for Global Shield, helping to energize a range of stakeholders around our mission to shape policy and avert catastrophe.
With decades of international media experience as a journalist and editorial executive, Marvin has built his career by getting to the heart of the story. His wide-ranging experience includes leading commercial media titles, managing investigative units, and making impactful interventions as a foreign correspondent writing on international affairs. His deeply reported and researched humanitarian journalism for the United Nations helped to drive action in crisis response. He now leverages his expertise in communications, media, and stakeholder relations in the corporate affairs arena. Marvin is a passionate advocate for building a future in which humanity thrives.
We also welcome Maria Laura Starling. Laura leads Global Shield’s international growth by identifying promising countries for expansion, engaging with key partners and stakeholders, and ensuring the legal, policy and operational foundations needed to establish new country offices.
Laura has built her career at the intersection of public policy and innovation in Brazil. She began as a Specialist in Public Policies and Government Management in Minas Gerais, Brazil’s second-largest state. From 2020 to 2021, she played a central role in the COVID-19 Emergency Situation Room at the State Secretariat of Health, helping to coordinate the state’s pandemic response. She later became Director of Open Innovation and Technological Entrepreneurship for the Government of Minas Gerais, where she oversaw HubMG GOV, the largest open innovation program for public administration in Latin America. Laura also co-founded Instituto Aleias, a nonprofit dedicated to increasing the number of women in leadership positions in the Brazilian public sector.
This briefing is a product of Global Shield, an international advocacy organization dedicated to reducing global catastrophic risk from all hazards. With each briefing, we aim to build the most knowledgeable audience in the world when it comes to reducing global catastrophic risk. We want to show that action is not only needed – it’s possible. Help us build this community of motivated individuals, researchers, advocates and policymakers by sharing this briefing with your networks.