Global Shield Briefing (24 November 2025)
Resilience in NATO, AI risk in Australia, and investment in AI
The latest policy, research and news on global catastrophic risk (GCR).
Our view – unsurprisingly – is that the world is becoming increasingly risky. The risks from climate change, pandemics, weapons of mass destruction, emerging technologies and Mother Nature are not diminishing, largely because the political, societal, economic, technological and environmental drivers underpinning them are not being addressed. And countries are not taking enough action to get ready.
So let’s take a different angle and conduct a simple thought experiment: if we manage to avoid a catastrophe, how did we do so? Perhaps we are simply lucky: the risk keeps growing because we don’t address it, but we never quite tip over the precipice. Perhaps we are plucky: we summon the smarts and courage to reduce the risk to a manageable level. Or perhaps we come so close to a truly catastrophic event that we are forced to get our act together. Luck it, pluck it or duck it. Where do you land?
Building resilience among NATO Allies

On 25 June 2025, NATO Heads of State and Government participating in the meeting of the North Atlantic Council in The Hague issued a declaration reaffirming their collective commitment to NATO and establishing a new defence spending commitment of 5 percent of GDP. As part of this commitment, the declaration states that “Allies will account for up to 1.5% of GDP annually to inter alia protect our critical infrastructure, defend our networks, ensure our civil preparedness and resilience, unleash innovation, and strengthen our defence industrial base.” Colloquially, this has become known as the 3.5+1.5 agreement: the traditional military spending target has increased from 2 percent to 3.5 percent of GDP, and a new, related category of spending to fulfil other NATO commitments on resilience now has an official target of 1.5 percent of GDP.
However, according to non-resident fellows of the Atlantic Council: “The problem is simple: No one knows exactly what counts toward that 1.5%, and the first progress check slated in the summit declaration is set for 2029. The declaration text provided no definitions, no annex of eligible categories, no oversight mechanism, and no reporting standards.”
Global Shield is seeking a Director of NATO Policy to help turn this pledge into reality. The role will work to ensure the commitment delivers, so that NATO Allies ultimately make wise investments that improve their resilience to the wide array of 21st-century security threats. The efforts of the 32 NATO Allies to deliver on the 1.5 percent commitment will help prepare everyone, not just NATO, for global catastrophic risk, including by supporting civilian health systems, critical infrastructure, food and water resources, civil communication and transportation systems, and continuity of government (as stipulated in NATO’s seven baseline requirements for resilience). We’re looking for someone with deep NATO policy experience, strategic insight, and a passion for strengthening collective resilience. Read the full job description here, and if you are interested or would like to recommend someone for the position, please reach out.
Briefing Australian policymakers on AI risk

Over the past month, Global Shield Australia briefed Members of Parliament, Senators, policy officials, and industry partners on the need to urgently address the risk posed by the misuse of artificial intelligence (AI).
On 5 November, with partners Good Ancestors and CivAI, we demonstrated how publicly available AI models can create deepfakes, supercharge phishing emails, and guide users through the steps necessary to make a bioweapon. Beyond illustrating the dangers of misused AI, Global Shield highlighted steps Australia can take immediately to reduce the risk of bad actors misusing AI. These include establishing monitoring and reporting of AI incidents, to enable tracking of and response to AI-related harms across the entire economy, and setting specific security standards and obligations to prevent rogue actors from misusing advanced and high-risk AI models and applications.
Global Shield Australia also gave evidence to the Joint Standing Committee on Electoral Matters as part of its inquiry into the 2025 federal election. At the hearing, Director of Global Shield Australia Devon Whittle provided the committee with insights into the potential impact of AI models on disinformation and misinformation during electoral processes. He stated: “The immediate threats — deepfakes, disinformation, and automated influence campaigns — are now familiar. But the deeper concern is how AI systems might subtly shape political views, often without users even realising this is occurring.”
Devon also presented at the Safeguarding Australia Summit on the threat AI poses to cyber security and critical infrastructure. He noted: “There are a limited number of foundation AI models that are widely used to power a variety of tools and applications. As a result, failures at the model level can rapidly cascade across sectors.”
Navigating the political economy of AI investment
There have been major developments in artificial intelligence over the last month. Other newsletters, such as those from the Center for Security and Emerging Technology, Transformer News and Concordia, are better positioned to provide news and insights on AI progress.
But one topic has dominated AI and financial headlines recently: whether AI investment is a bubble. A few figures illustrate the scale of funding flowing to AI companies and the infrastructure required to support them. Venture capital firms have put $161 billion into AI startups this year, according to the Financial Times, and ten of those startups, including OpenAI and Anthropic, now have a combined valuation of $1 trillion. According to Reuters, about 46 percent of global venture funding in the third quarter of 2025 went to AI companies. Meanwhile, the AI data centers built in 2025 are expected to incur $40 billion in annual depreciation, roughly double the $15-20 billion in revenue they are estimated to generate. Consulting firm Bain estimates that, by 2030, AI companies will need US$2 trillion in combined annual revenue to fund the computing power required to meet projected demand, but their revenue is likely to be around US$1.2 trillion, an annual shortfall of roughly US$800 billion.
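To make the arithmetic behind these figures explicit, here is a minimal back-of-the-envelope sketch in Python. It is our own illustration using the estimates cited above (taking the midpoint of the $15-20 billion revenue range), not a calculation from the underlying reports:

```python
# Back-of-the-envelope arithmetic on the AI investment figures cited above.
# All values are in billions of US dollars. revenue_2025 is the midpoint of
# the $15-20 billion estimate; the other numbers are taken directly from
# the figures quoted in the text.

depreciation_2025 = 40           # annual depreciation, data centers built in 2025
revenue_2025 = (15 + 20) / 2     # midpoint of estimated annual revenue

revenue_needed_2030 = 2_000      # Bain: annual revenue needed by 2030
revenue_projected_2030 = 1_200   # Bain: projected annual revenue in 2030

ratio = depreciation_2025 / revenue_2025
gap = revenue_needed_2030 - revenue_projected_2030

print(f"Depreciation is about {ratio:.1f}x estimated revenue")   # ~2.3x
print(f"Projected 2030 funding gap: ${gap} billion per year")    # $800 billion
```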
Policy comment: Global Shield has no position on whether there is an “AI bubble”. But bubble or not, the large capital investment in AI and AI-related infrastructure could have implications for global catastrophic risk. Such large investment drives a large demand for returns, creating additional competitive pressure between AI companies and reducing incentives to adopt safety measures that might be perceived to slow innovation. Meanwhile, policymakers will anticipate, correctly or not, that these capital investments in AI will yield major economic and productivity benefits, potentially shaping their countries’ economic policy. AI companies will also have more funding to put towards lobbying governments around the world. This nexus between AI, finance and politics demonstrates that GCR reduction is not simply a function of technical capabilities or safety measures. As with climate change and nuclear weapons, the political economy of AI is becoming a key driver of any underlying global catastrophic risk from AI development, whether or not AI itself presents the technical risk that many AI scientists argue it does.
This briefing is a product of Global Shield, an international advocacy organization dedicated to reducing global catastrophic risk from all hazards. With each briefing, we aim to build the most knowledgeable audience in the world when it comes to reducing global catastrophic risk. We want to show that action is not only needed, it’s possible. Help us build this community of motivated individuals, researchers, advocates and policymakers by sharing this briefing with your networks.

