GCR Policy Newsletter (3 April 2023)
Reducing AI risk outside liberal democracies and building food system resilience for a nuclear winter
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It covers policy efforts around the world to reduce the risk, as well as policy-relevant research from the field of global catastrophic risk studies.

GCR in the media
“So AI threatens to join existing catastrophic risks to humanity, things like global nuclear war or bioengineered pandemics. But there’s a difference. While there’s no way to uninvent the nuclear bomb or the genetic engineering tools that can juice pathogens, catastrophic AI has yet to be created, meaning it’s one type of doom we have the ability to preemptively stop.” The case for slowing down AI (Vox)
“Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.” Pausing AI Developments Isn't Enough. We Need to Shut it All Down (Time)
“In just a few months, the novelty of ChatGPT has given way to utter mania. Suddenly, AI is everywhere. Is this the beginning of a new misinformation crisis? A new intellectual-property crisis? The end of the college essay? Of white-collar work? Some worry, as [Arthur] Compton did 80 years ago, for the very future of humanity, and have advocated pausing or slowing down AI development; others say it’s already too late. In the face of such excitement and uncertainty and fear, the best one can do is try to find a good analogy—some way to make this unfamiliar new technology a little more familiar. AI is fire. AI is steroids. AI is an alien toddler.” AI Is Like … Nuclear Weapons? (The Atlantic)
“Opportunities for doomsday abound. Humans could be wiped out by a catastrophic asteroid strike, commit self-destruction with worldwide nuclear war or succumb to the ravages caused by the climate emergency. But humans are a hardy bunch, so the most likely scenario involves a combination of catastrophes that could wipe us out completely.” Will Humans Ever Go Extinct? (Scientific American)
Latest policy-relevant research
Reducing AI risk outside liberal democracies
Aside from misaligned AGI, AGI systems that are intent-aligned (they always try to do what their operators want them to do) would also create catastrophic risks, mainly due to the power they concentrate in the hands of their operators, such as militaries or totalitarian states, according to an academic paper in AI and Ethics. Around 72 per cent of AI ethics documents were released in the Global North rather than the Global South, according to a paper that traces and investigates a dataset of 100 documents on AI ethics and principles released between 2015 and 2022. Numerous parties are calling for ‘the democratisation of AI’, but the phrase is used to refer to a variety of goals whose pursuit can conflict, according to a pre-print paper by GovAI researchers. (March 2023)
Policy comment: While AI safety research and advocacy tends to focus on liberal democracies, especially English-speaking ones, a major risk of advanced AI systems arises from their use by non-democratic states and developing economies. These countries will seek advanced capabilities to narrow the gap with, and ultimately surpass, the military, economic and technological superiority of liberal democracies and advanced economies. Liberal democracies should increase AI safety efforts not only for their own benefit but also because doing so sends a positive signal to rivals about the need for safe and measured progress in AI. Such efforts would also serve as a key starting point for negotiations on global rules and standards for AI development. Meanwhile, researchers and advocates should devote more resources to tracking and shaping AI policy efforts outside the US, EU and UK.
Building food system resilience for a nuclear winter
At current production levels, frost-resistant food crops could not feed all New Zealand citizens following a nuclear war, according to a study by New Zealand-based catastrophic risk researchers. The optimised combinations of frost-resistant crops that were found to feed the entire population of New Zealand during various nuclear winter scenarios are, in descending order: wheat and carrots; sugar beet; oats; onions and carrots; cabbage and barley; canola and cabbage; linseed and parsnip; rye and lupins; swede and field beans; and cauliflower. At current production levels of these frost-resistant crops in New Zealand, there would be a 26 per cent shortfall in the ‘war without a nuclear winter’ scenario and a 71 per cent shortfall in the severe nuclear winter scenario. The New Zealand Government needs to conduct a detailed pre-war analysis of how these shortfalls can best be addressed. (14 March 2023)
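To make the shortfall figures concrete, here is a minimal sketch of the kind of calculation the researchers describe: comparing national food-energy demand with the energy that frost-resistant crops could supply at current production levels under a given nuclear-winter yield reduction. All crop tonnages, energy densities and the yield-reduction factor below are illustrative assumptions, not figures from the study; the study's own data imply the 26 per cent and 71 per cent shortfalls quoted above.

```python
# Illustrative shortfall calculation (all figures are assumptions, not study data).

POPULATION = 5_100_000            # approximate New Zealand population (assumption)
KCAL_PER_PERSON_PER_DAY = 2_100   # commonly cited minimum dietary energy (assumption)

annual_demand_kcal = POPULATION * KCAL_PER_PERSON_PER_DAY * 365

# Hypothetical current production (tonnes/year) and energy density (kcal/kg)
# for a few of the frost-resistant crops named above.
crops = {
    "wheat":      {"tonnes": 400_000, "kcal_per_kg": 3_400},
    "carrots":    {"tonnes": 120_000, "kcal_per_kg": 410},
    "sugar beet": {"tonnes": 300_000, "kcal_per_kg": 700},
}

def shortfall(yield_multiplier: float) -> float:
    """Fraction of annual food-energy demand left unmet when crop yields
    are scaled by yield_multiplier (1.0 = no yield loss)."""
    supply_kcal = sum(c["tonnes"] * 1_000 * c["kcal_per_kg"] * yield_multiplier
                      for c in crops.values())
    return max(0.0, 1.0 - supply_kcal / annual_demand_kcal)

# Example: no yield loss vs. a hypothetical 60 per cent nuclear-winter yield reduction.
print(f"No yield reduction:  {shortfall(1.0):.0%} shortfall")
print(f"60% yield reduction: {shortfall(0.4):.0%} shortfall")
```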
Policy comment: Governments should conduct a thorough assessment of their food system vulnerabilities across a range of global agricultural productivity reduction scenarios, including those arising from sunlight-blocking catastrophes. This assessment should inform preparedness measures, such as producing frost-resistant and alternative foods and stockpiling food, seeds and agricultural inputs. It would also inform response measures, such as rationing, scaling up food production and providing financial support to consumers and food producers. Policy researchers and advocates should be prepared with recommendations relevant to multiple food system shock scenarios, including those below the catastrophic level, to give policy-makers a range of options that may be more politically and financially feasible.