GCR Policy Newsletter (19 September 2022)
AI regulation, space, supervolcanoes and future generations
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field of existential and global catastrophic risk studies.
GCR in the media
“When it comes to climate change, researchers are calling for more research into what happens if the world passes catastrophic global warming thresholds…Led by a team at the Centre for the Study of Existential Risk, the new paper looks at why ‘catastrophe’ must first be defined, and why digging into the interactions can help build our understanding of climate-related risk management.” Should we be studying what a human extinction event looks like?
“The fact is, given enough risks, and enough time, ruin is mathematically inevitable…The wolf didn’t come today…This doesn’t mean those who argue that the wolf exists and will arrive with some probability are alarmist fools…These become the ultimate in neglected problems.” When The Wolf Doesn't Come
“We have to reach a consensus on whether human-level AI indeed poses an existential threat to humanity...The fact that we don't know yet what manner of regulation would effectively reduce risk should not be a reason for regulators to not address the issue — but rather a reason to develop effective regulation with the highest priority.” Human-level AI is a giant risk. Why are we entrusting its development to tech CEOs?
“It’s unclear humanity will ever be prepared for superintelligence, but we’re certainly not ready now. With all our global instability and still-nascent grasp on tech, adding in ASI would be lighting a match next to a fireworks factory. Research on artificial intelligence must slow down, or even pause. And if researchers won’t make this decision, governments should make it for them.” How AI could accidentally extinguish humankind
“To [longtermists], risks that are most important are existential risks: the threats that don’t just make people worse off but could wipe out humanity entirely. Because they assign future people as much moral value as present people, they’re especially focused on staving off risks that could erase the chance for those future people to exist.” Effective altruism’s most controversial idea
“[On Monday, September 26 at about 7.14pm US time], NASA will deliberately crash a spacecraft into an asteroid in a $US330 million attempt to change the asteroid's course. It's the world's first full-scale mission to test technology for defending Earth against potential asteroid collisions.” NASA will crash a $330 million spacecraft into an asteroid. Here's why

Latest policy-relevant research
Regulating AI
The EU AI Act is likely to have significant implications for other jurisdictions through both a de facto and a de jure Brussels Effect, according to a GovAI report by Charlotte Siegmann and Markus Anderljung. De facto regulatory diffusion - where non-EU providers are incentivised to change the products they offer - is particularly likely in higher-risk use cases of AI such as employment decisions, medical devices and more general AI systems, partly because these AI products are not highly regionalised and demand is inelastic. A de jure Brussels Effect - where the EU’s decisions influence regulation adopted by other jurisdictions - could develop if US states adopt stringent AI legislation based on the EU AI Act. (16 August 2022)
Policy comment: The EU AI Act could set the benchmark for other countries’ AI regulations. Policymakers around the world face the challenge of developing legislation that captures highly dangerous AI misalignment within its risk assessment and categorisation. It will also be difficult to design institutions that are nimble enough to adapt as AI capabilities and standards rapidly evolve. Early regulatory design might be particularly important due to path dependency. But the US and China, which dominate the AI industry, will massively shape global norms and standards around AI and AI safety. EU regulation is unlikely to shape their decisions on strategically significant use-cases of AI.
Insurance has a vital role to play in regulating emerging technologies such as AI, according to an academic paper in the Harvard Journal of Law & Technology. Insurance offers a hedging tool that translates the many risks associated with AI into a manageable scope, and in doing so facilitates the adoption of AI into commercial markets. Given the unknown potential risks that AI entities might inflict upon their users and third parties, the premiums offered to manufacturers purchasing liability insurance are bound to be high. Even in the case of strong, general or superintelligent AI, insurance could play an important role. (August 2022)
Policy comment: The article’s assessment that an insurance market could exist for a singularity event is highly speculative and impossible to verify. However, a functional insurance market for AI risk is critical to nearer-term AI safety efforts. Policymakers should consider whether to enforce mandatory insurance for particularly dangerous but essential AI activities. A key challenge would be determining who should be responsible for purchasing an insurance policy - the consumer or the manufacturer. Further work is needed by legal experts, insurers and AI manufacturers on how to develop an insurance market for AI risk.
Bracing for space risk
Continued space development may increase, rather than decrease, overall existential risk, according to an academic article examining the risks resulting from outer space activities. Addressing these risks should take priority over the competing commercial, scientific and geopolitical interests that currently dominate space policy. Sensible changes, including shifting space into a closed-access commons as envisioned by the 1979 Moon Treaty, might help achieve existential security. (7 August 2022)
Policy comment: Policymakers should be aware of how activity in space could increase global catastrophic risk. The author identifies ten pathways through which space development could increase risk. The most plausible scenarios are asteroid deflection capability being used nefariously, potentially harmful extraterrestrial organisms being transferred into Earth’s biosphere, and space acting as a driver of military tensions and conflict. The paper’s proposed solution - to make space and its resources a closed-access commons, accessible only by orderly international agreement - is extremely unlikely in the current geopolitical context. Middle powers, particularly those with space agencies (such as Italy, Israel and Australia), could play an important role in putting the major risks posed by space development onto the international agenda, potentially through the UN Office for Outer Space Affairs.
We need to prepare for the risk of an extraterrestrial body striking Earth, even though the probability of such an impact causing a catastrophe is low compared with other natural disasters, according to a book chapter on ‘Extraterrestrial Hazards’. The physical shock from a celestial body could cause earthquakes or send dust into the atmosphere that blocks the sun. A large enough impact could change Earth’s mass, rotation speed, orbit or the direction of its rotation axis, potentially altering the climate and heat balance, ocean water levels and seawater movements. Without the technological knowledge to prevent such an event, the only thing we can do to mitigate this risk is to prepare adequately. (19 August 2022)
Policy comment: We currently lack the detailed know-how to avert an impending impact of a near-Earth object, so preparation could be a key method for reducing the risk. The author suggests developing warning systems, medical facilities with sufficient supplies, and plans for evacuation and recovery operations. NASA, however, is leading efforts on asteroid detection, and the findings of its Double Asteroid Redirection Test (DART) will improve our capability to prevent an impact - though, as the previous article suggests, there are risks in developing this capability. It is not clear how many resources should go to preparation as opposed to prevention. Further work on the costs and benefits of these two risk-reduction methods would help policymakers better prioritise their efforts.
Recognising the risk of supervolcanoes
Compared with the high levels of international cooperation and investment in safeguarding humanity from other natural risks, little effort has gone into protecting humanity from large volcanic eruptions, according to Michael Cassidy and Lara Mani’s commentary in Nature. The Hunga Tonga–Hunga Ha‘apai eruption in early 2022, a magnitude 5 eruption, cost Tonga around 18.5% of its GDP. The chance of a magnitude 7-or-higher eruption occurring this century is roughly one in six. The last eruption of this scale occurred in 1815 in Indonesia and killed about 100,000 people directly. It also reduced global temperatures by around 1 degree Celsius, cutting agricultural yields worldwide and causing famine and a rise in epidemics. (18 August 2022)
Policy comment: Further action is needed to reduce the potential severity of loss following an eruption. The authors recommend more geological surveys to locate and assess volcano hazards and further efforts to continuously monitor volcanic activity via satellite and radar. In many regions threatened by volcanoes, evacuation, supply chain, and infrastructural preparedness is severely lacking. Support and international cooperation from major countries, particularly the US, could greatly assist smaller and more vulnerable states. Reducing the likelihood of a supervolcanic eruption is also a potential avenue for action. Volcanic geoengineering could, in theory, decrease the damage incurred by volcanic eruptions or prevent an eruption entirely. But further research on its safety and feasibility is still needed.
Deciding who should represent future generations
Future generations should be represented in climate change decisions, according to an academic article by two researchers at the University of Warwick. A legitimate representative of future generations would be a person willing and able to represent their interests. These representatives must have direct experience in dealing with the adverse effects of climate change, since psychological evidence indicates that shared experiences elicit empathy. (1 August 2022)
Policy comment: How to account for the interests of future generations is a question that spans beyond the issue of climate change. The authors rely too heavily on finding the right person to represent future generations, rather than designing institutions that incentivise long-term planning. Without the proper institutional frameworks, these representatives will continue to face the political barriers to long-term policy thinking. Indeed, governments already have a number of built-in mechanisms to prepare for the long-term future, including legislatively mandated representation of future generations. However, their focus probably remains on existing policy priorities, such as infrastructure, healthcare and education, rather than extreme risks. Policymakers could more explicitly assess how institutional design affects global catastrophic risks and build mechanisms that more directly consider the risks that would damage the prospects of future generations.
Policy database updates
The GCR Policy team continues to collect policy ideas put forward by the field of existential and global catastrophic risk studies. The database currently contains over 800 ideas, of which over 300 are listed publicly here. We will continue to update the database with the latest research and policy ideas. Contact us for more information or access to the full database.
This newsletter is prepared by the team of www.GCRpolicy.com. Subscribe for twice-monthly updates on the latest policy news and research on global catastrophic risk.