This twice-monthly newsletter highlights the latest research and news on global catastrophic risk (GCR). It looks at policy efforts around the world to reduce the risk and identifies insights from the research for policy-makers and advocates.

GCR in the media
“Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not ‘misused by evil people’, former Google CEO Eric Schmidt warned. ‘And existential risk is defined as many, many, many, many people harmed or killed,’ Schmidt said. The future of AI has been thrust into the center of conversations among technologists and policymakers grappling with what the technology looks like going forward and how it should be regulated.” A.I. poses existential risk of people being ‘harmed or killed,’ ex-Google CEO Eric Schmidt says (CNBC)
“Scientists have estimated that 1.67 million yet-to-be-discovered viruses exist in mammals and birds, and about half of them have the potential to spill to humans. As far back as 2018, the World Health Organization (WHO) gave this unknown future outbreak a placeholder name: Disease X. It represents the ‘knowledge that a serious international epidemic could be caused by a pathogen currently unknown to cause human disease,’ the WHO explained. One year after that designation, COVID-19 was identified as the first in the mysterious category that scientists had warned about. Today, more infectious outbreaks seem inevitable.” Disease X is coming, and with it the next global pandemic, scientists warn (National Post)
“Regulating such a fast-moving industry is likely to prove difficult, but certain principles can be enacted. Companies using large datasets to train their AI tools could be forced to share information with governmental agencies, for example. They could also be made to hire ‘red teams’ of outside experts to pretend to be malicious actors to simulate how the technology could be misused. People who are working on particularly sensitive technology could be required to sign agreements that they will not release it to particular groups or governments. There is also a question of liability. Ministers may soon have to decide who should be responsible should something go wrong with a particular product: the user or the developer?” Is No 10 waking up to dangers of artificial intelligence? (The Guardian)
“This ‘San Francisco Project’ — named for the industrial epicenter of AI — would have the urgent and existential mandate of the Manhattan Project but, rather than building a weapon, it would bring the brightest minds of our generation to solve the technical problem of building safe AI. The way we build AI today is more like growing a living thing than assembling a conventional weapon, and frankly, the mathematical reality of machine learning is that none of us have any idea how to align an AI with social values and guarantee its safety. We desperately need to solve these technical problems before AGI is created.” Why we need a "Manhattan Project" for A.I. safety (Salon)
“The protestors pointed out that it was particularly crazy that Altman himself has warned that the downside risk from AGI could mean ‘lights out for all of us,’ and yet he continues pursuing more and more advanced A.I. Similar protestors have picketed outside the London headquarters of Google DeepMind in the past week. I am not sure who is right here. But I think that if there’s a nonzero chance of human extinction or other severely negative outcomes from advanced A.I., it is worthwhile having at least a few smart people thinking about how to prevent that from happening.” Top A.I. companies are getting serious about A.I. safety and concern about ‘extremely bad’ A.I. risks is growing (Fortune)
“Current climate policies will leave more than a fifth of humanity exposed to dangerously hot temperatures by 2100, unprecedented new research suggests. The paper, published on Monday and co-authored by academics from around the world, examines the ‘human climate niche’ – the temperature range in which humans have lived and flourished throughout history – and how warming could see billions of people falling outside of it…The research found that under the worst-case scenarios of 3.6C or even 4.4C global warming, half of the world’s population could be left outside the climate niche, posing an ‘existential risk’.” Current climate policy to ‘leave two billion exposed to dangerous heat by 2100’ (Yahoo! News)
“NASA has built an artificial intelligence model to predict where on Earth an impending solar storm would strike, a new system that scientists said can provide ‘30 minutes of advance warning’. The AI model analyses NASA satellite data to raise the alarm on dangerous space weather, said researchers from the American space agency’s Goddard Space Flight Center. The warning may provide just enough time for countries to prevent severe impacts of these storms on power grids and other critical infrastructure, according to the new study published recently in the journal Space Weather.” NASA’s new AI gives ‘30 minutes of advance warning’ before killer solar superstorms strike Earth (Independent)
Latest policy-relevant research
Establishing institutions for guarding future generations
A small group of countries have experimented with institutions intended to act as guardians for future generations, but these mechanisms are not a silver bullet for countering harmful short-termism in contemporary democratic politics, according to a new study in Futures. These institutions are not reliably achieving their goals: they tend to fail to deliver because of a lack of transparency, miscalibrated levels of power (whether too much or too little), poor funding and a perceived lack of neutrality. Their greatest potential source of value for future generations, reducing existential risk, remains largely untapped. (23 May 2023)
Policy comment: Institutions for future generations are an imperfect policy option for reducing global catastrophic risk. The scope of future generations mechanisms often prioritises other long-term domestic policy issues, such as infrastructure, healthcare and education; if such a mechanism is established with the purpose of focusing on global catastrophic risk, its scope should say so explicitly. The main challenge for these mechanisms is how to integrate them into policy-making and decision-making processes without their being politicised or usurped. Another option is to structure such an institution as a forum where legislators from different political parties engage with long-term issues. Although it would not contribute directly to policy-making, it could help build norms around commitments to future generations, convene legislators around emerging or neglected issues, and foster bipartisan support for initiatives that reduce global catastrophic risk.