GCR Policy Newsletter (29 November 2022)
Transformative technologies, trans-disciplinary aspects of AI and delaying engineered pandemics
This twice-monthly newsletter highlights the latest research and news on global catastrophic risk. It looks at policy efforts around the world to reduce the risk and at policy-relevant research from the field.
GCR in the media
“Climate change is just part of the global environmental emergency. Biological diversity is also imperiled. Human activity is driving unprecedented declines in ecosystems and species, threatening the health and integrity of the biosphere and the innumerable benefits that we obtain from the natural world. Unfortunately, existing national policies and multilateral institutions have proven totally inadequate to address this potentially existential risk. Restoring balance between humanity and nature requires a paradigm shift toward “planetary politics,” accompanied by dramatic innovations in global environmental governance.” To Prevent the Collapse of Biodiversity, the World Needs a New Planetary Politics (Carnegie Endowment for International Peace)
“The difference is that these tools, as destructive as they can be, are largely within our control. If they cause catastrophe, it will be because we deliberately chose to use them, or failed to prevent their misuse by malign or careless human beings. But AI is dangerous precisely because the day could come when it is no longer in our control at all…While divides remain over what to expect from AI — and even many leading experts are highly uncertain — there’s a growing consensus that things could go really, really badly. In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be ‘extremely bad (e.g., human extinction)’.” AI experts are increasingly afraid of what they’re creating (Vox)
“Critics of longtermism say that the outlook almost exclusively concerns what Karin Kuhlemann, a lawyer and population ethicist at University College, London, labels ‘sexy global catastrophic risks’, such as asteroids, nuclear disaster and malicious AIs. Effective altruists are less bothered by ‘unsexy’ risks like climate change, topsoil degradation and erosion, loss of biodiversity, overfishing, freshwater scarcity, mass un- or under-employment and economic instability. These problems have no obvious culprit and require collective action.” The good delusion: has effective altruism broken bad? (The Economist)
“Rachel Currie wants people to think more about the future. Specifically, the bad future: in the producer/director’s new series, Brave New Zealand World, a thousand visions of apocalypse float across the screen. Ultra-deadly pandemics spread across the globe. Some of the estimated 13,000 nuclear warheads dotted around the world spin to point at enemy countries. Rising sea levels cause mass flooding and a new tide of displaced refugees. Humans develop generalised artificial intelligence which has the capacity to turn on its creators. But Currie thinks it’s important to talk about these existential threats, the dangers which could wipe out humanity, because they’re caused by humans – and they’re also fixable by humans. ‘None of these problems are insurmountable. These are threats we have created, and we can do something about, even if we choose not to.’” First, you have to accept the threats of the future are real (The Spinoff)
“According to Dr Cave, who also oversees the CSER (Centre for the Study of Existential Risk), population growth is not the only threat to civilisation that should be taken into account. ‘At the Centre for the Study of Existential Risk, we study the possibility of existential, civilization-scale catastrophes that could end the human race and try to mitigate them and it is something I take very seriously. Every day that civilisation doesn't collapse is a success for me and the team. But it is worth taking seriously, there are a lot of threats. New ones like AI, but also old ones like nuclear weapons have not gone away. Preserving civilisation is work.’” Elon Musk’s humanity dream shattered as expert says we can't live on Mars (Express.co.uk)
Latest policy-relevant research
Managing risks from transformative technologies
The development of the atomic bomb serves as a roadmap for understanding the impact of future transformative technologies, according to a new research paper by Toby Ord. There is much we do not know about how these technologies will develop, which makes it harder to ensure that their development is safe and beneficial for humanity. The history of the atomic bomb offers many insights that may be useful as scientists and engineers today strive to develop new transformative technologies, such as artificial intelligence, synthetic biology or nanotechnology. (14 November 2022)
Policy comment: Policy efforts to reduce the global catastrophic risk of transformative technologies will need to differ depending on whether research and development is led from inside or outside government. Where governments drive the development of transformative technologies, as in the case of the atomic bomb, the risk emerges from secrecy, race dynamics, groupthink and other poor epistemic conditions for considering risk. But many transformative technologies, such as AI and synthetic biology, are driven by industry and academia. The policy challenge is to enable safe development and positive applications of the technology while discouraging or punishing risky behavior. The same technology will require different policy responses depending on the actor leading its development. If government-led, increasing transparency and verification between countries might reduce fear and miscalculation. If industry-led, incentives and regulation might help tip the balance away from risk and towards safety. Better science-policy links could also help inform more tailored and targeted policy measures.
Finding cross-cutting solutions to AI risk
As long as AI systems inhabit the small world of neoclassical Bayesian utility-maximizing agents, they are restricted and pose little threat of misalignment or of a hard take-off resulting in a Singularity or existential risk, according to a discussion paper for the IZA Institute of Labour Economics. Future research avenues include elaborating economic growth models to explore the possibility of an AI-induced growth collapse, examining the physical limits of growth, and sharpening the tools for drawing out the policy implications of fat-tailed catastrophic risks. Economists should contribute more to existential risk studies. (22 November 2022)
Policy comment: Supporting research at the intersection of AI risk and economics might help identify important economic drivers and solutions, such as promoting cooperation and coordination between competing AI labs and the firms that use AI. Better economic models of AI-induced growth might also be needed to understand and avoid certain collapse scenarios. Funding economic research on AI development and progress could also improve institutional decision-making on long-term questions about humanity’s future. Governments should include economists in policy-making processes around AI, beyond simple assessments of job creation and loss. Economists could provide useful insights on industry policy, innovation policy, and liability and insurance.
Human Factors and Ergonomics (HFE) management makes critical systems safer and could play a major role in the design of safe and ethical AGI, according to a conference paper by researchers from Australia and the UK. HFE is the application of psychological and physiological principles to the engineering and design of products, processes, and systems. Using HFE methods, the researchers found that the most critical risks were not due to poor AI performance, but arose when the AI pursues goals at the expense of other system values, or when the AI becomes ‘super-intelligent’ and humans can no longer manage it. (1 November 2022)
Policy comment: AI safety requires a trans-disciplinary approach, drawing on experts from across computer science, economics, psychology, philosophy, HFE, systems engineering, safety science and risk management. Governments must bring together experts from across the academic, private and public sectors to develop AI safety measures and controls. Existential and global catastrophic risk research institutes could set a strong example for government by establishing a ‘task-force’ that develops tailored, specific and robust policy recommendations from a range of disciplines, based on clearly defined use-cases.
Delaying the risk of engineered pandemics
The deliberate and simultaneous release of many pandemic viruses across travel hubs could threaten the stability of civilisation, according to a Geneva Papers Research Series paper by Kevin Esvelt. Current trends suggest that, within a decade, tens of thousands of skilled individuals will be able to access the information required to single-handedly cause new pandemics. Safeguarding civilisation from the catastrophic misuse of biotechnology requires delaying the development and misuse of pandemic-class agents while building systems capable of reliably detecting threats and preventing nearly all infections. (November 2022)
Policy comment: The paper’s ‘delay-detect-defend’ framework provides a practical strategy for governments to deal with the risks from biotechnology. The delay component, in particular, is important given the pace of technological advancement and the lack of safeguards. But it would require innovative governance approaches that many countries and international institutions are currently ill-equipped to develop. For example, methods to delay the risk could include penalties for researchers, institutions and journals responsible for research that leads to catastrophic misuse. And strong, independent oversight of life science research would create normative and security barriers to conducting risky research.
Integrating GCR into the UN system
The crossing of planetary boundaries and other global catastrophic risk events could have significant adverse effects on the global development gains, capability building, resilience, and adaptability that have been achieved through decades of international development work, according to an academic article by a CSER research affiliate. Using a scenario analysis, with planetary boundaries as one axis and GCR events as the other, four main scenarios emerge: Stable Earth, Earth under Uncertainty, Earth under Threat, and Global Collapse. Humanity currently appears inclined to veer towards the global collapse future. If it were to occur, the successful implementation of the Sustainable Development Goals (SDGs) would be unlikely. Preventative action is essential, and could be taken by creating a Planetary Boundaries goal among the follow-on goals to the SDGs. (17 November 2022)
Policy comment: Further work is needed to understand the connections and linkages between global catastrophic risk and the Sendai Framework for Disaster Risk Reduction, the Sustainable Development Goals and planetary boundaries. And it is unclear why the global collapse scenario is most likely or what it would entail. Despite the paper’s efforts, important policy questions require more thorough investigation: the relative importance of risk prevention over risk preparedness; the identification and analysis of plausible collapse futures; the implications of global catastrophic risk for the SDGs; and how to incorporate global catastrophic risk into the Sendai Framework. Researchers must provide actionable and empirically based policy implications and recommendations for the UN system. An important starting point might be for researchers and UN staff to co-create a policy research agenda for the intersection of GCR with the UN’s priorities and bodies.