
Academic Papers:

(Incomplete list below; my CV can be found here.)

2023. Maniacs, Misanthropes, and Omnicidal Terrorists: Reassessing the Agential Risk Framework. Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference. HERE.

This paper offers a novel typology of "agential risks," i.e., the risks posed by agents who would or might push a “doomsday button” if one were within finger’s reach.

2020. Ripples on the Great Sea of Life: A Brief History of Existential Risk Studies. SSRN. Co-authored with Dr. Simon Beard. HERE.

This paper explores the history of Existential Risk Studies (ERS). While concerns about human extinction can be traced back to the 19th century, the field only emerged in the last two decades with the formal conceptualization of existential risk. Since then, there have been three distinct "waves" or research paradigms: the first built on an explicitly transhumanist and techno-utopian worldview; the second growing out of an ethical view known as "longtermism" that is closely associated with the Effective Altruism movement; and the third emerging from the interface between ERS and other fields that have engaged with existential risk, such as Disaster Studies, Environmental Science, and Public Policy. In sketching the evolution of these paradigms, together with their historical antecedents, we offer a critical examination of each and speculate about where the field may be heading in the future.

2020. Assessing Climate Change’s Contribution to Global Catastrophic Risk. Futures. Co-authored with Simon Beard, Lauren Holt, Shahar Avin, Asaf Tzachor, Luke Kemp, and Haydn Belfield.

Many have claimed that climate change is an imminent threat to humanity, but there is no way to verify such claims. This is concerning, especially given the prominence of some of these claims and the fact that they are confused with other, well-verified and settled aspects of climate science. This paper seeks to build an analytical framework to help explore climate change's contribution to Global Catastrophic Risk (GCR), including the role of its indirect and systemic impacts.

2020. Identifying and Assessing the Drivers of Global Catastrophic Risk: A Review and Proposal for the Global Challenges Foundation. Global Challenges Foundation. Co-authored with Simon Beard. HERE.

This report reviews and assesses methods and approaches for evaluating the drivers of global catastrophic risk. The review contains five sections: (1) A conceptual overview setting out our understanding of the concept of global catastrophic risks (GCRs), their drivers, and how they can be assessed. (2) A brief historical overview of the field of GCR research, indicating how our understanding of the drivers of GCRs has developed. (3) A summary of existing studies that seek to quantify the drivers of GCR by assessing the likelihood that different causes will precipitate a global catastrophe. (4) A critical evaluation of the usefulness of this research given the conceptual framework outlined in section 1, and a review of emerging conceptual, evaluative, and risk assessment tools that may allow for better assessments of the drivers of GCRs in the future. (5) A proposal for how the Global Challenges Foundation could most productively work to improve our understanding of the drivers of GCRs given the findings of sections 2, 3, and 4.

2020. Can Anti-Natalists Oppose Human Extinction? South African Journal of Philosophy. HERE.

Argues that there is no contradiction between believing that procreation is morally wrong and (highly) valuing the long-term survival of humanity. The link between these two ideas is the possibility of life-extension technologies that could enable a "final generation" to live indefinitely. The paper then examines a number of interesting implications of "no-extinction anti-natalism" that concern personal identity, mind-uploading, and becoming posthuman.

2019. Existential Risks: A Philosophical Analysis. Inquiry: An Interdisciplinary Journal of Philosophy. HERE.

Critically examines five definitions of "existential risk" in the scholarly literature. The paper ultimately argues for a pluralistic approach that prescribes the use of different definitions depending on the particular context of use. For example, "existential risks as a significant loss of expected value" is, I argue, the best technical definition, while "existential risks as human extinction or civilizational collapse" is, I suggest, best-suited for lower-level discussions about the long-term future of humanity with the general public.

2019. The Possibility and Risks of Artificial General Intelligence. Bulletin of the Atomic Scientists. HERE.

This paper offers an overview of the risks posed by artificial general intelligence, and especially seed AIs that could rapidly augment their problem-solving capabilities through recursive self-improvement. It also provides some reasons for worrying about "AI denialism," or the irrational dismissal of the potential existential hazards that superintelligent machines could pose.

2018. Facing Disaster: The Great Challenges Framework. Foresight. HERE.

Offers a novel conceptual mechanism for prioritizing global-scale threats to human prosperity and survival. The paper then provides a wide-ranging and highly detailed survey of risks associated with climate change, biodiversity loss, emerging technologies, and machine superintelligence, as well as a number of important subsidiary issues that are currently under-explored in the technical literature.

2018. Long-Term Trajectories of Human Civilization. Foresight. Co-authored with Seth Baum (lead author), Stuart Armstrong, Timoteus Dahlberg, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs Maas, James Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Alexei Turchin, and Roman Yampolskiy. HERE. This paper won Foresight's 2020 Literati Award for Outstanding Paper.

This paper examines four possible future trajectories of civilization, namely, status quo, catastrophe, technological transformation, and astronomical trajectories. It argues that status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.

2018. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. In Artificial Intelligence Safety and Security (ed. Roman Yampolskiy). HERE.

Argues that dual-use emerging technologies could fatally undercut the social contract upon which states are founded, and that the only apparent way to rectify this situation is to establish a “post-singularity social contract” between humanity and a superintelligent singleton.

2018. Space Colonization and Suffering Risks: Reassessing the Maxipok Rule. Futures. HERE.

Drawing from evolutionary biology, transhumanist studies, international relations theory, and cosmology, this paper outlines an argument for believing that a colonized universe would be replete with constant, catastrophic conflicts. In other words, colonizing space is very likely a good way to instantiate a suffering risk, or s-risk, by causing astronomical amounts of misery.

2018. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Technical Report 2, Version 1.2. HERE.

Offers a point-by-point critique of one part of Pinker’s chapter on “existential threats” in Enlightenment Now. I discover multiple mined quotes that Pinker uses to suggest more or less the opposite of what the original authors intended; cherry-picked data from Pinker’s own sources; misleading statements about views that Pinker disagrees with; some outright false assertions; and a general disregard for the most serious scholarship on existential risks and related matters. I claim that it would be unfortunate if Enlightenment Now were to shape future discussions about our existential predicament.

2018. Who Would Destroy the World? Omnicidal Agents and Related Phenomena. Aggression and Violent Behavior. HERE.

Provides a careful look at numerous actual agents throughout history who would almost certainly have brought about an existential catastrophe if only the technological means had been available to them. The aim is to show that there really are people in the world who would happily push a “doomsday button” if one were within finger’s reach. This paper is intended to be part 2 of the more theoretical paper, “Agential Risks and Information Hazards.”

2018. Agential Risks and Information Hazards: An Unavoidable but Dangerous Topic? Futures. HERE.

Outlines a six-part typology of human and non-human agents who, if given the opportunity, would almost certainly bring about an existential catastrophe. The paper then examines a number of reasons why this topic is important (though neglected), and why the study of "agential risks" deserves to be its own subfield of existential risk studies.

2009. Transhumanism, Progress, and the Future. Journal of Evolution and Technology. HERE. [See link for abstract.]

2009. A Modified Conception of Mechanisms. Erkenntnis. HERE. [See link for abstract.]

* * * *
