2020. Ripples on the Great Sea of Life: A Brief History of Existential Risk Studies. SSRN. Co-authored with Dr. Simon Beard. HERE.
This paper explores the history of Existential Risk Studies (ERS). While concerns about human extinction can be traced back to the 19th century, the field only emerged in the last two decades with the formal conceptualization of existential risk. Since then, there have been three distinct "waves" or research paradigms: the first built on an explicitly transhumanist and techno-utopian worldview; the second growing out of an ethical view known as "longtermism" that is closely associated with the Effective Altruism movement; and the third emerging from the interface between ERS and other fields that have engaged with existential risk, such as Disaster Studies, Environmental Science and Public Policy. In sketching the evolution of these paradigms, together with their historical antecedents, we offer a critical examination of each and speculate about where the field may be heading in the future.
2020. Assessing Climate Change’s Contribution to Global Catastrophic Risk. Futures. Co-authored with Simon Beard, Lauren Holt, Shahar Avin, Asaf Tzachor, Luke Kemp, and Haydn Belfield. (forthcoming)
Many have claimed that climate change is an imminent threat to humanity, but there is currently no way to verify such claims. This is concerning, especially given the prominence of some of these claims and the fact that they can be confused with other well-verified and settled aspects of climate science. This paper seeks to build an analytical framework to help explore climate change’s contribution to Global Catastrophic Risk (GCR), including the role of its indirect and systemic impacts.
2020. Identifying and Assessing the Drivers of Global Catastrophic Risk: A Review and Proposal for the Global Challenges Foundation. Global Challenges Foundation. Co-authored with Simon Beard. HERE.
The goal of this report is to review and assess methods and approaches for assessing the drivers of global catastrophic risk. The review contains five sections: (1) A conceptual overview setting out our understanding of the concept of global catastrophic risks (GCRs), their drivers, and how they can be assessed. (2) A brief historical overview of the field of GCR research, indicating how our understanding of the drivers of GCRs has developed. (3) A summary of existing studies that seek to quantify the drivers of GCR by assessing the likelihood that different causes will precipitate a global catastrophe. (4) A critical evaluation of the usefulness of this research given the conceptual framework outlined in section 1 and a review of emerging conceptual, evaluative and risk assessment tools that may allow for better assessments of the drivers of GCRs in the future. (5) A proposal for how the Global Challenges Foundation could work to most productively improve our understanding of the drivers of GCRs given the findings of sections 2, 3, and 4.
2020. Can Anti-Natalists Oppose Human Extinction? South African Journal of Philosophy. HERE.
Argues that there is no contradiction in believing that procreation is morally wrong and valuing (highly) the long-term survival of humanity. The link between these two ideas is the possibility of life-extension technologies that could enable a "final generation" to live indefinitely long. The paper then examines a number of interesting implications of "no-extinction anti-natalism" that concern personal identity, mind-uploading, and becoming posthuman.
2019. Existential Risks: A Philosophical Analysis. Inquiry: An Interdisciplinary Journal of Philosophy. HERE.
Critically examines five definitions of "existential risk" in the scholarly literature. The paper ultimately argues for a pluralistic approach that prescribes the use of different definitions depending on the particular context of use. For example, "existential risks as a significant loss of expected value" is, I argue, the best technical definition, while "existential risks as human extinction or civilizational collapse" is, I suggest, best-suited for lower-level discussions about the long-term future of humanity with the general public.
2019. The Possibility and Risks of Artificial General Intelligence. Bulletin of the Atomic Scientists. HERE.
This paper offers an overview of the risks posed by artificial general intelligence, especially seed AIs that could rapidly augment their problem-solving capabilities through recursive self-improvement. It also provides some reasons for worrying about "AI denialism," or the irrational dismissal of the potential existential hazards that superintelligent machines could pose.
2019. The Future of War: Could Lethal Autonomous Weapons Make Conflict More Ethical? AI & Society. Co-authored with Steven Umbrello (lead author) and Angelo F. De Bellis. HERE.
This paper assesses the current arguments for and against the use of lethal autonomous weapons (LAWs). Specific interest is given to "ethical LAWs," which are artificially intelligent weapons systems that make decisions within the bounds of an ethics-based code. It concludes that insofar as genuinely ethical LAWs are possible, they should replace humans on the battlefield.
2018. Facing Disaster: The Great Challenges Framework. Foresight. HERE.
Offers a novel conceptual mechanism for prioritizing global-scale threats to human prosperity and survival. The paper then provides a wide-ranging and highly detailed survey of risks associated with climate change, biodiversity loss, emerging technologies, and machine superintelligence, as well as a number of important subsidiary issues that are currently under-explored in the technical literature.
2018. Long-Term Trajectories of Human Civilization. Foresight. Co-authored with Seth Baum (lead author), Stuart Armstrong, Timoteus Dahlberg, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs Maas, James Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Alexei Turchin, and Roman Yampolskiy. HERE. This paper won Foresight's 2020 Literati Award for Outstanding Paper.
This paper examines four possible future trajectories of civilization, namely, status quo, catastrophe, technological transformation, and astronomical trajectories. It argues that status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.
2018. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. In Artificial Intelligence Safety and Security (ed. Roman Yampolskiy). HERE.
Argues that dual-use emerging technologies could fatally undercut the social contract upon which states are founded, and that the only apparent way to rectify this situation is to establish a “post-singularity social contract” between humanity and a superintelligent singleton.
2018. Space Colonization and Suffering Risks: Reassessing the Maxipok Rule. Futures. HERE.
Drawing from evolutionary biology, transhumanist studies, international relations theory, and cosmology, this paper outlines an argument for believing that a colonized universe would be replete with constant, catastrophic conflicts. In other words, colonizing space is very likely a good way to instantiate a suffering risk, or s-risk, by causing astronomical amounts of misery.
2018. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Technical Report 2, Version 1.2. HERE.
Offers a point-by-point critique of one part of Pinker’s chapter on “existential threats” in Enlightenment Now. I discover multiple mined quotes that Pinker uses to suggest more or less the opposite of what the original authors intended; cherry-picked data from Pinker’s own sources; misleading statements about views that Pinker disagrees with; some outright false assertions; and a general disregard for the most serious scholarship on existential risks and related matters. I claim that it would be unfortunate if Enlightenment Now were to shape future discussions about our existential predicament.
2018. Who Would Destroy the World? Omnicidal Agents and Related Phenomena. Aggression and Violent Behavior. HERE.
Provides a careful look at numerous actual agents throughout history who would almost certainly have brought about an existential catastrophe if only the technological means had been available to them. The aim is to show that there really are people in the world who would happily push a “doomsday button” if one were within finger’s reach. This paper is intended to be part 2 of the more theoretical paper, “Agential Risks and Information Hazards.”
2018. Agential Risks and Information Hazards: An Unavoidable but Dangerous Topic? Futures. HERE.
Outlines a six-part typology of human and non-human agents who, if given the opportunity, would almost certainly bring about an existential catastrophe. The paper then examines a number of reasons that this topic is important (although neglected), and why the study of “agential risks” deserves to be its own subfield of existential risk studies.
2017. Moral Bioenhancement and Agential Risks: Good and Bad Outcomes. Bioethics. HERE.
Argues that, when examined through the framework of agential risks, the moral bioenhancement program that Persson and Savulescu advocate could backfire catastrophically by actually exacerbating the threat posed by certain types of omnicidal agents.
2016. Agential Risks: A Comprehensive Introduction. Journal of Evolution and Technology. HERE.
This is the first paper to examine the issue of agential risks. It serves as the foundation for much more detailed and sophisticated treatments in Morality, Foresight, and Human Flourishing, “Agential Risks and Information Hazards,” and “Who Would Destroy the World?”
2011. Emerging Technologies and the Future of Philosophy. Metaphilosophy. HERE.
Argues that humanity is confronting two distinct cognitive problems: first, the complexity of the world is simply too great for any one person to navigate it competently, and second, fields like quantum mechanics and philosophy may be focused on puzzles with respect to which we are cognitively closed. The paper then examines whether cognitive enhancement technologies could resolve these issues.
2011. Technology and Our Epistemic Situation: What Ought Our Priorities to Be? Foresight. HERE. [See link for abstract.]
2010. Risk Mysterianism and Cognitive Boosters. Journal of Future Studies. HERE. [See link for abstract.]
2010. Review of Minds and Computers. Techne: Research in Philosophy and Technology. HERE. [See link for abstract.]
2009. Transhumanism, Progress, and the Future. Journal of Evolution and Technology. HERE. [See link for abstract.]
2009. A Modified Conception of Mechanisms. Erkenntnis. HERE. [See link for abstract.]
* * * *
Why You Should Care About the Long Term but Not Be a Longtermist. (In progress.)
International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide. HERE.
This paper argues that current international criminal law should be expanded to include omnicide, or the intentional destruction of humanity. I claim that omnicide is not a special case of crimes against humanity or genocide, but is distinct from both in a number of important ways. I further argue that establishing a specialized convention on omnicide is urgent given the exponential development of dual-use emerging technologies, which could enable a large number of state and nonstate actors to unilaterally bring about the extinction of humanity. Although I do not intend to outline a complete theory of the crime of omnicide, I do attempt to lay a foundation for future research on this important topic.
Is Existential Risk a Useless Category? Could the Concept Be Dangerous? Under review. HERE.
This paper offers a number of reasons why the Bostromian notion of existential risk is useless. On the one hand, it is predicated on a highly idiosyncratic techno-utopian vision of the future that few would find appealing. On the other, the “worst-case outcomes” for humanity that it identifies group together the atrocious with the benign. What matters, on Bostrom’s view, is not human extinction per se, but any event that would permanently prevent current or future people from attaining technological Utopia. I then consider the question of whether the Bostromian paradigm could be dangerous. My answer is affirmative: this perspective combines utopianism and utilitarianism, which has historically proven to be a highly combustible mix. When the ends justify the means, and when the end is paradise, then groups or individuals may feel justified in contravening any number of moral constraints on human behavior, including those that proscribe violent actions. Although I believe that studying low-probability, high-impact risks is extremely important, I urge scholars to abandon the Bostromian concept of existential risk.
Existential Virtue Ethics: A Novel Approach To Thinking About Human Extinction. Under review.
This paper offers a novel approach to thinking about human extinction that I call “existential virtue ethics,” on the model of “ecological (or environmental) virtue ethics.” It employs the extensionist and exemplarist strategies, in particular, to construct a list of candidate “existential virtues,” or character traits that are virtuous within the specific domain of “ensuring human survival.” Although scientists have fretted about a naturalistic end to humanity since the second law of thermodynamics was discovered in the 1860s, the topic has received almost no attention from philosophers—especially virtue ethicists—to date. The present paper aims to rectify this situation, if only by stimulating further philosophical exploration.
Transhumanism, Effective Altruism, and Systems Theory: A History and Analysis of Existential Risk Studies. Under review. HERE.
Agential Risks and the Apocalyptic Residual. Revise and resubmit. HERE.
This brief paper argues that Nick Bostrom’s term “the apocalyptic residual” will, if widely adopted by scholars, foment confusion about one of the most important issues within the budding field of existential risk studies: agential risks. I explain why this term is problematic, and argue that the word “apocalyptic” should be reserved for, and only for, agents who are explicitly motivated by religious ideologies.
On Being Afraid of Monsters: Assessing the Greatest Future Risks to Human Survival and Prosperity. Revise and resubmit. World Futures.
Humanity currently faces a number of existential risks that were unknown to people just a few decades ago. Thus, might our descendants face even more existential risks in the future than we are currently aware of? The present paper examines the particular issue of unknown threats to human survival and prosperity, a phenomenon that I call “monsters.” It offers a comprehensive account of the varieties of the unknown, followed by a historical argument for why monsters could pose (by far) the greatest threat to our lineage in the coming years, decades, and centuries.
“The Greatest of Conceivable Crimes”: Convergent Arguments for Prioritizing the Survival of Humanity. Under review. Co-authored with Dr. Simon Beard.
This paper synthesizes a wide range of distinct ideas that all converge upon the evaluative proposition that human extinction, if it were to occur, would constitute an immense tragedy. It examines and analyzes four general classes of arguments for why human extinction would be very bad, if not the worst possible outcome imaginable: “anti-extinction views,” “further loss views,” “meaning of life arguments,” and “irreversibility arguments.”
Scholarly (non-peer reviewed) Papers:
Crimes Without a Name: On Global Governance and Existential Risks. Forthcoming. HERE.
November 2018. The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis." LessWrong. HERE.
October 2017. Why Superintelligence Is a Threat that Should be Taken Seriously. Bulletin of the Atomic Scientists. HERE.
September 2017. Cosmic Rays, Gravitational Anomalies, and the Simulation Hypothesis. HERE.
August 2017. Omnicidal Agents and Related Phenomena. Working draft (long version). HERE.
August 2017. How Religious and Non-Religious People View the Apocalypse. Bulletin of the Atomic Scientists. HERE.
December 2016. Who Would Destroy the World? Bulletin of the Atomic Scientists. HERE.
September 2016. The Clash of Eschatologies: The Role of End-Times Thinking in World History. Skeptic. HERE.
September 2016. It Matters Which Trend Lines One Follows: Why Terrorism Is an Existential Threat. Free Inquiry. (Also, There's No Time to Wait. Both are responses to Michael Shermer). HERE.
September 2016. How likely is an existential catastrophe? Bulletin of the Atomic Scientists. HERE.
August 2016. Being Alarmed Is Not the Same as Being an Alarmist. Future of Life Institute. HERE.
July 2016. Agential Risks: A New Direction for Existential Risk Scholarship. Technical Report. HERE.
July 2016. Climate Change Is the Most Urgent Existential Risk. Future of Life Institute. HERE.
June 2016. Apocalypse Soon? How Emerging Technologies, Population Growth, and Global Warming Will Fuel Apocalyptic Terrorism in the Future. Skeptic. HERE.
June 2016. Existential Risks Are More Likely to Kill You Than Terrorism. Future of Life Institute. HERE.
June 2016. The Collective Intelligence of Women Could Save the World. Future of Life Institute. HERE.
May 2016. Three Minutes Before Midnight: An Interview with Lawrence Krauss About the Future of Humanity. Free Inquiry. HERE.
April 2016. Biodiversity loss: An existential risk comparable to climate change. Bulletin of the Atomic Scientists. HERE.
Reposted by the Future of Life Institute, HERE.
2018. Phil Torres’s Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. By Steven Umbrello. Futures. HERE.