Academic Papers:

Is Existential Risk a Useless Category? Could the Concept Be Dangerous? Under review. HERE.

This paper offers a number of reasons why the Bostromian notion of existential risk is useless. On the one hand, it is predicated on a highly idiosyncratic techno-utopian vision of the future that few would find appealing. On the other, its “worst-case outcomes” for humanity lump together everything from the atrocious to the benign. What matters, on Bostrom’s view, is not human extinction per se, but any event that would permanently prevent current or future people from attaining technological Utopia. I then consider the question of whether the Bostromian paradigm could be dangerous. My answer is affirmative: this perspective combines utopianism and utilitarianism, and historically that has proven to be a highly combustible mix. When the ends justify the means, and when the end is paradise, groups or individuals may feel justified in contravening any number of moral constraints on human behavior, including those that proscribe violent actions. Although I believe that studying low-probability, high-impact risks is extremely important, I urge scholars to abandon the Bostromian concept of existential risk.

Existential Virtue Ethics: A Novel Approach To Thinking About Human Extinction. Under review.

This paper offers a novel approach to thinking about human extinction that I call “existential virtue ethics,” on the model of “ecological (or environmental) virtue ethics.” It employs the extensionist and exemplarist strategies, in particular, to construct a list of candidate “existential virtues,” or character traits that are virtuous within the specific domain of “ensuring human survival.” Although scientists have fretted about a naturalistic end to humanity since the second law of thermodynamics was discovered in the 1860s, the topic has received almost no attention from philosophers—especially virtue ethicists—to date. The present paper aims to rectify this situation, if only by stimulating further philosophical exploration.

Transhumanism, Effective Altruism, and Systems Theory: A History and Analysis of Existential Risk Studies. Under review. HERE.

Global Challenges Foundation (GCF) Report on the Drivers of Global Catastrophic Risks. Co-authored with Simon Beard. Forthcoming.

Agential Risks and the Apocalyptic Residual. Revise and resubmit. HERE.

This brief paper argues that Nick Bostrom’s term “the apocalyptic residual” will, if widely adopted by scholars, foment confusion about one of the most important issues within the budding field of existential risk studies: agential risks. I explain why this term is problematic, and argue that the word “apocalyptic” should be reserved for, and only for, agents who are explicitly motivated by religious ideologies.

On Being Afraid of Monsters: Assessing the Greatest Future Risks to Human Survival and Prosperity. Revise and resubmit, World Futures.

Humanity currently faces a number of existential risks that were unknown to people just a few decades ago. Might our descendants, then, face even more existential risks in the future than we are currently aware of? The present paper examines the particular issue of unknown threats to human survival and prosperity, a phenomenon that I call “monsters.” It offers a comprehensive account of the varieties of the unknown, followed by a historical argument for why monsters could pose (by far) the greatest threat to our lineage in the coming years, decades, and centuries.

“The Greatest of Conceivable Crimes”: Convergent Arguments for Prioritizing the Survival of Humanity. Under review. Co-authored with Dr. Simon Beard.

This paper synthesizes a wide range of distinct ideas that all converge upon the evaluative proposition that human extinction, if it were to occur, would constitute an immense tragedy. It examines and analyzes four general classes of arguments for why human extinction would be very bad, if not the worst outcome imaginable: “anti-extinction views,” “further loss views,” “meaning of life arguments,” and “irreversibility arguments.”

* * * *

(19) 2020 (forthcoming). Can Anti-Natalists Oppose Human Extinction? South African Journal of Philosophy. HERE.

Argues that there is no contradiction in believing that procreation is morally wrong while also placing great value on the long-term survival of humanity. The link between these two ideas is the possibility of life-extension technologies that could enable a "final generation" to live indefinitely long. The paper then examines a number of interesting implications of "no-extinction anti-natalism" concerning personal identity, mind-uploading, and becoming posthuman.

(18) 2019. Existential Risks: A Philosophical Analysis. Inquiry: An Interdisciplinary Journal of Philosophy. HERE.

Critically examines five definitions of "existential risk" in the scholarly literature. The paper ultimately argues for a pluralistic approach that prescribes the use of different definitions depending on the particular context of use. For example, "existential risks as a significant loss of expected value" is, I argue, the best technical definition, while "existential risks as human extinction or civilizational collapse" is, I suggest, best-suited for lower-level discussions about the long-term future of humanity with the general public.

(17) 2019. The Possibility and Risks of Artificial General Intelligence. Bulletin of the Atomic Scientists. HERE.

This paper offers an overview of the risks posed by artificial general intelligence, especially seed AIs that could rapidly augment their problem-solving capabilities through recursive self-improvement. It also provides some reasons for worrying about "AI denialism," or the irrational dismissal of the potential existential hazards that superintelligent machines could pose.

(16) 2019. The Future of War: Could Lethal Autonomous Weapons Make Conflict More Ethical? AI & Society. Co-authored with Steven Umbrello (lead author) and Angelo F. De Bellis. HERE.

This paper assesses the current arguments for and against the use of lethal autonomous weapons (LAWs). Specific interest is given to "ethical LAWs," which are artificially intelligent weapons systems that make decisions within the bounds of an ethics-based code. It concludes that insofar as genuinely ethical LAWs are possible, they should replace humans on the battlefield.

(15) 2018. Facing Disaster: The Great Challenges Framework. Foresight. HERE.

Offers a novel conceptual mechanism for prioritizing global-scale threats to human prosperity and survival. The paper then provides a wide-ranging and highly detailed survey of risks associated with climate change, biodiversity loss, emerging technologies, and machine superintelligence, as well as a number of important subsidiary issues that are currently under-explored in the technical literature.

(14) 2018. Long-Term Trajectories of Human Civilization. Foresight. Co-authored with Seth Baum (lead author), Stuart Armstrong, Timoteus Dahlberg, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs Maas, James Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Alexei Turchin, and Roman Yampolskiy. HERE.

This paper examines four possible future trajectories of civilization, namely, status quo, catastrophe, technological transformation, and astronomical trajectories. It argues that status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.

(13) 2018. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. In Artificial Intelligence Safety and Security (ed. Roman Yampolskiy). HERE.

Argues that dual-use emerging technologies could fatally undercut the social contract upon which states are founded, and that the only apparent way to rectify this situation is to establish a “post-singularity social contract” between humanity and a superintelligent singleton.

(12) 2018. Space Colonization and Suffering Risks: Reassessing the Maxipok Rule. Futures. HERE.

Drawing from evolutionary biology, transhumanist studies, international relations theory, and cosmology, this paper outlines an argument for believing that a colonized universe would be replete with constant, catastrophic conflicts. In other words, colonizing space is very likely a good way to instantiate a suffering risk, or s-risk, by causing astronomical amounts of misery.

(11) 2018. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Technical Report 2, Version 1.2. HERE.

Offers a point-by-point critique of one part of Pinker’s chapter on “existential threats” in Enlightenment Now. I document multiple quotes mined out of context, which Pinker uses to suggest more or less the opposite of what the original authors intended; data cherry-picked from Pinker’s own sources; misleading statements about views with which Pinker disagrees; several outright false assertions; and a general disregard for the most serious scholarship on existential risks and related matters. I claim that it would be unfortunate if Enlightenment Now were to shape future discussions about our existential predicament.

(10) 2018. Who Would Destroy the World? Omnicidal Agents and Related Phenomena. Aggression and Violent Behavior. HERE.

Provides a careful look at numerous actual agents throughout history who would almost certainly have brought about an existential catastrophe if only the technological means had been available to them. The aim is to show that there really are people in the world who would happily push a “doomsday button” if one were within reach. This paper is intended as a sequel (part 2) to the more theoretical paper “Agential Risks and Information Hazards.”

(9) 2018. Agential Risks and Information Hazards: An Unavoidable but Dangerous Topic? Futures. HERE.

Outlines a six-part typology of human and non-human agents who, if given the opportunity, would almost certainly bring about an existential catastrophe. The paper then examines a number of reasons that this topic is important (although neglected), and why the study of “agential risks” deserves to be its own subfield of existential risk studies.

(8) 2017. Moral Bioenhancement and Agential Risks: Good and Bad Outcomes. Bioethics. HERE.

Argues that, when examined through the framework of agential risks, the moral bioenhancement program that Persson and Savulescu advocate could backfire catastrophically by exacerbating the threat posed by certain types of omnicidal agents.

(7) 2016. Agential Risks: A Comprehensive Introduction. Journal of Evolution and Technology. HERE.

This is the first paper to examine the issue of agential risks. It serves as the foundation for much more detailed and sophisticated treatments in Morality, Foresight, and Human Flourishing, “Agential Risks and Information Hazards,” and “Who Would Destroy the World?”

(6) 2011. Emerging Technologies and the Future of Philosophy. Metaphilosophy. HERE.

Argues that humanity is confronting two distinct cognitive problems: first, the complexity of the world is simply too great for any one person to navigate it competently, and second, fields like quantum mechanics and philosophy may be focused on puzzles with respect to which we are cognitively closed. The paper then examines whether cognitive enhancement technologies could resolve these issues.

(5) 2011. Technology and our epistemic situation: what ought our priorities to be? Foresight. HERE.

(4) 2010. Risk Mysterianism and Cognitive Boosters. Journal of Future Studies. HERE.

(3) 2010. Review of Minds and Computers. Techne: Research in Philosophy and Technology. HERE.

(2) 2009. Transhumanism, Progress, and the Future. Journal of Evolution and Technology. HERE.

(1) 2009. A Modified Conception of Mechanisms. Erkenntnis. HERE.

Scholarly (non-peer-reviewed) Papers:

Crimes Without a Name: On Global Governance and Existential Risks. Forthcoming. HERE.

November 2018. The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis." LessWrong. HERE.

October 2017. Why Superintelligence Is a Threat That Should Be Taken Seriously. Bulletin of the Atomic Scientists. HERE.

September 2017. Cosmic Rays, Gravitational Anomalies, and the Simulation Hypothesis. HERE.

August 2017. Omnicidal Agents and Related Phenomena. Working draft (long version). HERE.

August 2017. How Religious and Non-Religious People View the Apocalypse. Bulletin of the Atomic Scientists. HERE.

December 2016. Who Would Destroy the World? Bulletin of the Atomic Scientists. HERE.

September 2016. The Clash of Eschatologies: The Role of End-Times Thinking in World History. Skeptic. HERE.

September 2016. It Matters Which Trend Lines One Follows: Why Terrorism Is an Existential Threat. Free Inquiry. (Also: There's No Time to Wait. Both are responses to Michael Shermer.) HERE.

September 2016. How likely is an existential catastrophe? Bulletin of the Atomic Scientists. HERE.

August 2016. Being Alarmed Is Not the Same as Being an Alarmist. Future of Life Institute. HERE.

July 2016. Agential Risks: A New Direction for Existential Risk Scholarship. Technical Report. HERE.

July 2016. Climate Change Is the Most Urgent Existential Risk. Future of Life Institute. HERE.

June 2016. Apocalypse Soon? How Emerging Technologies, Population Growth, and Global Warming Will Fuel Apocalyptic Terrorism in the Future. Skeptic. HERE.

June 2016. Existential Risks Are More Likely to Kill You Than Terrorism. Future of Life Institute. HERE.

June 2016. The Collective Intelligence of Women Could Save the World. Future of Life Institute. HERE.

May 2016. Three Minutes Before Midnight: An Interview with Lawrence Krauss About the Future of Humanity. Free Inquiry. HERE.

April 2016. Biodiversity loss: An existential risk comparable to climate change. Bulletin of the Atomic Scientists. HERE.

Reposted by the Future of Life Institute, HERE.

Book Reviews:

2018. Phil Torres’s Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. By Steven Umbrello. Futures. HERE.

2016. Review of Phil Torres's The End: What Science and Religion Tell Us About the Apocalypse. By John Messerly. The Meaning of Life. HERE. Reposted in Ethical Technology, HERE.
