
Academic Papers:

Transhumanism, Effective Altruism, and Systems Theory: A History and Analysis of Existential Risk Studies. Under review. HERE.

Global Challenges Foundation (GCF) Report on the Drivers of Global Catastrophic Risks. Co-authored with Simon Beard. In progress.

Agential Risks and the Apocalyptic Residual. Revise and resubmit. HERE.

This brief paper argues that Nick Bostrom’s term “the apocalyptic residual” will, if widely adopted by scholars, foment confusion about one of the most important issues within the budding field of existential risk studies: agential risks. I explain why this term is problematic, and argue that the word “apocalyptic” should be reserved for, and only for, agents who are explicitly motivated by religious ideologies.

Can Anti-Natalists Oppose Human Extinction? Revise and resubmit. South African Journal of Philosophy. HERE.

Argues that there is no contradiction in believing that procreation is morally wrong and valuing (highly) the long-term survival of humanity. The link between these two ideas is the possibility of life-extension technologies that could enable a "final generation" to live indefinitely long. The paper then examines a number of interesting implications of "no-extinction anti-natalism" that concern personal identity, mind-uploading, and becoming posthuman.

On Being Afraid of Monsters: Assessing the Greatest Future Risks to Human Survival and Prosperity. Revise and resubmit. World Futures.

Humanity currently faces a number of existential risks that were unknown to people just a few decades ago. Thus, might our descendants face even more existential risks of which we are currently unaware? The present paper examines the particular issue of unknown threats to human survival and prosperity, a phenomenon that I call “monsters.” It offers a comprehensive account of the varieties of the unknown, followed by a historical argument for why monsters could pose (by far) the greatest threat to our lineage in the coming years, decades, and centuries.

International Criminal Law and the Future of Humanity: Toward a Theory of the Crime of Omnicide. Under review. Draft HERE.

Argues that current international criminal law should be augmented to include omnicide, or the intentional destruction of humanity. Claims that omnicide is not a special case of crimes against humanity or genocide, but is distinct from both. I further argue that establishing a specialized convention on omnicide, an Omnicide Convention, is urgent given the exponential development of dual-use emerging technologies, which could enable a large number of state and nonstate actors to unilaterally bring about the extinction of humanity.

A Brief History of the Idea of Human Extinction. Under review. (Ask me for a copy.)

Provides a comprehensive exploration of the origin and evolution of the concept of human extinction. The paper consists of three main sections: first, a history of the concept of humanity; second, a history of the concept of extinction; and third, a history of the conjunction of these ideas. I argue that human extinction is a surprisingly recent addition to our shared conceptual repertoire, and then examine the importance of this idea within the nascent field of "existential risk studies."

How Bad Would Human Extinction Be? Convergent Arguments for Making the Avoidance of Human Extinction a Top Global Priority. Under review. Co-authored with Dr. Simon Beard. Draft HERE.

This paper synthesizes a wide range of distinct ideas that all converge upon the evaluative proposition that human extinction, if it were to occur, would constitute an immense tragedy. It examines and analyzes four general classes of arguments for why human extinction would be very bad, if not the worst outcome imaginable: “anti-extinction views,” “further loss views,” “meaning of life arguments,” and “irreversibility arguments.”

* * * *

(18) 2019. Existential Risks: A Philosophical Analysis. Inquiry: An Interdisciplinary Journal of Philosophy. HERE.

Critically examines five definitions of "existential risk" in the scholarly literature. The paper ultimately argues for a pluralistic approach that prescribes the use of different definitions depending on the particular context of use. For example, "existential risks as a significant loss of expected value" is, I argue, the best technical definition, while "existential risks as human extinction or civilizational collapse" is, I suggest, best suited for lower-level discussions with the general public about the long-term future of humanity.

(17) 2019. The Possibility and Risks of Artificial General Intelligence. Bulletin of the Atomic Scientists. HERE.

This paper offers an overview of the risks posed by artificial general intelligence, and especially seed AIs that could rapidly augment their problem-solving capabilities through recursive self-improvement. It also provides some reasons for worrying about "AI denialism," or the irrational dismissal of the potential existential hazards that superintelligent machines could pose.

(16) 2019. The Future of War: Could Lethal Autonomous Weapons Make Conflict More Ethical? AI & Society. Co-authored with Steven Umbrello (lead author) and Angelo F. De Bellis. HERE.

This paper assesses the current arguments for and against the use of lethal autonomous weapons (LAWs). Particular attention is given to "ethical LAWs," which are artificially intelligent weapons systems that make decisions within the bounds of an ethics-based code. It concludes that insofar as genuinely ethical LAWs are possible, they should replace humans on the battlefield.

(15) 2018. Facing Disaster: The Great Challenges Framework. Foresight. HERE.

Offers a novel conceptual mechanism for prioritizing global-scale threats to human prosperity and survival. The paper then provides a wide-ranging and highly detailed survey of risks associated with climate change, biodiversity loss, emerging technologies, and machine superintelligence, as well as a number of important subsidiary issues that are currently under-explored in the technical literature.

(14) 2018. Long-Term Trajectories of Human Civilization. Foresight. Co-authored with Seth Baum (lead author), Stuart Armstrong, Timoteus Dahlberg, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs Maas, James Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Alexei Turchin, and Roman Yampolskiy. HERE.

This paper examines four possible future trajectories of civilization, namely, status quo, catastrophe, technological transformation, and astronomical trajectories. It argues that status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.

(13) 2018. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. In Artificial Intelligence Safety and Security (ed. Roman Yampolskiy). HERE.

Argues that dual-use emerging technologies could fatally undercut the social contract upon which states are founded, and that the only apparent way to rectify this situation is to establish a “post-singularity social contract” between humanity and a superintelligent singleton.

(12) 2018. Space Colonization and Suffering Risks: Reassessing the Maxipok Rule. Futures. HERE.

Drawing from evolutionary biology, transhumanist studies, international relations theory, and cosmology, this paper outlines an argument for believing that a colonized universe would be replete with constant, catastrophic conflicts. In other words, colonizing space is very likely a good way to instantiate a suffering risk, or s-risk, by causing astronomical amounts of misery.

(11) 2018. A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now. Technical Report 2, Version 1.2. HERE.

Offers a point-by-point critique of one part of Pinker’s chapter on “existential threats” in Enlightenment Now. I discover multiple mined quotes that Pinker uses to suggest more or less the opposite of what the original authors intended; cherry-picked data from Pinker’s own sources; misleading statements about views that Pinker disagrees with; some outright false assertions; and a general disregard for the most serious scholarship on existential risks and related matters. I claim that it would be unfortunate if Enlightenment Now were to shape future discussions about our existential predicament.

(10) 2018. Who Would Destroy the World? Omnicidal Agents and Related Phenomena. Aggression and Violent Behavior. HERE.

Provides a careful look at numerous actual agents throughout history who would almost certainly have brought about an existential catastrophe if only the technological means had been available to them. The aim is to show that there really are people in the world who would happily push a “doomsday button” if one were within finger’s reach. This paper is intended to be part 2 of the more theoretical paper, “Agential Risks and Information Hazards.”

(9) 2018. Agential Risks and Information Hazards: An Unavoidable but Dangerous Topic? Futures. HERE.

Outlines a six-part typology of human and non-human agents who, if given the opportunity, would almost certainly bring about an existential catastrophe. The paper then examines a number of reasons that this topic is important (although neglected), and why the study of “agential risks” deserves to be its own subfield of existential risk studies.

(8) 2017. Moral Bioenhancement and Agential Risks: Good and Bad Outcomes. Bioethics. HERE.

Argues that, when examined through the framework of agential risks, the moral bioenhancement program that Persson and Savulescu advocate could backfire catastrophically by actually exacerbating the threat posed by certain types of omnicidal agents.

(7) 2016. Agential Risks: A Comprehensive Introduction. Journal of Evolution and Technology. HERE.

This is the first paper to examine the issue of agential risks. It serves as the foundation for much more detailed and sophisticated treatments in Morality, Foresight, and Human Flourishing, “Agential Risks and Information Hazards,” and “Who Would Destroy the World?”

(6) 2011. Emerging Technologies and the Future of Philosophy. Metaphilosophy. HERE.

Argues that humanity is confronting two distinct cognitive problems: first, the complexity of the world is simply too great for any one person to navigate it competently, and second, fields like quantum mechanics and philosophy may be focused on puzzles with respect to which we are cognitively closed. The paper then examines whether cognitive enhancement technologies could resolve these issues.

(5) 2011. Technology and our epistemic situation: what ought our priorities to be? Foresight. HERE.

(4) 2010. Risk Mysterianism and Cognitive Boosters. Journal of Future Studies. HERE.

(3) 2010. Review of Minds and Computers. Techne: Research in Philosophy and Technology. HERE.

(2) 2009. Transhumanism, Progress, and the Future. Journal of Evolution and Technology. HERE.

(1) 2009. A Modified Conception of Mechanisms. Erkenntnis. HERE.

Scholarly (non-peer reviewed) Papers:

Crimes Without a Name: On Global Governance and Existential Risks. Forthcoming. HERE.

November 2018. The "Post-Singularity Social Contract" and Bostrom's "Vulnerable World Hypothesis." LessWrong. HERE.

October 2017. Why Superintelligence Is a Threat That Should Be Taken Seriously. Bulletin of the Atomic Scientists. HERE.

September 2017. Cosmic Rays, Gravitational Anomalies, and the Simulation Hypothesis. HERE.

August 2017. Omnicidal Agents and Related Phenomena. Working draft (long version) HERE.

August 2017. How Religious and Non-Religious People View the Apocalypse. Bulletin of the Atomic Scientists. HERE.

December 2016. Who Would Destroy the World? Bulletin of the Atomic Scientists. HERE.

September 2016. The Clash of Eschatologies: The Role of End-Times Thinking in World History. Skeptic. HERE.

September 2016. It Matters Which Trend Lines One Follows: Why Terrorism Is an Existential Threat. Free Inquiry. (Also, There's No Time to Wait. Both are responses to Michael Shermer.) HERE.

September 2016. How likely is an existential catastrophe? Bulletin of the Atomic Scientists. HERE.

August 2016. Being Alarmed Is Not the Same as Being an Alarmist. Future of Life Institute. HERE.

July 2016. Agential Risks: A New Direction for Existential Risk Scholarship. Technical Report. HERE.

July 2016. Climate Change Is the Most Urgent Existential Risk. Future of Life Institute. HERE.

June 2016. Apocalypse Soon? How Emerging Technologies, Population Growth, and Global Warming Will Fuel Apocalyptic Terrorism in the Future. Skeptic. HERE.

June 2016. Existential Risks Are More Likely to Kill You Than Terrorism. Future of Life Institute. HERE.

June 2016. The Collective Intelligence of Women Could Save the World. Future of Life Institute. HERE.

May 2016. Three Minutes Before Midnight: An Interview with Lawrence Krauss About the Future of Humanity. Free Inquiry. HERE.

April 2016. Biodiversity loss: An existential risk comparable to climate change. Bulletin of the Atomic Scientists. HERE.

Reposted by the Future of Life Institute. HERE.

Book Reviews:

2018. Phil Torres’s Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. By Steven Umbrello. Futures. HERE.

2016. Review of Phil Torres's The End: What Science and Religion Tell Us About the Apocalypse. By John Messerly. The Meaning of Life. HERE. Reposted in Ethical Technology. HERE.