
My research has so far focused on the following topics:

(1) Omnicidal agents (or "agential risks"): who exactly would destroy the world if it were possible? Which state or nonstate actors would willingly push a "doomsday button" if one were within reach?

(2) The history of the idea of "human extinction": when did the idea that humanity could go extinct in the secular/biological sense first emerge, and why? What can we predict about the future of this idea in an increasingly religious world? (Discussed in a forthcoming book, with Tom Moynihan, titled A Brief History of Human Extinction.)

(3) Conceptual foundations of existential risk studies: what exactly are "existential risks"? Which theoretical models of existential risks, and which definitions, should the research community adopt as it becomes a mature science?

(4) The ethics of human extinction: how bad would human extinction be? Does our answer to this question depend on which ethical theories we adopt, and if so, how and to what extent?

(5) Can the social contract survive the development of dual-use emerging technologies?: what happens when individuals acquire the same capacity for violence as states? Can the modern state system remain intact when virtually anyone, anywhere, can attack anyone else, anywhere else?

(6) Global priorities: which existential hazards should we prioritize? Should we be as worried about climate change as about artificial intelligence? Or do some risks pose far greater dangers than others?

(7) Colonizing space: what reasons do we have for thinking that colonizing space will decrease rather than foment conflict? Could spreading into the solar system, galaxy, and beyond result in astronomical amounts of suffering?

(8) Omnicide and international criminal law: is omnicide a type of crime against humanity or genocide? Is current international criminal law sufficient for dealing with the growing threat of groups or individuals unilaterally causing human extinction?

(9) Anti-natalism and immortalism: is it ethically acceptable to create new people? If not, does this necessarily entail that it would be best if humanity were to go extinct? I don't believe so.
