Epistemic status: long-standing hunch; I should really dig into this.

Imagine three catastrophes:

1. 1% of humanity die (~78 million people)
2. 90% of humanity die (~7 billion people)
3. 100% of humanity die (~7.8 billion people)

We’ll stipulate that event (2) is a global catastrophe from which civilisation could eventually recover, while event (3) is an existential catastrophe—the end of the road.

Parfit ends *Reasons and Persons* by pointing out that the difference in badness between (2) and (3) is far, far greater than the difference between (1) and (2), because (3) entails the non-existence of all future generations too. As a matter of axiology, this claim seems hard to deny, unless:

a. you endorse a positive discount rate for welfare.
b. you hold a person-affecting view which says that adding a happy person to the world does not make it better.

For now, let’s assume that Parfit is right.

How big is the difference between (2) and (3)? Good question. For now, it’s enough to say “very, very big” (see Appendix 1).

Impartial axiology should inform our practical priorities—explicitly or not—but it does not determine them. To actually decide what to do, we need to look at how the world is, and factor in our partial values to some degree.

In what follows, when I say “global catastrophic risk”, just assume I mean any catastrophe that causes the death of >1% of the human population (but that does not cause the end of civilisation). A focus on deaths may omit some important catastrophic risks, but we can ignore this here.

## Catastrophic risk: underrated or overrated?

I wonder if some people who were rightly impressed by Parfit’s arguments about the axiology, and Bostrom’s calculations about the potential size of the future, have given these considerations too much weight when making their lists of practical priorities.

Below I’ll list some considerations that might push us to think the difference in practical importance between (2) and (3) is much smaller than the difference suggested by consideration of the axiology and the size of the future.

### Empirical considerations

* Global catastrophes seem very likely to occur this century. Sadly, “very” here means at least 80%; the best estimate is perhaps even higher.
* In the 20th century, the Spanish Flu, WW1 and WW2 each caused the deaths of more than 1% of the human population.
* If we narrow the definition of global catastrophe a bit, such that the lower bound is “5% of the human population”, we’d probably still end up with a probability well over 50%.
* Bear in mind that Toby Ord forecasts total existential risk within the next 100 years at 1/6, even after taking into account his best guesses about what humanity will do to mitigate it.
* Probability estimates about catastrophic risk seem more robust than the existential risk estimates (and if they’re not, a path to making them more robust seems within easy reach).
* Catastrophic risks may be more tractable than existential risks (e.g. it’s easier to get people to care; mechanisms and feedback loops are somewhat easier to understand).
* Our ability to recover from catastrophes is very uncertain. What seem like “merely” catastrophic events may actually be existential. Global catastrophes such as a great power war or extreme climate change may be among the largest “existential risk factors”.
* Timing: you might think that better opportunities to work on existential risk lie in the future, and that we already have plenty of resources to spend on them (such that saving to spend on existential risk mitigation seems less attractive than spending on catastrophic risk mitigation).
* Complementarity: there might be less of a trade-off between the two issues than one might suppose: many global catastrophic risks seem to be existential risk factors. Ord expresses concern about complacency here, though (see Appendix 2).
* Neglectedness: what are the best estimates for global resources allocated to catastrophic risk mitigation?

### Theoretical / procedural considerations

* Simple expected value may be pragmatically inappropriate for thinking about existential risk. Issues here:
  * Low probability of extreme outcomes.
  * Managing ongoing risk of ruin.
  * @TODO I have Gigerenzer / Taleb evolutionary rationality stuff in mind here; flesh this out.
* There are [various reasons](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) it’s unwise to take expected value calculations literally.
* Moral uncertainty: various uncertainties about axiology and normative ethics might substantially narrow the gap between (2) and (3), though it seems unlikely they would flip the ordering. Some candidates:
  * Impartial axiology probably does not determine ideal action: there is surely some level at which we should bear our partial values in mind (some notes on this in [[Value, reasons, self]]). Seems this could shift things a bunch.
  * Person-affecting views may have something to them.
  * Proximity in space or time may actually count for something, despite the apparently striking consensus among professional philosophers.
  * There might be something morally wrong about simple expected value calculation.
* Personal reasons: we are the 90%!

Related discussion:

* https://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/
* What else? @todo search.

## Appendix 1: just how large is the difference in badness between (2) and (3)?

For the empirical side of this question, the key thing seems to be to estimate how much value there could be in the future. To get a rough handle on that, we could estimate how many descendants we could potentially have, then add the extra ~0.8 billion deaths that (3) involves over (2). This implies a **vast** difference between (2) and (3): on Nick Bostrom’s most conservative estimate, the gap is at least 7 orders of magnitude larger than the ~0.8 billion difference in immediate deaths alone (see the rough check in Appendix 3 below). Here’s Bostrom with his envelope:

> It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.
>
> One gets a large number even if one confines one’s consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exists for at least 10^16 human lives of normal duration. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress.
>
> However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total.
> One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years.
>
> Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations).
>
> If we make the less conservative assumption that future civilizations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realized.
>
> Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere *one millionth of one percentage point* is at least a hundred times the value of a million human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere *one billionth of one billionth of one percentage point* is worth a hundred billion times as much as a billion human lives.
>
> One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any “ordinary” good, such as the direct benefit of saving 1 billion lives. And, further, that the absolute value of the *indirect* effect of saving 1 billion lives on the total cumulative amount of existential risk — positive or negative — is almost certainly larger than the positive value of the direct benefit of such an action.

## Appendix 2: Existential security factors, risk factors, and what to work on

In *The Precipice*, Toby Ord writes:

> Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help protect us. And many social ills may be existential risk factors. In other words, there may be explanations grounded in existential risk for pursuing familiar, common-sense agendas.
>
> I want to stress that this is a dangerous observation. For it risks a slide into complacency, where we substitute our goal of securing our future with other goals that may be only loosely related. Just because existential risk declines as some other goal is pursued doesn’t mean that the other goal is the most effective way to secure our future. Indeed, if the other goal is commonsensically important there is a good chance it is already receiving far more resources than are devoted to direct work on existential risk. This would give us much less opportunity to really move the needle. I think it likely that there will only be a handful of existential risk factors or security factors (such as great-power war) that really compete with the most important existential risks in terms of how effectively additional work on them helps to secure our future.
Finding these would be extremely valuable.

Toby seems concerned to prevent people who are working on some commonsensically important area from complacently telling themselves that they are doing the best they can to help reduce existential risk. This seems legit.
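
## Appendix 3: a rough check on the Appendix 1 numbers

A minimal back-of-the-envelope sketch of the “at least 7 orders of magnitude” claim in Appendix 1, assuming Bostrom’s most conservative estimate of 10^16 potential future lives and the ~7.8 billion population figure from the top of this note (the variable names are just illustrative):

```python
# Rough check on Appendix 1 (assumptions: Bostrom's most conservative
# estimate of 10^16 potential future lives; ~7.8 billion current population).
import math

population = 7.8e9            # current world population (~7.8 billion)
deaths_2 = 0.9 * population   # catastrophe (2): 90% of humanity die
deaths_3 = 1.0 * population   # catastrophe (3): 100% of humanity die
future_lives = 1e16           # Bostrom's most conservative estimate

# Difference in badness between (2) and (3), counted in lives:
immediate_gap = deaths_3 - deaths_2       # ~0.78 billion extra immediate deaths
full_gap = immediate_gap + future_lives   # plus all the potential descendants lost

print(f"immediate gap:              {immediate_gap:.2e} lives")
print(f"gap including future lives: {full_gap:.2e} lives")
print(f"ratio: ~10^{math.log10(full_gap / immediate_gap):.1f}")  # ~10^7.1
```

On these assumptions the gap between (2) and (3) is dominated entirely by the lost future lives, which is just Parfit’s and Bostrom’s point; the less conservative estimates (10^34 life-years, 10^54 emulation life-years) only widen it.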