Nick Bostrom and [[Robert Wright]] convinced me that reducing catastrophic and existential risks should be among our top global priorities. In 2013, I decided to make this the focus of my career. In 2014, I joined [80,000 Hours](https://80000hours.org), a project that helps people work on this challenge, among others.
For an introduction to Bostrom's views on this, start with [The Vulnerable World Hypothesis](https://nickbostrom.com/papers/vulnerable.pdf). The key idea, from the abstract:
> Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization.
For example:
> Advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default.
The paper begins:
> One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many balls – mostly white (beneficial) but also various shades of gray (moderately harmful ones and mixed blessings). The cumulative effect on the human condition has so far been overwhelmingly positive, and may be much better still in the future (Bostrom, 2008). The global population has grown about three orders of magnitude over the last ten thousand years, and in the last two centuries per capita income, standards of living, and life expectancy have also risen. What we haven’t extracted, so far, is a black ball: a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.
Bostrom has originated or developed many other important ideas, such as transhumanism, the reversal test, information hazards, the unilateralist's curse, and the parliamentary approach to moral uncertainty.
He founded the Future of Humanity Institute at Oxford and has played a major role in moving futurism and the study of extreme technological risk from the "lunatic fringe" into mainstream academic and policy circles.
The following sentence from his 2002 paper on existential risk hit me hard when I first read it, in the A113 seminar room at university:
> There is more scholarly work on the life-habits of the dung fly than on existential risks.
The claim was [still true](https://forum.effectivealtruism.org/posts/dvCuqKS825AqSm7fN/are-there-more-papers-on-dung-beetles-than-human-extinction) in 2018.
He’s also quite funny. For example:
<div class="video video--youtube">
<iframe src="https://www.youtube.com/embed/Yd9cf_vLviI?modestbranding=1" frameborder="0" allowfullscreen></iframe>
</div>
On his biography page, Bostrom articulates a thought that has troubled me since my mid-teens:
> I believe it is likely that we are overlooking one or more crucial considerations: ideas or arguments that might plausibly reveal the need for not just some minor course adjustment in our endeavours but a major change of direction or priority. If we have overlooked even just one such crucial consideration, then all our best efforts might be for naught—or they might even be making things worse.
Bostrom has dedicated his career to helping us discover and appreciate more crucial considerations for the future of humanity. I wish more of our best intellectuals would think as clearly and as generatively as he does, in similarly ambitious terms.
Places to start:
* I run [Radio Bostrom](https://radiobostrom.com/): audio narrations of Bostrom's papers.
* I drafted [an introduction to Nick Bostrom's work](https://forum.effectivealtruism.org/posts/gxLAsWiMvRdcYY7hT/nick-bostrom-an-introduction-early-draft).
* [The Vulnerable World Hypothesis](https://nickbostrom.com/papers/vulnerable.pdf) (or [this podcast discussion](https://www.listennotes.com/podcasts/making-sense-with/151-will-we-destroy-the-future-OZju5zDEolh/))
* [The Transhumanist FAQ](https://www.nickbostrom.com/views/transhumanist.pdf)
* [The Reversal Test: Eliminating Status Quo Bias in Applied Ethics](http://www.nickbostrom.com/ethics/statusquo.pdf)
* [Crucial Considerations](http://www.stafforini.com/blog/bostrom/)
* [NickBostrom.com](https://nickbostrom.com)